The Year 2000 problem exists because the data that computers store and process often use only the last two digits to designate the year. On January 1, 2000, such systems may mistake data referring to 2000 as meaning 1900, possibly leading to numerous errors and disruptions in processing. Financial markets in the United States and countries throughout the world are highly dependent upon the accurate transmission of electronic information, and thus the systems they use must be readied to correctly process Year 2000 dates. If the computer systems used by foreign financial markets and institutions are not ready for the Year 2000 date change, U.S. financial institutions could be adversely affected in various ways. If systems used by foreign markets fail, U.S. institutions may not be able to alter their holdings of foreign financial assets. Market closures and the resulting uncertainty could also cause dramatic drops in prices, thereby producing large losses for U.S. institutions holding such assets. If foreign financial institutions’ systems are not ready, U.S. institutions may not be able to access financial assets held by their foreign business partners and customers or may not receive payments owed by such institutions. If the systems used by foreign infrastructure providers, including those providing telecommunications, power, water, and other services, are not also ready for 2000, U.S. institutions may not be able to communicate with their foreign business partners and customers. They may also be unable to conduct financial transactions with such institutions or within affected countries. To gather information on the extent of U.S. financial institutions’ international activities, we obtained data on U.S. banks’ lending exposures from U.S. banking regulators. For information on U.S. entities’ investments in foreign securities, we obtained information from the U.S. 
Bureau of Economic Analysis and from Morningstar, Inc., a private investment research firm that maintains a proprietary database of U.S. mutual funds’ investment portfolios. We obtained data on foreign exchange and over-the-counter derivatives markets from the Bank for International Settlements. To gather information on how U.S. financial institutions were addressing international Year 2000 risks, we interviewed officials and reviewed available Year 2000 documentation from the headquarters offices of seven major banks and securities firms that were among the most internationally active financial institutions. We interviewed additional representatives of some of these firms in countries outside the United States. We also interviewed U.S. banking and securities regulators to obtain information on how U.S. financial institutions were assessing their international Year 2000 risks. To gather information on how U.S. regulators were addressing international Year 2000 risks, we reviewed guidance and other issuances and discussed international Year 2000 risks with representatives of the Federal Reserve, the Office of the Comptroller of the Currency (OCC), and the Securities and Exchange Commission (SEC). We also reviewed these organizations’ internal summaries analyzing U.S. and foreign financial institutions’ progress in addressing Year 2000, as well as securities firms’ regulatory reports discussing their Year 2000 efforts. To determine how foreign financial institutions and regulators were addressing Year 2000 risks, we interviewed representatives of seven large financial institutions, three market support organizations, and regulatory agencies in France, Germany, Japan, Korea, and the United Kingdom. We also reviewed reports and statements by other organizations that have assessed the Year 2000 readiness of foreign financial institutions. The foreign countries whose Year 2000 efforts we evaluated included 4 of the top 10 countries in which U.S. 
organizations had significant lending and mutual fund investment exposures. In addition, we interviewed officials from and reviewed Year 2000-related documents provided by international market support organizations that are responsible for clearing and settling transactions, transmitting payment instructions, and other financial messages. We also interviewed officials from and reviewed documents of key international organizations that were established to address Year 2000 problems in global financial markets, including the Global 2000 Coordinating Group and the Joint Year 2000 Council. To identify issues that may require further attention, we interviewed officials at the organizations we contacted. We also reviewed reports and other documents issued by U.S., foreign, and international organizations. Information on foreign institutions and regulation in this report is based on interviews and secondary sources, not our independent technical or legal analysis. We did our work from July 1998 to March 1999 in accordance with generally accepted government auditing standards. We obtained oral comments on a draft of this report from staff of the Board of Governors of the Federal Reserve System and SEC and written comments from the OCC (see app. I). We discuss their comments at the end of this letter. Large, internationally active U.S. financial institutions, which account for most of the financial exposures and relationships with foreign financial institutions and markets, may be at risk if those foreign organizations are not ready for the Year 2000 date change. However, officials from the U.S. institutions we visited told us that they have mostly completed the changes required to ready their own computer systems to process Year 2000 dates. Because their institutions have financial relationships around the world, these officials also said they were assessing the Year 2000 readiness of their customers, business partners, and financial counterparties. 
Further, they said that they are preparing plans to mitigate the risks their international activities pose to their operations, but that they generally did not anticipate that Year 2000 problems in other countries would have a significant long-term impact on their business operations. Although many financial institutions may be active internationally, fewer than 25 large institutions account for most of the total foreign financial exposures of U.S. banks and securities firms. The Year 2000 readiness of foreign organizations’ computer systems is important to these large U.S. financial institutions because they have substantial international financial exposures. U.S. financial institutions are active in various global financial activities. Large U.S. banks and securities firms are active participants in the foreign exchange markets, in which an estimated $1.5 trillion in transactions occurred daily as of April 1998. Determining the portion of U.S. financial institutions’ foreign exchange activities that are conducted with foreign entities is difficult, but about 18 percent of the daily volume of foreign exchange trading is reported to occur in the United States. The comparable share of global daily trading volume in London, which is the most active foreign exchange trading center, was about 32 percent. U.S. firms are active in London and other world foreign exchange centers as well. Over-the-counter derivatives dealing is another global financial market activity in which U.S. financial institutions actively participate. About $70 trillion in notional principal was outstanding in June 1998, with U.S. banking institutions having an estimated $28.2 trillion outstanding as of that time. U.S. banks are also active globally as lenders of funds and, as of June 30, 1998, had a total foreign lending exposure of about $487 billion. Six U.S. banks accounted for over 75 percent of this total exposure. As shown in table 1, U.S. 
banks’ largest lending exposures were concentrated in the major European markets and Japan. Although most of their lending was in developed countries, large U.S. banks also had exposures to organizations in various emerging market countries. As shown in table 1, exposures to organizations in Brazil ranked among the top 10 for U.S. banks as of June 30, 1998. U.S. banks also had lent over $16 billion to Mexico and had exposures to other emerging market countries in Latin America and the Caribbean totaling about $31 billion. U.S. banks’ lending exposures to Asian countries, excluding Japan, were about $41 billion, and their exposures to Eastern Europe were about $11 billion. U.S. entities are also active in international securities investing. As of year-end 1997, U.S. entities held foreign financial stocks or bonds worth about $1.45 trillion, according to data compiled by the U.S. Bureau of Economic Analysis. However, SEC officials told us that only about 12 securities firms engage in substantial international activities. Other entities active in foreign markets include U.S. mutual fund organizations. According to information compiled by Morningstar, Inc., U.S. mutual funds’ foreign equity investments were also concentrated in the major European markets and Japan (see table 2). Equity investments by U.S. mutual funds outside of the largest countries generally represented a much smaller portion of the mutual funds’ total foreign activity. According to the data compiled by Morningstar, Inc., investments in markets outside of the largest countries accounted for only 25 percent of the mutual funds’ total foreign investments. U.S. financial regulators have determined that large U.S. financial institutions are assessing the Year 2000 readiness of their key financial relationships. Bank regulators required all U.S. banks to complete assessments of their internal exposures, including international exposures, by September 30, 1998. 
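The failure mode these assessments were meant to surface is the two-digit date problem described at the start of this report. The short sketch below is purely illustrative; the function names and the loan scenario are hypothetical and are not drawn from any system reviewed in this report:

```python
def expand_two_digit_year(yy):
    """Interpret a two-digit year field the way many legacy systems did:
    by assuming a fixed '19' century."""
    return 1900 + yy

def loan_age_in_years(origination_yy, current_yy):
    """Compute elapsed years from two-digit year fields, as a legacy
    batch job might have done."""
    return expand_two_digit_year(current_yy) - expand_two_digit_year(origination_yy)

# A loan originated in 1998 and evaluated in 1999 yields the correct age.
print(loan_age_in_years(98, 99))   # 1

# The same loan evaluated in 2000, where the stored year field holds "00":
# the legacy logic reads the current year as 1900 and computes a negative
# age of -98 years, the kind of processing error the report describes.
print(loan_age_in_years(98, 0))    # -98
```

Remediation efforts of the era generally either expanded stored years to four digits or applied a "windowing" rule that mapped two-digit years below a chosen pivot to the 2000s.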
Bank regulators told us that their most recent examinations showed that large U.S. banks had adequately completed these assessments. SEC officials told us that large U.S. securities firms have reported that they are also gathering information about the Year 2000 readiness of organizations in which they have key financial exposures, including international exposures. Representatives of the large U.S. banks and securities firms we contacted described (1) the progress they had made in readying their own systems to process Year 2000 dates and (2) the actions they had taken to assess their international exposures. Officials at each of the seven institutions we contacted told us that the work and testing needed to make nearly all of their systems ready for 2000 were completed by the end of 1998. The securities affiliates of six of these institutions participated in a securities industry test in July 1998, which required them to have part of their systems Year 2000 compliant by that date. Officials at these institutions said their overall Year 2000 programs also incorporated assessments of the readiness of key domestic and international customers, business partners, and counterparties in financial transactions. The officials said they had detailed programs to both prioritize and assess the readiness of their key suppliers, electronic linkages, business partners, and customers. In most cases, they described using questionnaires to make these assessments. The officials also said they were sending teams to make on-site visits and to personally review the Year 2000 programs of the entities that were their highest priority exposures. For example, representatives of one of the financial institutions we contacted told us that they had determined that, of the over 15,000 relationships the firm had worldwide, 4,000 were deemed to be critical to its operations. 
They said they prioritized these relationships using various factors, including the extent of business the firm did with each organization. The firm identified about 450 domestic and foreign organizations that it considered most critical to its operations, which included about 200 providers of information technology and infrastructure services and about 250 financial institutions. To assess the readiness of these organizations, officials at this firm told us that they sent surveys to the organizations and were conducting on-site visits during which they were attempting to review these organizations’ Year 2000 project plans. The officials said they also were holding discussions with the organizations’ staff on at least 10 areas of concern, such as management involvement and testing procedures. Officials of the U.S. financial institutions we contacted said that contingency planning would be the primary focus of their Year 2000 efforts in 1999, because they have generally completed system remediation and are continuing with testing efforts. Among the efforts that officials at these institutions described were those designed to minimize disruptions to their business operations arising from problems encountered by their own institutions, their business partners, or infrastructure providers. Such steps included having alternate power or telecommunication sources or arranging to conduct financial transactions with more than one institution in other countries in the event that their normal partner experienced Year 2000 problems. Representatives of the U.S. financial institutions we contacted said they planned to use the results of their assessments of outside organizations in making business decisions regarding whether they should maintain the same levels of exposure or activity with these organizations. 
Most of these financial institutions expected their main foreign counterparts to be ready and did not anticipate having to alter large numbers of business relationships for Year 2000 readiness reasons. We talked to four representatives of financial institutions about their firms’ foreign financial relationships. The representatives said that they chose to do business in different foreign markets because they believed these relationships and investments were sound. The representatives also said that they intended to remain committed to these investments unless the business fundamentals in these countries changed. Because of recent events in world markets, the Year 2000 readiness of organizations in some countries, including those considered to have made less progress in their preparations than U.S. financial firms, is less likely to have a serious impact on U.S. firms. For example, officials in Russia have already acknowledged that they lack the resources to adequately address Year 2000 problems. However, according to officials of most of the U.S. financial institutions we contacted, exposures in markets such as Russia, Eastern Europe, and Southeast Asia have already been substantially reduced due to the market turmoil that has occurred in those regions since 1997. According to data from Morningstar, Inc., U.S. mutual fund investments in Russia totaled just $181 million in January 1999, or less than 1 percent of all such funds’ foreign investments. Banks were more exposed to Russian organizations, with over $6 billion in loans outstanding, but this was just 1.3 percent of U.S. banks’ total foreign lending exposure in 1998. Officials from most of the U.S. financial institutions we contacted said that, in their view, the impact of Year 2000 disruptions in other countries’ markets was not expected to be severe. 
For example, they said that their firms were advising clients with investments in countries that were more likely to experience Year 2000-related disruptions to, at worst, be prepared for periods of difficulty with payments and settlements in these regions ranging from a few hours to a few weeks. Because these officials believe the duration of any Year 2000-related problems will likely be short, they are advising their clients not to alter their investments solely on the basis of Year 2000-related concerns. Instead, they suggested balancing the impact of these potential, temporary disruptions against the merits of the investments they have made in these regions. One of the large U.S. securities firms we contacted issued a research report on Year 2000 readiness in January 1999. The report stated that, although significant failures by foreign organizations to make payments or to deliver securities could conceivably disrupt segments of the U.S. financial system, the U.S., European, and other monetary authorities have the capability to cover any resulting liquidity shortfalls. Also, a research analysis done by another of the securities firms we contacted examined the Year 2000 readiness of various industrial sectors in the United States and at least 19 other countries. This report noted that Year 2000 problems would probably not cause a major disaster, but that the problems had more potential for disruption in emerging markets. Nevertheless, the report stated that companies will likely cope with Year 2000 problems in the same ways they do during power outages or other disruptions of telecommunications or computer services. U.S. bank and securities regulators are assessing the international risks faced by the entities they oversee. These regulators have approached oversight of Year 2000 issues on the basis of their overall regulatory mandate. 
The approach taken by bank regulators focuses on their regulatory mandate to protect the safety and soundness of the banking system. Although it lacks direct authority to regulate the foreign affiliates of U.S. broker-dealers, SEC has also assessed the international risks that Year 2000 problems may pose to the securities market participants it regulates. U.S. bank regulators view the Year 2000 problem, including the risks posed by banks’ international activities, as potentially threatening the safety and soundness of individual institutions. The regulators have jointly issued at least three sets of guidance that address the risks posed to banks from potential Year 2000 problems experienced by their customers, suppliers, and other business partners, including those in other countries. In a March 1998 statement, bank regulators required banks to assess the Year 2000 readiness of both their domestic and international customers, including organizations that borrow from, provide funds to, or conduct capital market transactions with their institutions. The regulators required banks to have these assessments substantially completed by September 30, 1998. Bank regulators reported that, since 1997, they have conducted at least one examination of all U.S.-chartered banks as well as the U.S. branches of foreign banks. On the basis of these reviews, bank regulators rated the Year 2000 progress of about 96 percent of the institutions examined as satisfactory. The regulators started a second round of examinations in September 1998. This round is to focus primarily on testing, contingency planning, and efforts to assess the readiness of external parties. According to representatives of the Federal Reserve and the OCC, which oversee banks with international operations, large banks appear to have conducted their customer assessments adequately, but some smaller institutions had failed to conduct assessments of all the organizations whose Year 2000 readiness could affect their banks. 
SEC is charged with protecting U.S. investors and ensuring fair and orderly markets, but it does not have the authority to regulate the international activities of U.S. securities firms that are conducted outside of the regulated broker-dealer. However, SEC has assessed the readiness of U.S. securities markets and the organizations that participate in them, including any relevant international activities of these organizations. SEC has conducted examinations of securities markets, broker-dealers, investment companies, investment advisers, and other organizations it regulates as part of its Year 2000 oversight effort. In addition, since 1990, SEC has had the authority to assess the degree of risk that the securities firms’ unregulated activities pose to the regulated entity. Under this authority, SEC requires securities firms’ regulated broker-dealer affiliates to report on the exposures of their foreign affiliates and holding companies. Finally, SEC requires the entities it regulates to provide supplemental reporting on their Year 2000 efforts. SEC officials have concluded that the Year 2000 risks posed by U.S. securities firms’ international activities are not significant. They said that the foreign exposures of U.S. securities firms are relatively modest compared to their U.S. operations. Moreover, the U.S. securities firms’ most significant international exposures were largely concentrated in their U.K. affiliates. These firms conduct considerable over-the-counter derivatives activities in the United Kingdom, although derivatives exposures may arise from business conducted with entities in other countries. SEC officials told us that U.K. regulators were active in overseeing these firms’ operations. Since the end of 1998, SEC has required the entities it regulates to file reports discussing their efforts to address Year 2000 problems. 
In these reports, organizations are to provide additional information on their Year 2000 readiness efforts beyond that required in other statements filed with SEC. Organizations required to submit these reports include the regulated broker-dealer affiliates of all but the smallest securities firms, as well as investment advisers that have over $25 million under management or that advise an investment company registered under the Investment Company Act of 1940. The primary purpose of these reports is to provide SEC with more specific information on these entities’ actions to address their Year 2000 problems. In the reports, the firms are required to describe their plans, including the resources they have committed, their progress against the various milestones in the process, and their contingency planning efforts. For broker-dealers, the first reports were required to be filed no later than August 1998 and are to be filed again no later than April 1999. In addition, the regulated entities are also to submit a separate report completed by the firms’ external auditors that serves as an independent verification of the accuracy of the firms’ April 1999 submission. These reports are to be made publicly available, and SEC has posted the August 1998 submissions on its Web site. These reports also provide SEC with some information on the extent to which the entities it regulates are addressing the risks posed by the Year 2000 readiness of external organizations, including those in other countries. One section of the report seeks information on the activities firms have undertaken to assess the readiness of any third parties that provide mission-critical systems, including clearing firms, vendors, service providers, counterparties, and others. 
In their submissions, the broker-dealer affiliates are to identify the number of entities they rely upon, whether they have contacted these entities regarding their Year 2000 readiness, and whether their contingency plans address the potential failure by third parties to be ready. Although these reports were technically required only from the securities firms’ broker-dealer affiliates subject to SEC regulation, SEC officials told us that the firms submitting these reports have generally provided information that addresses the global operations of their firms outside of the regulated U.S. entity, when relevant. We reviewed the August 1998 submissions by 11 of the largest U.S. securities firms. All of these broker-dealer affiliates’ reports indicated that they covered the firms’ global operations and activities with foreign clients. The officials said that they would use the information gathered in these reports to identify entities that they may select for an on-site examination concerning their Year 2000 preparations. SEC officials told us that, as of March 1999, no organizations had been selected specifically for their international operations. However, they said that they have begun developing plans for examining the international operations of selected firms, particularly for those firms active in over-the-counter derivatives. Like their counterparts in the United States, officials of the foreign financial institutions we visited said they were also working to ready their systems for the date change in 2000, although their progress generally lagged that of U.S. institutions. Financial institutions in the countries we visited were also attempting to assess the Year 2000 readiness of their customers and the organizations with which they do business. 
Financial regulators in these countries told us that they were also assessing the Year 2000 efforts of the entities they oversee, although their activities varied and appeared less extensive than efforts made by U.S. regulators. Other key participants in international financial markets include organizations that provide market support services, such as financial message transmission and clearing and settlement activities. The market support organizations we contacted were also readying their systems for 2000. Finally, two international organizations, one focusing on regulatory issues and the other on issues of concern to financial institutions, provided guidance and other assistance to financial regulators and institutions attempting to address the Year 2000 problem. According to external assessments by consulting groups, regulators, and others, foreign financial institutions have generally made less progress in addressing the Year 2000 problem than have institutions in the United States. Officials of financial institutions in the countries we visited said they were readying their systems for the date change in 2000, but not all of these institutions expected to complete their work in the same time frame expected of U.S. institutions. One reason these institutions’ time frames differ is that many institutions, particularly those in France and Germany, had placed a higher priority on modifying their systems for the introduction of the new European common currency, called the euro, in January 1999. To prepare for the date change in 2000, U.S. regulators expected banks and securities firms to have completed both internal systems modifications and testing by December 1998. We obtained Year 2000 readiness information from seven large foreign financial institutions. Officials from two Swiss institutions and one U.K. bank said that they had mostly completed their internal systems modifications and testing by December 1998. 
Officials from a German institution we visited said that they had also completed their internal systems modifications by December 1998, but that they had completed the testing of only about 40 percent of their systems by February 1999. Officials from one of the French institutions we contacted expected to complete systems modification and testing by the first quarter of 1999, and officials of the remaining two institutions—including one French and one U.K. bank—expected to be finished by the second quarter of 1999. Although their work on the euro had delayed their Year 2000 efforts, officials representing financial institutions in France and Germany indicated that their euro efforts would help them complete their Year 2000 work on time. For example, these officials said they already had detailed inventories of their information systems and applications, existing test facilities, and experienced project management teams and personnel that could be used to complete their Year 2000 programs. However, some officials of financial institutions and regulators in the United States and United Kingdom expressed concerns over whether institutions in countries that had focused on the euro conversion would be ready for the date change. These officials said they expected the larger financial firms would have sufficient resources to address the euro conversion and Year 2000 efforts simultaneously. However, they said they were concerned that small to medium-sized firms would not be able to adequately complete both projects in such short time frames. Like financial institutions in the United States, officials in the foreign financial institutions we contacted said they also were assessing the Year 2000 readiness of their vendors, electronic linkages, business partners, and customers. 
These officials said they were also generally using surveys to assess external entities’ readiness and were conducting selected on-site visits to review such organizations’ Year 2000 programs in more detail. The financial regulators in the foreign countries we contacted said that they had taken steps to assess the Year 2000 efforts of the entities they oversee. However, their oversight approaches varied across countries and, in some cases, appeared less extensive than those of U.S. financial regulators. The guidance issued by foreign regulators, and their requirements for institutions to disclose Year 2000 readiness, appeared less extensive than those mandated for U.S. financial institutions. In the United States, bank regulators issued at least 11 statements to provide guidance for banks on a variety of topics to help them prepare for the Year 2000 date change. In contrast, foreign regulators in the countries we visited had issued guidance to the institutions they oversee only once or twice, and this guidance covered a narrower range of topics than the U.S. regulators’ guidance. Regulators and financial institution officials in several of the countries we visited indicated that the U.S. regulators’ guidance had proved helpful. Requirements for publicly disclosing Year 2000 efforts also differed among these countries. In March 1998, the U.K. accounting standards body required firms to discuss the following in public financial statements: Year 2000 risks, efforts to address those risks, and the costs of those efforts. In France, the securities regulatory body mandated that firms issuing publicly available securities make similar disclosures. Germany required no such disclosures, but an official with a German bank indicated that financial institutions were making disclosures for competitive reasons. 
Financial regulators in Japan had issued a checklist addressing Year 2000 issues to financial institutions in their country, and regulatory examiners reviewing firms’ activities also were using this document. Regulators in Korea had issued guidance on Year 2000 issues, but examiners in that country had also reviewed guidance issued by and received training from U.S. banking regulators. Foreign regulators had also placed less emphasis than U.S. regulators on having the financial institutions they oversee assess the Year 2000 readiness of their key financial relationships. U.S. bank regulators required the institutions they oversee to complete assessments of the readiness of their key financial relationships by September 30, 1998. U.S. securities regulators required U.S. securities firms to report on their own readiness and the extent to which their contingency planning assessed the readiness of external organizations. In general, regulators in France, Germany, Japan, Korea, and the United Kingdom had not specifically directed their financial institutions to make assessments of the Year 2000 readiness of their customers or other entities with which they have critical financial relationships. However, regulators in France and Japan had included questions relating to third-party or customer assessments in the surveys they had administered to financial institutions. In Germany, banks had worked through industry associations to develop a standardized questionnaire for use in obtaining information on customer readiness. Korean regulators had recommended that banks take the Year 2000 readiness of their customers into account when making credit decisions. Approaches to examinations also varied in these countries. In Germany and the United Kingdom, external auditors rather than regulatory bodies usually conduct examinations of banks. 
Financial regulators in both of these countries said they have tasked the external auditors to address Year 2000 issues as part of the reviews they conduct of banks. Financial regulators in the United Kingdom also said they were making their own limited visits to banks to discuss Year 2000 issues and have incorporated Year 2000-related questions into their regular examinations of securities firms. In France, financial regulators said they were conducting specific Year 2000 examinations of all banks and securities firms. In Japan and Korea, multiple regulatory bodies said they were involved in conducting periodic and ad hoc reviews of financial institutions’ Year 2000 efforts. Conducting international financial transactions frequently requires the involvement of market support organizations, which perform clearance and settlement or other necessary services. For example, many financial institutions use the services of the Society for Worldwide Interbank Financial Telecommunication (SWIFT) in Belgium, which provides a proprietary network for transmitting messages pertaining to financial transactions. SWIFT transmits information among as many as 7,000 institutions in 160 different countries and processes messages relating to an average of $3 trillion per day. Two other key support organizations based in Europe are Euroclear in Belgium and Cedel Bank in Luxembourg. These organizations perform clearance and settlement services on behalf of many internationally active financial institutions, with Euroclear having about 2,200 participants in 70 countries and Cedel Bank providing services to customers in 80 countries. Because of the role they play in international finance, these organizations’ ability to successfully ready their systems for the date change in 2000 is important. Officials from SWIFT, Euroclear, and Cedel Bank told us that they had completed most of their internal systems modifications and testing by December 1998. 
These officials said that they have testing programs under way with the financial institutions they service. These organizations are also scheduled to participate in a June 1999 global test of worldwide central banks and payment systems that is being sponsored by the New York Clearing House Association. This association operates an electronic payments system for international dollar payments. Two organizations have taken the lead in addressing Year 2000 issues from the perspective of how these issues affect financial institutions and markets internationally. The Global 2000 Coordinating Group is a private sector organization comprising many of the world’s largest banks and securities firms and is leading efforts to ensure the readiness of the global financial system. This group was formed in April 1998 with the mission of identifying and providing resources to areas where coordinated initiatives would assist the financial community in improving its readiness for the date change in 2000. As of January 1999, the group had participants from 244 institutions in 53 countries. To assist financial institutions in addressing the Year 2000 problem, the Global 2000 Coordinating Group has encouraged its members to self-disclose the status of their readiness efforts and has developed a template to standardize the presentation of this information. The group has also developed country assessments that gauge the level of readiness of the financial sector and of infrastructure providers such as telecommunications, power, water, and government. Although not released publicly, these country assessments are being shared with selected public and private sector officials in their respective countries. The group has also published guidance on contingency planning and is attempting to compile a comprehensive list of testing activities being conducted globally. The Joint Year 2000 Council is the other organization actively addressing international Year 2000 issues.
The council was formed in April 1998 and comprises senior representatives of the Basle Committee on Banking Supervision, the Committee on Payment and Settlement Systems, the International Association of Insurance Supervisors, and the International Organization of Securities Commissions. Currently, a member of the U.S. Federal Reserve Board of Governors chairs the council. Other staff from the U.S. banking regulators and SEC also participate in several of the activities of the Joint Year 2000 Council and have been active in other international forums on a variety of Year 2000 concerns. The council’s main goals have been to (1) share information on Year 2000 oversight strategies and approaches and contingency planning and (2) serve as a focal point for other national and international Year 2000 remediation initiatives. The council has issued various sets of guidance for financial regulators and financial institutions, including papers on testing and procedures for assessing financial institutions. In February 1999, the council released a series of papers on contingency planning. Although the financial sectors in the United States and the other countries we contacted were actively addressing the Year 2000 problem, other issues will require continued attention from financial regulators, financial institutions, and other organizations as 2000 approaches. The readiness of infrastructure providers, including telecommunications, power, water, and other services, was a concern to the financial institution officials with whom we spoke. Other areas that financial regulators and financial institutions said they are addressing that will require continued efforts include (1) developing mechanisms for coordination between regulators and other organizations during the date change period, (2) promoting additional Year 2000 readiness disclosure by foreign organizations, and (3) developing strategies for communicating the readiness status of the financial sector to the public.
Lastly, financial institutions’ experiences during the recent introduction of the new currency in Europe and the results of various Year 2000 tests slated to occur around the world in 1999 will likely provide lessons learned and indications of the prospects for a successful Year 2000 transition. The Year 2000 readiness status of infrastructure providers in foreign markets, such as telecommunications firms and power providers, is a major concern to the U.S. and foreign financial institutions operating internationally that we contacted. Regardless of their own readiness and contingency planning, financial markets and the firms participating in them will not likely be able to continue operating after the date change unless the various infrastructure providers are also ready. Regulatory and financial institution officials in the countries we visited expressed several concerns about the readiness of infrastructure providers in their own and other countries. First, officials indicated that they did not have adequate information about the readiness status of many infrastructure providers in other countries. This lack of information contributed to uncertainty about the readiness status of these providers. Many of the officials we contacted said that the providers should make more information publicly available to reduce this uncertainty. The Joint Year 2000 Council has attempted to address the infrastructure readiness issue, noting in its October 1998 bulletin that the scarcity of information available from operators of key infrastructure components has inhibited prudent planning by users of these services. The council also noted that in many areas, no existing public sector body has been directed to require action by providers or the oversight structure is too disjointed to allow one authority to lead such an effort.
Another concern of financial market officials regarding infrastructure is that Year 2000 problems in one country’s infrastructure could create problems for other countries because of cross-border linkages. In a September 1998 report, the Organization for Economic Cooperation and Development noted that, along with international financial transactions, sectors such as transport, telecommunications, power provision, and other activities depend on cross-border interconnections that could be vulnerable to Year 2000 breakdowns. In addition, regulatory and financial institution officials told us that they were concerned that inadequate attention was being paid to cross-border dependencies among infrastructure providers. For instance, officials indicated that some German power providers rely on natural gas from Russia, a country that has already acknowledged lacking resources to address Year 2000 issues. Officials expressed concerns that no organization appears to have taken the lead to address this and other such cross-border dependencies. An official with the Global 2000 Coordinating Group noted another example of these dependencies, explaining that Swiss power companies supply power to other countries during peak periods. However, over 1,000 power suppliers exist in Switzerland and determining what actions these organizations have taken to ready their systems for the date change has been difficult. Because power supplies are shared among neighboring countries, in December 1998, the European Commission urged that relevant authorities in each country closely monitor progress in this sector, exchange information with their counterparts, and publicly disclose such information. The commission also urged that such information be shared about air, rail, maritime, and road transport sectors because it also found that little cross-border coordination and information exchange has occurred in these areas. 
Various organizations are addressing infrastructure issues within individual countries and internationally. As we previously recommended, the banking regulators have been meeting to develop contingency plans addressing domestic and international infrastructure issues. According to an OCC official, a Federal Financial Institutions Examination Council (FFIEC) working group on contingency planning meets monthly and has various subgroups addressing specific issues, including international payment systems and the readiness of major institutions in key markets. This official also told us that U.S. banking regulators obtain information from their counterparts in other countries on the readiness of infrastructure providers in those countries and emphasize to these counterparts the importance of having more information publicly disclosed on the status of key infrastructure sectors. In addition to the financial regulators’ efforts, the President’s Council on Year 2000 Conversion has working groups addressing issues relating to various infrastructure sectors in the United States and has also taken actions related to cross-border concerns. For example, national Year 2000 coordinators from the United States and as many as 120 countries discussed infrastructure issues at a meeting held at the United Nations in December 1998. Officials of the President’s Council have also met with their counterparts in other countries to discuss cross-border infrastructure issues. Internationally, individual infrastructure sectors also have taken steps to address Year 2000 issues. For example, the International Telecommunications Union has attempted to gather readiness information and coordinate cross-border testing of telecommunications services. Other important issues, including coordination, disclosure, and communication issues, require attention by U.S. regulators, financial institution officials, and others as the date change in 2000 approaches.
For example, regulators and financial institution officials with whom we spoke said that regulators will have to be involved in creating a mechanism for managing information collection shortly before and immediately after the date change. The Joint Year 2000 Council stated, in February 1999, that the accurate exchange of information in late 1999 and the beginning of 2000 between the public, private, and financial market sectors both domestically and across borders would be vital to a smooth transition. As a result, the council encouraged financial market authorities to establish communication channels that could be used before, during, and after the date change. The council also noted that authorities could consider establishing a centralized location for collecting and exchanging information among financial regulators, other authorities, and financial market participants. Such information centers may be useful in coordinating contingency plans in the event of disruptions and failures and may reduce the information demands on financial institutions, which would allow them to concentrate their resources on fixing problems that may occur. Some of the financial institution officials we contacted also called for the establishment of such a coordination mechanism. These officials said that their main concern was the need for a centralized mechanism for gathering and disseminating accurate information about events during the date change to avoid panic. They suggested that government regulators act in this capacity because they generally have access to government counterparts in other countries and could provide more complete and accurate information to the markets. For instance, a representative of one financial institution told us it would be good to have some organization in place that could be contacted in the event institutions were having difficulty with a telephone system in one of the foreign markets in which they conduct business. 
A Federal Reserve official noted that the U.S. banking regulators were considering how to facilitate this type of communication and were discussing it as part of the contingency planning efforts with other regulators and private-sector financial officials. An OCC official explained that each of the banking regulators was developing its own plans for having coordination mechanisms in place, and that eventually they plan to integrate these efforts through the FFIEC contingency planning working group. As part of this, the banking regulators were also developing a list of contacts among the staff of financial regulators in other countries from whom information about the status of the date change outside of the United States could be readily obtained. Another issue that should concern U.S. and foreign regulators before 2000 arrives involves the need for additional disclosure by organizations in other countries of their Year 2000 readiness status. Many of the financial institution officials we contacted indicated that more information about readiness status should be publicly disclosed by entities in other countries. Because other countries have an economic interest in the readiness of its member nations, the European Commission noted in a December 1998 report that member state governments should accelerate or establish mechanisms for coordinating and monitoring Year 2000 readiness. The commission also noted that it is better to have information that problems exist and are being addressed than to have uncertainty created by a total lack of information. On January 29, 1999, the Global 2000 Coordinating Group called for (1) financial firms to redouble their efforts to disclose their Year 2000 readiness and (2) governments to provide more detailed public disclosure of the readiness of sectors that are critical to the safe functioning of the financial industry.
Banking regulatory officials told us that, whenever possible, they discuss with foreign regulatory organizations the need for more public disclosure on the Year 2000 readiness status of financial and other organizations. An OCC official said that the December 1998 issuance by the Joint Year 2000 Council on information sharing, which encourages foreign regulatory and other organizations to disclose more information, was an example of how U.S. regulators were supporting this issue. A final issue for regulators before the Year 2000 date change is the need to communicate accurate information about the readiness status of the financial sector to the public. In the December 1998 paper on information sharing, the Joint Year 2000 Council stated that financial market authorities should set an example by implementing a comprehensive information-sharing program that includes communicating with the general public. The council noted that promoting and preserving public confidence requires strategic coordination. It stated that financial market authorities should develop communication plans to reinforce, as appropriate, the public’s confidence in the financial system. In an additional issuance in February 1999, the council stated that, by providing periodic status reports on the readiness of the financial sector, authorities could shape the perceptions and expectations of the public and minimize irrational and potentially destabilizing behavior. U.S. and foreign financial institution and regulatory officials told us that regulators and governments would have to inform the public about the financial sectors’ Year 2000 status. The U.S. regulatory officials we contacted acknowledged that this need existed and that they intended to address this issue in the future. Various upcoming events may indicate the potential success of Year 2000 remediation efforts.
Many industry and regulatory officials told us that the degree of success experienced during the euro conversion would provide some indication of how well financial institutions may be able to manage the Year 2000 date change. Like the Year 2000 problem, the euro conversion required financial institutions to identify, modify, and test internal computer systems to ensure accurate processing. According to U.S. banking officials, the euro conversion in early January 1999 was mostly successful. However, they said some institutions did experience problems in processing transactions. In addition, adequate information about firms’ operating status during the conversion was not always available. In particular, these officials said that organizations that experienced processing difficulties were reluctant to report any problems they had during the conversion. Financial institutions involved in the conversion had contracted with a private-sector information provider to arrange for dedicated display space on its proprietary network as a means for reporting and sharing information about the status of the financial institutions’ operations. However, when problems began occurring, the institutions involved did not report them using this network. They noted that the failure to disseminate information about problems being experienced by some institutions could have created more serious problems. However, many firms continued transmitting their portions of payments due despite a lack of information about the operating status of other institutions or assurance that they would receive the corresponding payments from other institutions. A U.S. banking official said that, although the institutions involved handled the problems that did arise during the euro conversion, the problems could have been more serious if the U.S. institutions had not continued to make their payments when problems arose. A U.S. 
banking regulatory official said that the regulators and financial institutions should take action to decrease the likelihood that similar information-sharing problems will occur during the Year 2000 date change. The results of testing by markets and institutions around the world in early to mid-1999 should provide another indicator of the international financial markets’ Year 2000 readiness. Many major markets are to conduct tests of their Year 2000 readiness in 1999 and several tests involving multiple firms are also to be conducted, including the U.S. securities industrywide test that began in March 1999 and a global payments system test planned for June 1999. If successful, these tests should provide more assurances that financial markets and financial institutions will be ready for the actual date change in 2000. Although industrywide testing can demonstrate the coordinated and smooth functioning of U.S. financial markets, some officials cautioned that the results of tests such as these should not be considered definitive proof that participating financial institutions are completely ready for 2000. Large financial institutions may participate in a wide variety of financial activities, and all of their systems would not necessarily be tested in any one industrywide test. For example, officials of two organizations, which conduct a wide range of financial activities in the United States and other countries, said that less than 10 percent of their total systems were used during the U.S. securities industry test in July 1998. We obtained oral comments on a draft of this report from staff of the Board of Governors of the Federal Reserve System and SEC and written comments from OCC (see app. I). Each of the agencies that commented on this report said that it was an accurate summary of the activities that their organizations and the financial institutions they oversee are undertaking to address international Year 2000 risks. 
They also agreed that the issues we highlighted as requiring their continued attention were important. OCC’s written comments are reprinted in appendix I. SEC staff also suggested some technical changes that we incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from its issue date. At that time, we will send copies of this report to Representative Thomas Bliley, Chairman, House Committee on Commerce; Senator Robert F. Bennett, Chairman, Senate Special Committee on the Year 2000 Technology Problem; The Honorable Arthur Levitt, Chairman, Securities and Exchange Commission; The Honorable Alan Greenspan, Chairman, Board of Governors of the Federal Reserve System; and The Honorable John D. Hawke, Jr., Comptroller of the Currency, Office of the Comptroller of the Currency. We will also make copies available to others upon request. Please contact me at (202) 512-8678 if you or your staff have any questions. Major contributors to this report are listed in appendix II: Michael Burnett, Assistant Director; Cody Goebel, Assistant Director; and Jean Paul Reveyoso, Senior Evaluator.
Pursuant to a congressional request, GAO reviewed the international risks that the year 2000 computer problem poses to U.S. financial institutions, focusing on the extent to which: (1) large, internationally active U.S. financial institutions were addressing international year 2000 risks; (2) U.S. banking and securities regulators were overseeing these risks for the institutions they regulate; (3) large foreign financial institutions and their regulators were addressing year 2000 risks; and (4) other issues may require attention before 2000 arrives. GAO noted that: (1) large U.S. financial institutions have financial exposures and relationships with international financial institutions and markets that may be at risk if these international organizations are not ready for the date change occurring on January 1, 2000; (2) however, the 7 large U.S. banks and securities firms GAO visited were taking actions to address these risks; (3) they had identified the organizations with which they had critical foreign business relationships, had assessed the year 2000 readiness status of these organizations, and were developing plans to mitigate the risks that would be posed by the lack of year 2000 readiness of one or more of these organizations; (4) they told GAO that they did not expect potential year 2000 disruptions to have much long-term effect on their global operations; (5) U.S. banking and securities regulators were also addressing the international year 2000 risks of the institutions they oversee; (6) banking regulators had issued guidance for banks on addressing international year 2000 risks and were assessing bank preparations for these risks during bank examinations; (7) securities regulators, although not directly responsible for securities firms' foreign activities, were assessing these firms' efforts to address international and other external year 2000 risks using information obtained through the regulated U.S.
broker-dealer affiliate; (8) foreign financial institutions reportedly have lagged behind their U.S. counterparts in preparing for the year 2000 date change; (9) one of the major reasons cited for the lag was that these firms also had to make systems modifications to prepare for the introduction of a new European currency in January 1999; (10) officials from 4 of the 7 large foreign financial institutions GAO visited said they had scheduled completion of their preparations for year 2000 about 3 to 6 months after their U.S. counterparts, but they planned to complete their efforts by mid-1999 at the latest; (11) two international organizations created to assist international year 2000 efforts, the Global 2000 Coordinating Group and the Joint Year 2000 Council, were also playing a major role in assessing readiness and helping global financial market institutions and regulators address year 2000 issues; (12) promoting additional year 2000 readiness disclosure by foreign organizations was an issue that regulators had taken steps to address; and (13) regulators acknowledged the need to continue developing strategies for communicating the readiness status of the financial sector to alleviate concerns among members of the public.
PART’s standard series of questions is designed to determine the strengths and weaknesses of federal programs by drawing on available program performance and evaluation information. OMB applies PART’s 25 questions to all programs under four broad topics: (1) program purpose and design, (2) strategic planning, (3) program management, and (4) program results (that is, whether a program is meeting its long-term and annual goals). During the fiscal year 2004, 2005, and 2006 budget cycles, OMB applied PART to approximately 20 percent of programs each year and gave each program one of four overall ratings: “effective,” “moderately effective,” “adequate,” or “ineffective,” depending on the program’s scores on those questions. OMB gave a fifth rating of “results not demonstrated” when it decided that a program’s performance information, performance measures, or both were insufficient or inadequate. The summary assessments published with the President’s annual budget proposal include recommended improvements in program design, management, and assessment. For example, a summary of the review’s findings might be followed by the clause “the administration will conduct an independent, comprehensive evaluation of the program,” or “the Budget includes funds to conduct independent and quality evaluations,” both of which we interpreted as an OMB recommendation to the agency to conduct such an evaluation. In our previous analysis of the fiscal year 2004 PART reviews, we analyzed over 600 recommendations made for the 234 programs assessed and found that half of those recommended improvements in program assessment.
PART not only relies on previous program evaluation studies to answer many of the questions but also explicitly asks, in the strategic planning section, “Are independent evaluations of sufficient scope and quality conducted on a regular basis or as needed to support program improvements and evaluate effectiveness and relevance to the problem, interest, or need?” Program evaluations are systematic studies that assess how well a program is working, and they are individually tailored to address the client’s research question. Process (or implementation) evaluations assess the extent to which a program is operating as intended. Outcome evaluations assess the extent to which a program is achieving its outcome-oriented objectives; they focus on program outputs and outcomes but may also examine program processes to understand how outcomes are produced. OMB first applied PART to the fiscal year 2004 budget during 2002, and the assessments were published with the President’s budget in February 2003. In January 2004, we reported on OMB and agency experiences with PART in the fiscal year 2004 budget formulation process. We noted that PART had helped structure OMB’s use of performance information in its budget review and had stimulated agency interest in budget and performance integration. However, its effectiveness as a credible, objective assessment tool was challenged by inconsistency in OMB staff application of the guidance and limited availability of credible information on program results. Moreover, PART’s influence on agency and congressional decision making was hindered by failing to recognize differences in focus and issues of interest among the various parties involved in programmatic, policy, and budget decisions. We noted that PART’s potential value lay in recommended changes in program management and design but would require sustained attention if the anticipated benefits were to be achieved. 
To strengthen PART and its use, in our January 2004 report we recommended that OMB (1) centrally monitor and report on agency progress in implementing the PART recommendations; (2) improve PART guidance on determining the unit of analysis and defining program outcomes and “independent, quality evaluation”; (3) clarify expectations regarding agency allocation of scarce evaluation resources among programs; (4) target future reviews based on the relative priorities, costs, and risks associated with clusters of programs; (5) coordinate assessments to facilitate comparisons and trade-offs among related programs; (6) consult with congressional committees on performance issues and program areas for review; and (7) articulate an integrated, complementary relationship between GPRA and PART. Requesting that we follow up on the findings in our January 2004 report, you asked that we examine (1) OMB and agency perspectives on the effects of PART recommendations on agency operations and results, (2) OMB’s efforts at ensuring an integrated relationship between PART and GPRA, and (3) steps OMB has taken to involve Congress in the PART process. A companion report addresses all three objectives—including OMB’s outreach to Congress—with regard to all PART reviews. Because of the fundamental role that the availability of program evaluations plays in conducting PART assessments, we conducted an in-depth analysis of agencies’ responses to OMB recommendations that they conduct program evaluations. These recommendations were identified through the analysis of recommendations for our January 2004 review. This report focuses on agencies’ progress on those evaluations and the issues involved in obtaining them. For both analyses, we examined the same four agencies’ experiences with PART.
The four agencies were selected to represent a range of program types (such as research and regulatory programs), large and small agencies, and, for the purposes of this report, a large proportion of the OMB evaluation recommendations. All but two of the programs we reviewed had responded to some extent to OMB’s recommendations to conduct an evaluation; agencies did not plan evaluations of the other programs because they were canceled or restructured. However, after 2 years, only about half the programs had completed evaluations, partly because of lengthy study periods and partly because of some lengthy planning phases. The evaluations used a variety of study designs, reflecting differences in the programs and in the questions posed about program performance. About half of the programs we reviewed (11 of the 20) had completed an evaluation by June 2005—2 years after the fiscal year 2004 PART reviews and recommendations were published. Four evaluations were in progress, while 3 were still in the planning stage. Agencies did not plan an evaluation of 2 programs because those programs had been canceled or restructured. (See table 1.) Most of OMB’s evaluation recommendations asked for evaluation of the specific program reviewed, while some PART reviews at DOE and DOL asked the agencies to develop a plan for conducting multiple evaluations. At DOL, where two entire regulatory agencies had been assessed, these agencies had completed multiple studies. OMB gave DOE seven evaluation recommendations in its fiscal year 2004 PART reviews. Six were for research programs in basic science and nuclear energy and one was for its formula grant program to weatherize the homes of low-income families. Since one research program in the Office of Science had previously been evaluated by a panel of external experts called a committee of visitors, OMB explicitly recommended that the other research programs in that office also institute such a process by September 2003. 
In response, DOE completed evaluations of five of the six research programs, but it did not plan to evaluate the sixth, the Nuclear Energy Research Initiative, because it considered this not a stand-alone program but, rather, a source of funding for follow-up projects to other nuclear energy research programs. DOE revised this program’s objective and now authorizes funds for its projects through the other nuclear energy research programs; thus it is no longer considered a separately funded program to be evaluated. Finally, DOE officials indicated that they had only recently gained funding for planning the evaluation of the Weatherization Assistance program. (A bibliography of related agency evaluation reports appears in app. II.) OMB gave DOL five evaluation recommendations for fiscal year 2004. Two were for evaluations of specific DOL programs: grants to state and local agencies to provide employment-related training to low-income youths and administration of the Federal Employees Compensation Act regarding work-related injuries and illnesses. The three others were regulatory enforcement offices or agencies of DOL that were reviewed in their entirety: the Office of Federal Contract Compliance Programs, regarding equal employment opportunity; the Employee Benefits Security Administration; and the Occupational Safety and Health Administration (OSHA). OMB recommended that the last, which is a large regulatory agency, develop plans to evaluate the results of its regulatory and nonregulatory programs. The two DOL regulatory administrations each completed several evaluations of their enforcement activities by spring 2005, as did two of the three other DOL programs we reviewed. DOL is waiting to conduct an evaluation of the fifth—the youth employment program—until after its reauthorization because that is expected to result in an increased focus on out-of-school youths and a significant change in program activities. 
In addition, OSHA completed two regulatory “lookback” reviews—assessing the cumulative effects of a regulation over time—one in 2004 and another in 2005. Program officials indicated that they had developed a plan for conducting lookback reviews of employee benefit regulations beginning in fiscal year 2006. OMB recommended evaluations for four diverse HHS programs: (1) grants and technical assistance to states to increase childhood disease immunization, (2) grants to states to help recently arrived refugees find employment, (3) education loan repayment and scholarships for nurses in return for serving in facilities facing a nursing shortage, and (4) direct assistance in constructing sanitation facilities for homes for American Indians and Alaskan Natives. Evaluations of the two state grant programs were still in progress during our review, although an interim report on the immunization program was available. Reports from the two other program evaluations had recently been completed and were under departmental review. OMB recommended evaluations for four SBA programs: (1) support for existing Business Information Centers that provide information and access to technology for small businesses; (2) use of volunteer, experienced business executives to provide basic business counseling and training to current and prospective entrepreneurs; (3) Small Business Development Centers that provide business and management technical assistance to current and prospective entrepreneurs; and (4) the small business loan program that provides financing for fixed assets. OMB also asked all three counseling programs to develop outcome-oriented annual and long-term goals and measures. SBA is conducting customer surveys, has recently initiated a comprehensive evaluation of one of its counseling programs, and is planning one for the other in fiscal year 2006. Another evaluation has begun to compare the costs, benefits, and potential duplication of its business loan programs. 
SBA planned no evaluation of the Business Information Centers program because the program was canceled, partly as a result of the PART review and an internal cost allocation study. In reassessing the need for the program, SBA decided that because of the increase in commercially available office supplies and services and the accessibility of personal computers over the years, such a program no longer needed federal government support. Because evaluations are designed around programs and what they aim to achieve, the form of the evaluations reflected differences in program structure and anticipated outcomes. The evaluations were typically multipurpose, including questions about results as well as the agency processes that managers control in order to achieve those results, and designed to respond to OMB and yield actionable steps that programs could take to improve results. The Nursing Education Loan Repayment and Scholarship programs aim to increase the recruitment and retention of professional nurses by providing financial incentives in exchange for service in health care facilities that are experiencing a critical shortage of nurses. The ongoing evaluation of the two programs combined was shaped by the reporting requirements of the Nurse Reinvestment Act of 2002. The act requires HHS to submit an annual report to Congress on the administration and effect of the programs. Each yearly report is to include information such as the number of enrollees, scholarships, loan repayments and grant recipients, graduates, and recipient demographics to provide a clear description of program beneficiaries. Program beneficiaries are compared with the student, nurse applicant, and general populations to assess success in outreach. Information pertaining to beneficiaries’ service in health care facilities is important for determining whether program conditions and program goals have been met. 
The number of defaulters, default rate, amount of outstanding default funds, and reasons for default are reported for each year. These data as well as follow-up data on whether beneficiaries remain in targeted facilities after their term of commitment will be important in assessing the overall cost-benefit of the program. Subsequent data collection will establish trends and allow for a cost-benefit analysis in the future. The Indian Health Service Sanitation Facilities Construction program delivers construction and related program services to provide drinking water and waste disposal facilities for American Indian and Alaska Native homes, in close partnership with tribes. Among other issues, the evaluation examined key areas of service delivery, while the health benefits of clean water were assumed. Specifically, project needs identification and project portfolio management were evaluated to see how well construction efforts are prioritized and targeted to areas of greatest need, and whether facilities construction projects are competently designed, timely, and cost-effective. The completed evaluation recommended that the agency consider integrating its separate data systems into a single portfolio management system to represent all projects or, at least, to adopt standardized project management and financial tracking systems. The primary responsibility of DOL’s Office of Federal Contract Compliance Programs is to implement and enforce rules banning discrimination and establishing affirmative action requirements for federal contractors and subcontractors. Because of the time and expense involved in conducting compliance reviews and complaint investigations, the office is attempting to target establishments for review based in part on an analytic prediction that they will be found to discriminate. 
The focus of its effectiveness evaluation, therefore, was on identifying a targeting approach and measuring change in the rate of discrimination among federal contractors during the period of oversight. The logic for this choice of outcome measure was based on the expectation that overall rates of discrimination would decrease if the oversight programs were effective. Using data on the characteristics of establishments that had already been reviewed, evaluators used statistical procedures to estimate a model of the probability of discrimination. The coefficients from that model were then used to predict rates of discrimination among contractors who had not been reviewed and among noncontractors. The analysis showed that the office effectively targeted selected establishments for review, but there was no measurable effect on reducing employment discrimination in the federal contractor workforce overall. To improve the office’s effectiveness, the evaluators recommended that the office focus on establishments with the highest predicted rates of discrimination rather than employ its previous approach, targeting larger establishments that are likely to affect a greater number of workers. The DOE Office of Science used a peer review approach to evaluating its basic research programs, adapting the committee of visitors model that the National Science Foundation had developed. Because it is difficult to predict the findings of individual basic research projects, science programs have adapted the peer review model they use for merit selection of projects to evaluate their portfolios of completed (and ongoing) research. The Office of Science convenes panels of independent experts as external advisers to assess the agency’s processes for selecting and managing projects, the balance in the portfolio of projects awarded, and progress in advancing knowledge in the research area and in contributing to agency goals. 
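The predictive-targeting logic described for DOL's Office of Federal Contract Compliance Programs, in which a model is estimated from establishments already reviewed and its coefficients are then used to score unreviewed establishments so that reviews can focus on those with the highest predicted probability of discrimination, can be sketched roughly as follows. This is a minimal illustration only: the predictor names (`log_size`, `prior_violations`) and coefficient values are hypothetical and are not taken from the evaluation.

```python
import math

# Hypothetical logistic-model coefficients, as if estimated from data on
# establishments that have already been reviewed. Names and values are illustrative.
COEFFICIENTS = {"intercept": -2.0, "log_size": 0.4, "prior_violations": 0.9}

def predicted_probability(establishment):
    """Predicted probability that a review would find discrimination."""
    z = (COEFFICIENTS["intercept"]
         + COEFFICIENTS["log_size"] * establishment["log_size"]
         + COEFFICIENTS["prior_violations"] * establishment["prior_violations"])
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

def target_for_review(unreviewed, k):
    """Rank unreviewed establishments by predicted probability; return the top k IDs."""
    ranked = sorted(unreviewed, key=predicted_probability, reverse=True)
    return [e["id"] for e in ranked[:k]]

# Example: B has prior violations, so it outranks the larger establishment A.
establishments = [
    {"id": "A", "log_size": 5.0, "prior_violations": 0},
    {"id": "B", "log_size": 3.0, "prior_violations": 2},
]
print(target_for_review(establishments, 1))  # prints ['B']
```

The ranking step captures the evaluators' recommendation: select establishments with the highest predicted rates of discrimination rather than simply the largest ones.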
Panel reviews generally found these programs to be valuable and reasonably well-managed and recommended various management improvements such as standardizing and automating documentation of the proposal review process, adopting program-level strategic planning, and increasing staffing or travel funds to increase grantee oversight. OSHA, pursuant to section 610 of the Regulatory Flexibility Act and section 5 of Executive Order 12866, must conduct lookback studies on OSHA standards, considering public comments about rules, the continued need for them, their economic impacts, complexity, and whether there is overlap, duplication, or conflict with other regulations. OSHA recently concluded a lookback review on its Ethylene Oxide standard and issued a final report on another lookback review that examined the Presence Sensing Device Initiation standard for mechanical power presses. A press equipped with a sensing device initiates a press cycle if it senses that the danger zone is empty, and if something should enter the zone, the device stops the press. Accidents with mechanical presses result in serious injuries and amputations to workers every year. In the sensing device lookback review, OSHA examined the continued need for the rule, its complexity, complaints levied against the rule, overlap or duplication with other rules, and the degree to which technology, economic conditions, or other factors have changed in the area affected by the rule. Typically, once a standard is selected for a lookback review, the agency gathers information on experience with the standard from persons affected by the rule and from the general public through an announcement in the Federal Register. In addition, available health, safety, economic, statistical, and feasibility data are reviewed, and a determination is made about any contextual changes that warrant consideration. 
In conducting such reviews, OSHA determines whether the standards should be maintained without change, rescinded, or modified. OSHA found that there was a continued need for the rule but that to achieve the expected benefits of improved worker safety and employer productivity, the rule needed to be changed. Although the technology for sensing device systems had not changed since their adoption in 1988, the technology for controlling mechanical presses had changed considerably, with press operation now often controlled by computers, introducing hazards that were not addressed initially by the standard. Agency officials described two basic barriers to completing the evaluations that OMB recommended: obtaining valid measures of program outcomes to assess effectiveness and obtaining the financial resources to conduct independent evaluations. Although most of the program officials claimed that they had wanted to conduct such evaluations anyway, they noted that the visibility of an OMB recommendation brought evaluation to the attention of their senior management, and sometimes evaluation funds, so that the evaluations got done. Indeed, in response to the PART reviews and recommendations, two of the agencies initiated strong, centrally led efforts to build their evaluation capacity and prioritize evaluation spending. To evaluate program effectiveness, agencies needed to identify appropriate measures of the outcomes they intended to achieve and credible data sources for those measures. However, as noted in our previous report, many programs lacked these and needed to develop new outcome-oriented performance measures in order to conduct evaluations. Agency officials identified a variety of conceptual and technical barriers to measuring program outcomes similar to those previously reported as difficulties in implementing performance reporting under GPRA. 
SBA officials acknowledged that before the PART reviews, they generally defined their programs’ performance in terms of outputs, such as number of clients counseled, rather than in outcomes, such as gains in small business revenue or employment. SBA revised its strategic plan in fall 2003 and worked with its program partners to develop common definitions across its counseling programs, such as who is the client or what constitutes a counseling session or training. Since SBA had also had limited experience with program evaluation, it contracted for assistance in designing evaluations of the economic impact of its programs. DOL had difficulty conceptualizing the outcomes of regulations in monetary terms to produce the cost-benefit analyses that PART (and the Regulatory Flexibility Act) asks of regulatory programs. For instance, OSHA has historically been wary of the controversy involved in quantifying the value of a human life when calculating cost-benefit ratios for worker health and safety regulations. OSHA officials explained that the Assistant Secretary had helped to mitigate such a controversy by issuing a July 2003 memorandum that directed OSHA staff to identify costs, benefits, net benefits, and the impact of economically significant regulations and their significant alternatives, as well as discuss significant nonmonetized costs and benefits. DOL officials noted that designing a cumulative assessment of the net benefits of employer reporting requirements for pension and health benefit plans was complicated. For example, a primary benefit of reporting is to aid the agency’s ability to enforce other benefit plan rules and thereby protect or regain employees’ benefits. 
They also pointed out that although health and safety regulations are mandatory, employers are not required to offer benefit plans, so a potential cost of regulators’ overreaching in their enforcement actions could be discouraging employers from offering these pension and health benefits altogether. DOE officials acknowledged that they could not continue to use state evaluations to update the national estimates of energy savings from a comprehensive evaluation of weatherization assistance conducted a decade ago. They recognized that assumptions from the original national evaluation could no longer be supported and that a new, comprehensive national evaluation design was needed. They noted new hurdles to measuring reductions in home heating costs since the previous evaluation: (1) monthly electric bills typically do not isolate how much is spent on heating compared with other needs, such as lighting, and (2) the increased privatization of the utility industry is expected to reduce government access to the utilities’ data on individual household energy use. Other barriers were more operational, such as the features of a program’s data system that precluded drawing the desired evaluative conclusions. For example, regulations need to be in place for a period of years to provide data adequate for seeing effects. HHS officials noted that their databases did not include the patient outcome measures OMB asked for and that they would need to purchase a longitudinal study to capture those data. They also noted that variation in the form of states’ refugee assistance programs and data systems, as well as regional variation in refugees’ needs, made it difficult to conduct a national evaluation. Their evaluation especially relied on the cooperation of state program coordinators. 
DOL officials pointed out that the federal employees’ compensation program’s data system was developed for employee and management needs and did not lend itself to making comparisons with the very different state employee compensation programs. Evaluation generally competes for resources with other program and department activities. Contracts for external program evaluations that collect and analyze new data can be expensive. In a time of tight resources, program managers may be unwilling to reallocate resources to evaluation. Agencies responded to such limitations by delaying evaluations or cutting back on an evaluation’s scope. Some agency officials thought that evaluations should not be conducted for all programs but should be targeted instead to areas of uncertainty. HHS’s Office of Refugee Resettlement—which was allotted funds especially for its evaluation—is spending $2 million to evaluate its refugee assistance program over 2 years. Costs are driven primarily by the collection of data through surveys, interviews, and focus groups and the need for interpreters for many different languages. Given the size and scope of the program, even with $2 million, program officials would have liked to have more time and money to increase the coverage of their national program beyond the three sites they had selected. DOL program officials explained that although they had had a large program evaluation organization two decades ago, the agency downsized in 1991, the office was eliminated, and now they must search for program evaluation dollars. DOL spent $400,000 for an 18-month evaluation of the Federal Employees Compensation Act program, which relied heavily on program administrative data, but staff also spent a large amount of time educating and monitoring the contractor. Program officials were disappointed with the lack of depth in the evaluation. 
They believed that their evaluation contractor did not have enough time to plan and conduct a systematic survey, and consequently, their selective interview data were less useful than they would have liked. DOE program officials indicated that they have been discussing an evaluation of Weatherization Assistance since spring 2003, but not having identified funds for an evaluation, they have not been able to develop a formal evaluation plan. They had no budget line item for evaluation, so they requested one in their fiscal year 2005 appropriations. Although there was congressional interest in an evaluation, additional funds were not provided in fiscal year 2005. DOE instructed program officials to draw money for evaluation from the 10 percent of the program’s funds that are set aside for training and technical assistance, increase the federal share from 1.5 percent to 2 percent, and reduce the states’ share to 8 percent. Program officials indicated that the amount from the technical assistance account would cover only planning and initial implementation activities, not the bulk of the evaluation itself. And they were concerned about displacing existing training, so they were still looking for an evaluation funding commitment. Agency officials also questioned PART’s assumption that all programs should have evaluations. SBA officials indicated that some agency appropriations generally precluded SBA’s spending program funds on any but specifically identified program activities. Thus, evaluations had to be funded from agency administrative funds. They thought that it was unreasonable to ask a small agency to finance several program evaluations, as might be expected of a larger agency. SBA dealt with this by conducting evaluations sequentially as funds became available. DOL program officials also thought that spending several hundred thousand dollars for a comprehensive evaluation study was a reasonable investment for a $2.5 billion program but not for small programs. 
They did not believe that all programs need to be evaluated—especially in a time of budget deficits. They recommended that OMB and agencies should “pick their shots” and should be more focused in choosing evaluations to conduct. They suggested a risk-based approach, giving higher priority to evaluating programs for which costs are substantial and effectiveness uncertain. Most of the agency officials we interviewed declared that they valued evaluation. For example, HHS and DOE officials described evaluation as part of their culture. Many said they had already been planning to do something similar to the evaluation that OMB had recommended. In a couple of cases, OMB’s recommendation appeared to have been shaped by planned or ongoing activities. However, officials in all four agencies indicated that the visibility of a PART recommendation and associated OMB pressure brought management attention, and sometimes funds, to getting the evaluations done. HHS departmental officials said that the agency was a federal leader in terms of evaluation capacity, and that they spend approximately $2.6 billion a year on agency-initiated research, demonstrations, and evaluation. They stated that it is part of their culture to conduct evaluations—because their program portfolio is based in the physical and social sciences. DOE officials said that they embraced the PART process because, as an agency with a significant investment in advancing science and technology, DOE had already been using similar processes, such as peer review, to evaluate its programs. DOE officials noted that DOE had developed a basic evaluation mechanism—independent peer review—that all its research programs undertake. Officials in the Office of Energy Efficiency and Renewable Energy developed a corporate peer review guide summarizing best practices in this field and considered their peer review process as “state of the art,” as it is used as a model nationally and globally. 
In other cases, agency or congressional interest in evaluation seemed to set the stage for OMB evaluation recommendations. For example, while OMB was reviewing the Nursing Education Loan Repayment program, the Nurse Reinvestment Act of 2002 was enacted, expanding the program and instituting a requirement for annual reports after the first 18 months. The reports were to include data on the numbers of loan applicants and enrollees, the types of facilities they served in, and the default rates on their loans and service commitments, and an evaluation of the program’s overall costs and benefits. OMB then recommended that the agency evaluate the program’s impact, develop outcome measures, and begin to track performance against newly adopted benchmarks. To respond to OMB’s request for a long-term outcome measure, the agency agreed to also collect information on how long beyond their service commitment nurses stay in service in critical shortage facilities. In another example previously discussed, the DOE Office of Science had already initiated committee of visitors reviews for its Basic Energy Sciences program, which OMB then recommended for other research programs in that office. The PART and President’s Management Agenda pressed agencies to report progress on the recommendations. OMB published the cumulative set of completed PART review summaries, including the recommendations, in the President’s budget proposals for fiscal years 2004 through 2006. In the fiscal year 2006 budget, OMB reported on the status of its previous recommendations in the PART summaries, indicating whether action had been taken or completed. OMB also asked agencies to report on their progress in implementing PART recommendations to provide input into its quarterly scorecards on agencies’ progress in implementing the President’s Management Agenda initiatives. 
In addition, OMB precluded agencies from being scored “green” on Budget and Performance Integration if more than 10 percent of their programs were rated “results not demonstrated” 2 years in a row. DOE and DOL program officials reported being asked to update the status of the recommendations every 2 to 3 months. HHS officials noted that since fall 2004, they have been reporting on PART recommendations to OMB twice a year, tracking approximately 100 PART recommendations (with about 200 separate milestones) for the 62 programs reviewed for fiscal years 2004 through 2006. Most of the officials we interviewed believed that because of PART and the President’s Management Agenda, their agencies were paying greater attention to program results and evaluation. Officials at DOL noted that the department spends much time and effort making sure it scores green on the next President’s Management Agenda assessment; for example, the department’s management review board, chaired by Labor’s Assistant Secretary for Management and Administration, discusses these issues monthly. In addition, DOL’s Center for Program Planning and Results reviews programs’ progress on OMB’s recommendations, scores programs internally on the Budget and Performance Integration scorecard, and provides agencies with training and preparation before their PART reviews. The SBA Administrator initiated a series of steps after August 2003 to increase the agency’s focus on achieving results. SBA rewrote its strategic plan to focus on a limited number of strategic goals and integrated its strategic plan, annual performance plan, and performance report. The agency formed a central Office of Analysis, Planning, and Accountability to help each program office develop results-oriented performance measures and conduct program assessments. 
Although HHS officials said that the department had invested in evaluation long before the PART reviews, Indian Health Service program officials indicated that they had not planned an evaluation of their sanitation facilities program before the PART review. However, they thought it was a good idea and said that the recommendation brought their lack of a recent evaluation to HHS’s attention, making it easier to justify efforts to quantify their program’s benefits. SBA and DOL responded to demands for more performance information by centrally coordinating their assessment activities, helping to address evaluation’s measurement and funding challenges. Centralization helped the agencies to leverage their evaluation expertise throughout the agency and helped them prioritize spending on the evaluations they considered most important. SBA program offices had little experience with outcome measurement and evaluation before the 2002 PART reviews. The central planning office was formed to help the program offices develop outcome measures linked to the agency’s strategic goals and collect and validate their performance data. The office also conducts an annual staff activity survey to support cost allocation across programs, a key step toward performance budgeting. This office took advantage of the similarity in outcome goals across SBA’s programs and the evaluation methodology developed for the counseling programs to contract for the development of a standard methodology for assessing other SBA programs’ economic impacts on small businesses. The central office is also funding the subsequent evaluations. For a small agency, this type of coordination can result in important savings in contract resources as well as staff time. DOL, much larger than SBA, has measurement and evaluation experience, but capacity had declined over time. 
DOL established the Center for Program Planning and Results in 2001 to provide leadership, policy advice, and technical assistance to GPRA-related strategic and performance planning. The center was expanded in fiscal year 2003 to respond to the President’s Management Agenda and manage the PART process. With a budget of $5 million a year, the center solicits and selects evaluation proposals focusing on program effectiveness submitted by DOL’s component agencies, funds the studies, and helps oversee the external contractors. The center’s officials claimed that the Secretary’s and Assistant Secretary’s support for evaluation, combined with pressure from OMB, has led to increased interest by the component agencies in evaluation, resulting in $6 million to $7 million in proposals competing for $5 million in evaluation funds. Some DOL agencies retained their evaluation expertise and design, fund, and oversee their own evaluations. In addition to helping program offices develop research questions and evaluation designs, the center helps develop agency evaluation capacity by holding “Vendor Days,” when evaluation contractors are invited to exhibit for agency staff the specialized design, data collection, and analysis skills that could inform future studies. Because the OMB evaluation recommendations were fairly general, agencies had flexibility in interpreting the information OMB expected and the evaluations to fund. Some program managers disagreed with OMB on the scope and purpose of their evaluations, their quality, and the usefulness of evaluations conducted by independent third parties. Program managers concerned about an increased focus on process said that they were more interested in learning how to improve program performance than in meeting an OMB checklist. Since a few programs did not discuss their evaluation plans with OMB, it is not certain whether OMB will accept their ongoing evaluations. Agencies had a fair amount of flexibility to design their evaluations. 
Except for the recommendations to the DOE Office of Science to conduct committee of visitors reviews, OMB’s evaluation recommendations were fairly general, typically telling agencies to conduct an independent evaluation of a program’s effectiveness. Agencies reported little guidance from OMB on how to conduct these evaluations, beyond the PART written guidance and the rationale the examiner provided for not accepting their previous evaluations or measures of program outcomes. They said that follow-up on previous PART recommendations was generally limited to providing responses to the OMB reporting template, unless OMB conducted a second formal PART review. Agencies also had flexibility to determine the timing of their evaluations. Agency officials reported that OMB did not prioritize its recommendations within or among programs. Moreover, because evaluation resources were limited, DOL and SBA officials reported that they had to choose which evaluations to conduct first. The recommendations for the two DOL regulatory agencies explicitly acknowledged their need to balance responsibility for several programs. OMB asked these agencies to develop plans to evaluate their programs or expand existing efforts for more comprehensive and regular evaluation. In the reviews of recommendation status for the fiscal year 2006 budget, OMB credited both agencies with having conducted one or more program reviews and planning others. 
Agencies were free to choose which programs to evaluate but were likely to be influenced by the potential effect of PART reassessments on their President’s Management Agenda scores and, thus, to attempt to reduce the number of programs rated “results not demonstrated.” Research and development programs were held to a somewhat higher standard than other programs were, since their agencies could not be scored “green” on the separate R&D Investment Criteria Initiative if less than 75 percent of their programs received a score of “moderately effective” or better. DOE officials noted that their Office of Energy Efficiency and Renewable Energy now requires programs to outline their plans for evaluations in their multiyear plans. OMB and the agencies significantly differed in defining evaluation scope and purpose. Program officials were frustrated by OMB’s not accepting their prior evaluations of program effectiveness in the PART review. Some of the difficulties seemed to derive from OMB expecting to find, in the agencies’ external evaluation studies, comprehensive judgments about program design, management, and effectiveness, like the judgments made in the OMB PART assessments. PART’s criteria for judging the adequacy of agency evaluations are complex and may have created some tension as to the importance of one dimension over another. 
For example, question 2.6 read: “Are independent evaluations of sufficient scope and quality conducted on a regular basis or as needed to support program improvements and evaluate effectiveness and relevance to the problem, interest, or need?” OMB changed the wording of the question to help clarify its meaning and added the reference to “relevance.” However, while OMB’s revised guidance for this question defines quality, scope, and independence, it does not address the assessment of program “relevance.” Specifically, sufficient scope is defined as whether the evaluation focuses on achievement of performance targets and the cause and effect relationship between the program and target—i.e., program effectiveness. This is different from assessing the relevance—i.e., appropriateness—of the program design to the problem or need. Instead, questions in section 1 ask whether the design is free of major flaws and effectively targeted to its purpose. Another potential contributor to the differences between OMB and agency expectations for program evaluations is that evaluations designed for internal audiences often have a different focus than evaluations designed for external audiences. Evaluations that agencies initiate typically aim to identify how to improve the allocation of program resources or the effectiveness of program activities. Studies requested by program authorizing or oversight bodies are more likely to address external accountability—to judge whether the program is properly designed or is solving an important problem. HHS officials reported differences with OMB over the acceptability of HHS evaluations. HHS officials were particularly concerned that OMB sometimes disregarded their studies and focused exclusively on OMB’s own assessments. 
One program official complained that OMB staff did not adequately explain why the program’s survey of refugees’ economic adjustment did not qualify as an “independent, quality evaluation,” although an experienced, independent contractor conducted the interviews and analysis. In the published PART review, OMB acknowledged that the program surveyed refugees to measure outcomes and monitored grantees on-site to identify strategies for improving performance. In our subsequent interview, OMB staff explained that the outcome data did not show the mechanism by which the program achieved these outcomes, and grantee monitoring did not substitute for obtaining an external evaluation, or judgment, of the program’s effectiveness. Other HHS officials said that OMB had been consistent in applying the standards for independent evaluation, but these standards were set extremely high. In reviewing the vaccination program, OMB did not accept the several research and evaluation studies offered, since they did not meet all key dimensions of “scope.” OMB acknowledged that the program had conducted several management evaluations to see whether the program could be improved but found their coverage narrow and concluded “there have previously been no comprehensive evaluations looking at how well the program is structured/managed to achieve its overall goals.” OMB also did not accept an external Institute of Medicine evaluation of how the government could improve its ability to increase immunization rates because the evaluation report had not looked at the effectiveness of the individual federal vaccine programs or how this program complemented the other related programs. However, in reviewing recommendation status, OMB credited the program with having contracted for a comprehensive evaluation that was focused on the operations, management, and structure of this specific vaccine program. 
In following up on the status of the evaluation recommendations, DOE Office of Science officials described much discussion with OMB examiners about what did or did not constitute a good committee of visitors review. Although OMB had revised and extended its guidance on what constituted quality in evaluation, program officials still found this guidance difficult to apply to research programs. They also acknowledged that their first committee of visitors reviews might have been more useful to the program than to OMB. OMB and agencies differed in identifying which evaluation methods were sufficiently rigorous to provide high-quality information on program effectiveness. OMB guidance encouraged the use of randomized controlled trials, or experiments, to obtain the most rigorous evidence of program impact but also acknowledged that these studies are not suitable or feasible for every program. However, as described above, without guidance on which—and when—alternative methods were appropriate, OMB and agency staff disagreed on whether specific evaluations were of acceptable quality. To help develop shared understandings and expectations, federal evaluation officials and OMB staff held several discussions on how to assess evaluation quality according to the type of program being evaluated. When external factors such as economic or environmental conditions are known to influence a program’s outcomes, an impact evaluation attempts to measure the program’s net effect by comparing outcomes with an estimate of what would have occurred in the absence of the program intervention. A number of methodologies are available to estimate program impact, including experimental and quasi-experimental designs. Experimental designs compare the outcomes for groups that were randomly assigned either to the program or to a nonparticipating control group prior to the intervention. 
The difference in these groups’ outcomes is believed to represent the program’s impact, assuming that random assignment has controlled for any other systematic difference between the groups that could account for any observed difference in outcomes. Quasi-experimental designs compare outcomes for program participants with those of a comparison group not formed through random assignment, or with participants’ experience prior to the program. Systematic selection of matching cases or statistical analysis is used to eliminate any key differences in characteristics or experiences between the groups that might plausibly account for a difference in outcomes. Randomized experiments are best suited to studying clearly defined interventions that can be standardized and controlled, that are limited in availability, and for which random assignment of participants and nonparticipants is deemed feasible and ethical. Quasi-experimental designs are also best suited to clearly defined, standardized interventions with limited availability, and where one can measure, and thus control for, key plausible alternative explanations for observed outcomes. In mature full-coverage programs where comparison groups cannot be obtained, program effects may be estimated through systematic observation of targeted measures under specially selected conditions designed to eliminate plausible alternative explanations for observed outcomes. Following our January 2004 report recommendation that OMB better define an “independent, quality evaluation,” OMB revised and expanded its guidance on evaluation quality for the fiscal year 2006 PART reviews. The guidance encouraged the use of randomized controlled trials as particularly well suited to measuring program impacts but acknowledged that such studies are not suitable or feasible for every program, so it recommended that a variety of methods be considered. 
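The difference-in-means logic behind experimental impact estimates can be illustrated with a brief sketch. The outcome data below are invented for illustration only; they do not represent any program discussed in this report, and the variable names are hypothetical.

```python
import random

random.seed(42)

# Hypothetical post-program outcomes (e.g., earnings) for two groups
# formed by random assignment prior to the intervention; all values
# are simulated, not actual program data.
treatment = [random.gauss(105, 10) for _ in range(500)]
control = [random.gauss(100, 10) for _ in range(500)]

def mean(values):
    return sum(values) / len(values)

# Under random assignment, the groups should differ systematically only
# in exposure to the program, so the difference in mean outcomes serves
# as an estimate of the program's net impact.
impact = mean(treatment) - mean(control)
print(f"Estimated program impact: {impact:.2f}")
```

A quasi-experimental study follows the same arithmetic, but because the comparison group is not formed by random assignment, the analyst must first match cases or adjust statistically for characteristics that could otherwise explain the observed difference.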
OMB also formed an Interagency Program Evaluation Working Group in the summer of 2004 to provide assistance on evaluation methods and resources to agencies undergoing a PART review that discussed this guidance extensively. Evaluation officials from several federal agencies expressed concern that the OMB guidance materials defined the range of rigorous evaluation designs too narrowly. In the spring of 2005, representatives from several federal agencies participated in presentations about program evaluation purposes and methods with OMB examiners. They outlined the types of evaluation approaches they considered best suited for various program types and questions (see table 2). However, OMB did not substantively revise its guidance on evaluation quality for the fiscal year 2007 reviews beyond recommending that “agencies and OMB should consult evaluation experts, in-house and/or external, as appropriate, when choosing or vetting rigorous evaluations.” A related source of tension between OMB and agency evaluation interests was the importance of an evaluation’s independence. PART guidance stressed that for evaluations to be independent, nonbiased parties with no conflict of interest, for example, GAO or an Inspector General, should conduct them. OMB subsequently revised the guidance to allow evaluations to be considered independent if the program contracted them out to a third party or they were carried out by an agency’s program evaluation office. However, disagreements continued on the value and importance of this criterion. HHS officials reported variation among examiners in whether their evaluations were considered independent. Two programs objected to OMB examiners’ claims that an evaluation was not independent if the agency paid for it. OMB changed the fiscal year 2005 PART guidance to recognize evaluations contracted out to third parties and agency program evaluation offices as possibly being sufficiently independent, subject to examination case by case. 
But HHS officials claimed that they were still having issues with the independence standard in the fiscal year 2006 reviews and that OMB’s guidance was not consistently followed from one examiner to the next. DOL program officials stated that using an external evaluator who was not familiar with the program resulted in an evaluation that was not very useful to them. In part, this was because program staff were burdened with educating the evaluator. But more important, they claimed that the contractor designed the scope of the work to the broad questions of PART (such as questions on program mission) rather than focusing on the results questions the program officials wanted information on. In combination, this led to a relatively superficial program review, in their view, that provided the external, independent review OMB wanted but not the insights the program managers wanted. In reviewing the status of its PART recommendations, OMB did not accept advisory committee reviews for two research programs that DOE offered in response because OMB did not perceive the reviews as sufficiently independent. These two program reviews involved standing advisory committees of approximately 50 people who review the programs every 3 years. The OMB examiner believed that the committee was not truly independent of the agency. DOE program officials objected, noting the committee’s strong criticisms of the program, but have reluctantly agreed to plan for an external review by the National Academies. Program officials expressed concern that because evaluators from the National Academies may not be sufficiently familiar with their program and its context, such reviews may not address questions of interest to them about program performance. HHS program officials were also concerned about the usefulness of an evaluation of the sanitation facilities program if it was conducted by a university-based team inexperienced with the program. 
The agency deliberately guarded against this potential weakness by including two former agency officials (one an engineer) on the evaluation team and by taking considerable effort with the team to define the evaluation questions. Agencies’ freedom to design their evaluations, combined with differences in expectations between agencies and OMB, raises the strong possibility that the evaluations that agencies conduct may not provide OMB with the information it wants. Most of the agency officials we interviewed said that they had discussed their evaluation plans with their OMB examiners, often as part of their data collection review process. SBA and DOL, in particular, appeared to have had extensive discussions with their OMB examiners. However, a few programs have not discussed their plans with OMB, presumably on the assumption that they will meet OMB’s requirements by following its written guidance. Officials in SBA’s and DOL’s central planning offices described extensive discussions of their evaluation plans with their OMB examiners. SBA vetted the evaluation design for SBA’s counseling programs with OMB in advance, as well as the questionnaire used to assess client needs. DOL planning and evaluation officials noted that they had worked with OMB examiners to moderate their expectations for agencies’ evaluations. They said that OMB understands their “real world” financial constraints and is allowing them to “chip away” at their outcome measurement issues and not conduct net impact evaluations in program areas where they do not have adequate funds for this type of evaluation. HHS program officials were concerned about whether OMB will accept their ongoing evaluation of the immunization program when they receive their next PART review. The evaluation recommendation was general, so they based their design on the fiscal year 2004 criteria and on the aim of providing information useful to the program. 
However, the officials had heard that the fiscal year 2007 evaluation quality criteria were more rigid than those previously used, so they were concerned about whether the program will meet OMB’s evaluation criteria when it is reviewed again. They said they would have liked OMB to consider the program’s evaluation progress and findings so far and to give them input on whether the evaluation would meet the current criteria. OMB officials denied that the PART criteria for evaluation quality had changed much in the past two years. They also expected, from their review of the design, that this new evaluation would meet current PART criteria, assuming it was carried out as planned. Several program officials expressed the view that in designing their evaluations, they were more concerned with learning how to improve their programs than with meeting an OMB checklist. Program officials complained that OMB’s follow-up on whether evaluations were being planned sent the message that OMB was more interested in checking off boxes than in having a serious discussion about achieving results. When one program official was asked for the program’s new evaluation plan, he answered “Who needs a plan? I’ve got an evaluation.” DOE program officials indicated that they believe a comprehensive evaluation of Weatherization Assistance should include all the questions that state, regional, and local officials would like to ask and not just establish a new national energy savings estimate. Those questions—also of interest to DOE—include: Which weatherization treatments correlate with energy savings? Should they use their own crews or hire contractors? What are the nonenergy benefits, such as improved air quality or employment impacts? Program officials indicated that they had conducted a great deal of planning and discussion with their stakeholders over the past 5 to 6 months and expect to conduct five or six studies to meet those needs. 
The PART review process has stimulated agencies to increase their evaluation capacity and available information on program results. The systematic examination of the array of evidence available on program performance has helped illuminate gaps and has helped focus evaluation questions. The public visibility of the results of the PART reviews has brought management attention to the development of agency evaluation capacity. Evaluations are useful to specific decision makers to the degree that the evaluations are credible and address their information needs. Agencies are likely to design evaluations to meet their own needs—that is, in-depth analyses that inform program improvement. If OMB wants evaluations with a broader scope, such as information that helps determine a program’s relevance or value, it will need to take steps to shape both evaluation design and execution. Because agency evaluation resources tend to be limited, they are most usefully focused on illuminating important areas of uncertainty. While regular performance reporting is key to good program management and oversight, requiring all federal programs to conduct frequent evaluation studies is likely to result in many superficial reviews that will have little utility and that will overwhelm agency evaluation capacity. In light of our findings and conclusions in this report, we are making the following recommendations to OMB reiterating and expanding on recommendations in our previous report: OMB should encourage agencies to discuss their plans for program evaluations—especially those in response to an OMB recommendation— with OMB and with congressional and other program stakeholders to ensure that their findings will be timely, relevant, and credible and that they will be used to inform policy and management decisions. 
OMB should engage in dialogue with agencies and congressional stakeholders on a risk-based allocation of scarce evaluation resources among programs, based on size, importance, or uncertain effectiveness, and on the timing of such evaluations. OMB should continue to improve its PART guidance and training of examiners on evaluation to acknowledge a wide range of appropriate methods. We provided a draft of this report to OMB and the agencies for review and comment. OMB agreed that evaluation methodology should be appropriate to the size and nature of the program and that randomized controlled trials may not be valuable in all settings. It noted its intent to provide additional guidance in this area. OMB disagreed with the reference to the PART as a checklist. This view was not ours but the view of agency officials who expressed concern about the focus of the assessment process. OMB also provided a number of technical comments, which we incorporated as appropriate throughout the report. OMB’s comments appear in appendix III. We also received technical comments from DOE, DOL, and HHS that we incorporated where appropriate throughout the report. SBA had no comments. We are sending copies of this report to the Director of the Office of Management and Budget; the Secretaries of Energy, Labor, and Health and Human Services; the Administrator of the Small Business Administration; appropriate congressional committees; and other interested members of Congress. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-2700 or KingsburyN@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
Advanced Fuel Cycle Initiative: Nuclear Energy Research Advisory Committee (NERAC) Evaluation Subcommittee. Evaluation of DOE Nuclear Energy Programs. Washington, D.C.: Sept. 10, 2004. Advanced Scientific Computing Research Program: Advanced Scientific Computing Research. Committee of Visitors Report. Washington, D.C.: April 2004. Generation IV Nuclear Energy Systems Initiative: Nuclear Energy Research Advisory Committee (NERAC) Evaluation Subcommittee. Evaluation of DOE Nuclear Energy Programs. Washington, D.C.: Sept. 10, 2004. High Energy Physics Program: Committee of Visitors to the Office of High Energy Physics. Report to the High Energy Physics Advisory Panel. Washington, D.C.: Apr. 7, 2004. Nuclear Physics Program: Committee of Visitors. Report to the Nuclear Science Advisory Committee. Washington, D.C.: Department of Energy, Office of Science, Feb. 27, 2004. 317 Immunization Program: RTI International. Section 317 Grant Immunization Program Evaluation: Findings from Phase I. Draft progress report. Atlanta, Ga.: Centers for Disease Control and Prevention, January 2005. Indian Health Service Sanitation Facilities Program: Department of Health and Human Services, U.S. Public Health Service, Federal Occupational Health Service. Independent Evaluation Report Summary. Prepared for Indian Health Service Sanitation Facilities Construction Program, Rockville, Maryland. Seattle, Wash.: Mar. 8, 2005. Nursing Education Loan Repayment and Scholarship Program: Department of Health and Human Services, Health Resources and Services Administration, Bureau of Health Professions. HRSA Responds to the Nursing Shortage: Results from the 2003 Nursing Scholarship Program and the Nursing Education Loan Repayment Program: 2002–2003. First report to the United States Congress. Rockville, Md.: n.d. Employee Benefits Security Administration Reports: Mathematica Policy Research, Inc. Case Opening and Results Analysis (CORA) Fiscal Year 2002: Final Report. Washington, D.C.: Mar. 31, 2004. 
Royal, Dawn. U.S. Department of Labor, Employee Benefits Security Administration: Evaluation of EBSA Customer Service Programs Participant Assistance Program Customer Evaluation. Washington, D.C.: The Gallup Organization, February 2004. Royal, Dawn. U.S. Department of Labor, Employee Benefits Security Administration: Evaluation of EBSA Customer Service Programs Participant Assistance Mystery Shopper Evaluation. Washington, D.C.: The Gallup Organization, January 2004. Royal, Dawn. U.S. Department of Labor, Employee Benefits Security Administration: Evaluation of EBSA Customer Service Programs Participant Assistance Outreach Programs Evaluation. Washington, D.C.: The Gallup Organization, January 2004. Royal, Dawn. U.S. Department of Labor, Employee Benefits Security Administration: Evaluation of EBSA Customer Service Programs Participant Assistance Web Site Evaluation. Washington, D.C.: The Gallup Organization, January 2004. Federal Employees Compensation Act Program: ICF Consulting. Federal Employees Compensation Act (FECA): Program Effectiveness Study. Fairfax, Va.: U.S. Department of Labor, Office of Workers’ Compensation Programs, Mar. 31, 2004. Office of Federal Contract Compliance Programs: Westat. Evaluation of Office of Federal Contract Compliance Programs: Final Report. Rockville, Md.: December 2003. Occupational Safety and Health Administration Reports: ERG. Evaluation of OSHA’s Impact on Workplace Injuries and Illnesses in Manufacturing Using Establishment-Specific Targeting of Interventions. Final report. Lexington, Mass.: July 23, 2004. Marker, David and others. Evaluating OSHA’s National and Local Emphasis Programs. Draft Final Report for Quantitative Analysis of Emphasis Programs. Rockville, Md.: Westat, Dec. 24, 2003. OSHA, Directorate of Evaluation and Analysis. Regulatory Review of OSHA’s Presence Sensing Device Initiation (PSDI) Standard [29 CFR 1910.217(h)]. Washington, D.C.: May 2004. 
www.osha.gov/dcsp/compliance_assistance/lookback/psdi_final2004.html (Oct. 21, 2005). In addition to the contact named above, Stephanie Shipman, Assistant Director, and Valerie Caracelli made significant contributions to this report. Denise Fantone and Jacqueline Nowicki also made key contributions. Performance Budgeting: PART Focuses Attention on Program Performance, but More Can Be Done to Engage Congress. GAO-06-28. Washington, D.C.: Oct. 28, 2005. Managing for Results: Enhancing Agency Use of Performance Information for Managerial Decision Making. GAO-05-927. Washington, D.C.: Sept. 9, 2005. 21st Century Challenges: Performance Budgeting Could Help Promote Necessary Reexamination. GAO-05-709T. Washington, D.C.: June 14, 2005. Performance Measurement and Evaluation: Definitions and Relationships. GAO-05-739SP. Washington, D.C.: May 2005. Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, D.C.: Mar. 10, 2004. Performance Budgeting: Observations on the Use of OMB’s Program Assessment Rating Tool for the Fiscal Year 2004 Budget. GAO-04-174. Washington, D.C.: Jan. 30, 2004. Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity. GAO-03-454. Washington, D.C.: May 2, 2003. Program Evaluation: Strategies for Assessing How Information Dissemination Contributes to Agency Goals. GAO-02-923. Washington, D.C.: Sept. 30, 2002. Program Evaluation: Studies Helped Agencies Measure or Explain Program Performance. GAO/GGD-00-204. Washington, D.C.: Sept. 29, 2000. Performance Plans: Selected Approaches for Verification and Validation of Agency Performance Information. GAO/GGD-99-139. Washington, D.C.: July 30, 1999. Managing for Results: Measuring Program Results That Are Under Limited Federal Control. GAO/GGD-99-16. Washington, D.C.: Dec. 11, 1998.
The Office of Management and Budget (OMB) designed the Program Assessment Rating Tool (PART) as a diagnostic tool to draw on program performance and evaluation information for forming conclusions about program benefits and recommending adjustments to improve results. To assess progress in improving the evidence base for PART assessments, GAO was requested to examine (1) agencies' progress in responding to OMB's recommendations to evaluate programs, (2) factors facilitating or impeding agencies' progress, and (3) whether agencies' evaluations appear to be designed to yield the information on program results that OMB expects. GAO examined agency progress on 20 of the 40 evaluations OMB recommended in its PART reviews at four federal agencies: the Department of Energy, Department of Health and Human Services, Department of Labor, and Small Business Administration. About half the programs GAO reviewed had completed an evaluation in the 2 years since those PART reviews were published; 4 more were in progress and 3 were still being planned. Program restructuring canceled plans for the remaining 2 evaluations. Several agencies struggled to identify appropriate outcome measures and credible data sources before they could evaluate program effectiveness. Evaluation typically competed with other program activities for funds, so managers were often reluctant to reallocate funds to evaluation. Some agency officials thought that evaluations should be targeted to areas of policy significance or uncertainty. However, all four agencies indicated that the visibility of an OMB recommendation brought agency management attention--and sometimes funds--to get the evaluations done. Moreover, agencies met these challenges by coordinating their evaluation activities: leveraging their evaluation expertise and strategically prioritizing their evaluation resources toward the studies they considered most important. 
Because the OMB recommendations were fairly general, agencies had flexibility in interpreting the kind of information OMB expected. Some program managers disagreed with OMB on the purpose of their evaluations, their quality, and the usefulness of "independent" evaluations by third parties unfamiliar with their programs. Agency officials concerned about an increased focus on process said that they were more interested in learning how to improve program results than in meeting an OMB checklist. Since a few programs did not discuss their evaluation plans with OMB, it is not certain whether OMB will find their ongoing evaluations useful during the programs' next PART review. GAO concludes that the PART review process stimulated agencies to increase their evaluation capacity and available information on program results. Further, agencies are likely to design evaluations to meet their own needs--that is, in-depth analyses that inform program improvement. If OMB wants evaluations with a broader scope, such as information that helps determine a program's relevance or value, it will need to take steps to shape both evaluation design and execution. Finally, because agency evaluation resources tend to be limited, they are most usefully focused on important areas of uncertainty. Regular performance reporting is key to good management, but requiring all federal programs to conduct frequent evaluation studies is likely to result in superficial reviews of little utility and to overwhelm agency evaluation capacity.
Program evaluations are systematic studies that use research methods to address specific questions about program performance. Evaluation is closely related to performance measurement and reporting. Whereas performance measurement entails the ongoing monitoring and reporting of program progress toward preestablished goals, program evaluation typically assesses the achievement of a program’s objectives and other aspects of performance in the context in which the program operates. In particular, evaluations can be designed to better isolate the causal impact of programs from other external economic or environmental conditions in order to assess a program’s effectiveness. Thus, an evaluation study can provide a valuable supplement to ongoing performance reporting by measuring results that are too difficult or expensive to assess annually, explaining the reasons why performance goals were not met, or assessing whether one approach is more effective than another. Evaluation can be key in program planning, management, and oversight by providing feedback on both program design and execution to program managers, legislative and executive branch policy officials, and the public. In our 2013 survey of a stratified random sample of federal managers, we found that most federal managers reported lacking recent evaluations of their programs. Although only about a third had recent evaluations of their programs or projects, the majority of those who had evaluations reported that they contributed to understanding program performance, assessing program effectiveness or value, making changes to improve program management or performance, and sharing what works with others. Those who had evaluations cited most often a lack of resources as a barrier to implementing evaluation findings. Agency evaluators noted that it takes a number of studies rather than just one study to influence change in programs or policies. 
Experienced evaluators identified three strategies to facilitate evaluation influence: leadership support of evaluation, building a strong body of evidence, and engaging stakeholders throughout the evaluation process. Our previous literature review found that the key elements of national or organizational capacity to conduct and use evaluation in decision making include an enabling environment that has leadership support for using evidence in decision making; organizational resources to support the supply and use of credible evaluations; and the robust, transparent availability of evaluation results. Our 2014 survey of agency performance improvement officers (PIOs) on the presence of these elements in their agencies found uneven levels of evaluation expertise, organizational support within and outside the organization, and use across the government. About half the 24 agencies reported committing resources to obtain credible evaluation by establishing a central office responsible for evaluation, yet those agencies with centralized leadership reported greater evaluation coverage and use of the results in decision making. Only six of these agencies reported having stable funding or agency-wide evaluation plans. The GPRA Modernization Act of 2010 (GPRAMA) established an expectation that evidence would have a greater role in agency decision making. The act changed agency performance management roles, planning and review processes, and reporting to ensure that agencies use performance information in decision making and are held accountable for achieving results and improving government performance. The act required the 24 CFO Act agencies and OMB to establish agency priority goals and government-wide cross-agency priority goals, review progress on the goals quarterly, and report publicly on their progress and strategies to improve performance on a government performance website. In addition, GPRAMA, along with OMB guidance, established and defined performance management responsibilities for agency officials in key management roles. 
In particular, the PIO was given a central role in promoting the agencies’ use of evaluation and other evidence to improve program performance. The act charged the Performance Improvement Council, which includes PIOs from all 24 CFO Act agencies, to facilitate agencies’ exchange of successful practices and the development of tips and tools to strengthen agency performance management. OMB’s guidance implementing GPRAMA also directed agencies to conduct strategic reviews of annual progress toward each strategic objective in their strategic plans to inform agency strategic decision making, budget formulation, and preparation of annual performance plans and reports. Guided by the PIO, agencies are to consider a wide range of evidence (including research, evaluation, and performance indicators) in these reviews and identify areas where additional evaluations or analyses of performance data are needed. Further, GPRAMA is part of a government-wide focus on the crucial role of evidence for improving the effectiveness of federal programs. Since 2009, OMB has issued several memorandums urging efforts to strengthen the use of rigorous impact evaluation, designate a high-level official responsible for evaluation to develop and manage a research agenda, and demonstrate the use of evidence and evaluation in budget submissions, strategic plans, and performance plans. A 2013 OMB memorandum urged agencies to develop an evidence and innovation agenda to exploit existing administrative data to conduct low-cost experiments and implement outcome-focused grant designs and research clearinghouses to catalyze innovation and learning. OMB staff have also established several interagency workgroups to promote sharing evaluation expertise and have organized a series of workshops and interagency collaborations. 
For example, in 2016 we recommended that OMB establish a formal means for agencies to collaborate on tiered evidence grants, a new grant design in which funding is based on the level of evidence available on the effectiveness of the grantee’s service delivery model. OMB’s Evidence Team convened an interagency working group on tiered evidence grants that meets quarterly and established a website for the group to share resources. This team also co-chairs the Interagency Council on Evaluation Policy, a group of 10 agency evaluation offices that have collaborated on developing common policies and conducting workshops. The Trump Administration’s 2018 Budget proposal endorses a continued commitment to agencies building a portfolio of evidence on what works and how to improve results, investing in evidence infrastructure and capacity, and acting on a strong body of evidence to obtain results. In 2016, the Congress enacted and the President signed two pieces of legislation encouraging federal agency evaluation. The Evidence-Based Policymaking Commission Act of 2016 created the Commission and charged it with conducting a comprehensive study of the data inventory, data infrastructure, database security, and statistical protocols related to federal policy making and the agencies responsible for maintaining that data. This study was to include a determination of the optimal arrangement for which administrative data on federal programs and tax expenditures, survey data, and related statistical data series may be integrated and made available to facilitate program evaluation, continuous improvement, policy-relevant research, and cost-benefit analyses, while considering the privacy of personally identifiable information. 
In its September 2017 report, the Commission made 22 recommendations to improve secure, private, and confidential access by researchers to government data; modernize data privacy protections; implement a National Secure Data Service to manage secure record linkage and data access for evidence building; and strengthen federal agency evidence-building capacity. In particular, the Commission recommended that each federal department should identify a Chief Evaluation Officer and develop a multi-year learning agenda of high priority research and policy questions to address. The Foreign Aid Transparency and Accountability Act of 2016 (FATAA) requires the President to set guidelines for monitoring and evaluating federal foreign assistance by January 2018. The guidelines are to provide direction to the several federal agencies that administer foreign assistance on how to, for example, establish annual monitoring and evaluation plans, quality assurance procedures, and public dissemination of findings and lessons learned. In 2017, we surveyed federal managers, asking the same questions we asked in 2013 about managers’ access to evaluations and their use in decision making. Our 2017 survey found no change government-wide in managers’ access to evaluations since 2013. We estimate that 40 percent of federal managers reported having access to recent evaluations of their programs, while another 39 percent reported that they did not know if an evaluation had been conducted. About half the managers who had evaluations once again reported that they contributed to a great or very great extent to improving program management or performance and assessing program effectiveness (54 and 48 percent, respectively), while fewer reported that they contributed to allocating program resources or informing the public (35 and 22 percent, respectively). 
In 2017, an estimated 40 percent of federal managers reported that an evaluation had been completed within the past 5 years for any of the programs, operations, or projects they were involved in—statistically unchanged from the 2013 survey (37 percent). As in 2013, Senior Executive Service (SES) managers reported having evaluations statistically significantly more often than non-SES managers did (56 percent versus 39 percent in 2017; 54 percent versus 36 percent in 2013). This should be expected, since SES managers are likely to oversee a range of programs broader than that of non-SES managers, any one of whose programs might have been evaluated. An estimated 18 percent of managers reported not having any evaluations, while twice as many managers (an estimated 39 percent) reported that they did not know if an evaluation had been conducted. We believe this may reflect midlevel managers’ lack of familiarity with activities outside their programs. As in 2013, non-SES managers reported twice as often as SES managers that they did not know whether an evaluation had been performed (40 percent versus 19 percent in 2017; 41 percent versus 24 percent in 2013). And in other questions in our survey about GPRAMA provisions, non-SES managers reported significantly more often than SES managers that they were not familiar with cross-agency priority goals (42 versus 22 percent), one or more of their agency’s priority goals (21 versus 9 percent), or their agency’s quarterly performance reviews (61 versus 44 percent). Because these goals and their related reviews apply only to a subset of an agency’s goals, midlevel managers are less likely to be directly involved in them. Of the estimated 40 percent of managers who reported having evaluations, most (86 percent) reported that the agency itself primarily conducted or contracted for these evaluations. 
Many of these managers also reported that studies were completed by their Inspector General (49 percent), GAO (38 percent), or others such as the National Academy of Sciences and independent boards (17 percent). Because of variation in the responsibilities of managers, we cannot deduce from these results how many programs have been evaluated. However, even if additional evaluations had been conducted by others within or outside the agency, if managers were unaware of them, their results would not have been available for use. Because evaluations are designed to meet decision makers’ information needs, our survey asked federal managers who had recent evaluations to what extent those evaluations contributed to 11 different activities. For the 40 percent of managers who reported having evaluations, the results are very similar to the results of our 2013 survey: federal managers with evaluations credited them with contributing to a great or very great extent to assessing program effectiveness or implementing changes to improve program management or performance (48 and 54 percent, respectively), with no statistically significant changes since 2013. Managers reported less frequently that evaluations contributed greatly to allocating program resources or informing the public (figure 1). Consistent with the 2013 survey results, many managers who reported having evaluations reported that they contributed to a great or very great extent to direct efforts to improve programs such as: implementing changes to improve program management or performance (an estimated 54 percent in 2017), developing or revising performance goals (45 percent), sharing what works or other lessons learned with others (44 percent), and designing or supporting program reforms (39 percent). 
Evaluations vary in their scope and complexity and may address questions about program implementation as well as program effectiveness, so any resulting recommendations may point to simple corrections or broad re-thinking of a policy’s relevance or effectiveness. In a previous study, evaluators told us that it usually takes a number of studies, rather than just one, to influence change in programs or policies. As one evaluator put it, “the process by which evaluation influences change is iterative, messy, and complex. Policy changes do not occur as a direct result of an answer to an evaluation question; rather, a body of evaluation results, research, and other evidence influences policy and practice over time.” Moreover, designing and approving major program reforms typically involves a number of stakeholders outside the agency. Sharing what works with others is often the most direct action federal managers can take in decentralized programs in which they do not have direct control of program activities conducted by others at the state and local levels. To address this, federal agencies use a variety of methods to disseminate evaluation findings to local decision makers, such as establishing searchable evaluation clearinghouses online or disseminating findings through electronic listservs, through webinars, or at research and evaluation conferences. Fewer managers reported that evaluations contributed to streamlining programs to reduce duplicative activities to a great or very great extent (an estimated 27 percent). We have issued several reports outlining numerous areas of potential duplication, overlap, and fragmentation in federal programs. In these reviews, we identified the need for improved coordination and collaboration as well as better evaluation of these programs’ performance and results to help inform decisions about how to better manage these programs. 
Evaluation studies, if carefully designed, can address specific questions about the extent of fragmentation, overlap, and duplication as well as the individual and joint effectiveness of related programs. A broad review of evidence on related programs and the relationships among them can clarify the extent of and reveal opportunities for reducing or better managing fragmentation, overlap, and duplication. Managers who reported having access to evaluations reported that evaluations contributed to a great or very great extent to improving their understanding of program performance, such as by assessing program effectiveness, value, or worth (an estimated 48 percent); increasing understanding about the program or topic (48 percent); and supplementing or explaining performance results (44 percent). The primary purpose of program and policy evaluations is to provide systematic evidence on how well a program is working, whether it is operating as intended or achieving its intended results. They can be especially useful for helping improve program performance when they help identify for whom or under what conditions a program or approach is effective or ineffective or the reasons for change (or lack of change) in program performance. We have also reported that evaluations can help measure more complex or costly forms of performance than can be obtained routinely, such as by following up on high school students’ success in college. Similar to the 2013 survey results, fewer managers found that evaluations contributed to a great or very great extent to allocating resources within the program (35 percent), or supporting program budget requests (33 percent), than to improving program management or understanding (54 and 48 percent, respectively). This result is not surprising because many factors and priorities influence the budget process and need to be considered when deciding how to allocate limited resources among competing needs. 
Evaluators told us that high-stakes decisions such as funding are rarely made on the basis of a single study but, rather, on the basis of a body of evidence. Our 2014 survey of the PIOs at the 24 CFO Act agencies provided a mixed picture of evaluation use in allocating resources. Almost half (10) reported that their agencies had increased their use of evaluation in supporting budget requests and allocating resources within programs since 2010, while 5 PIOs either provided no opinion or reported little or no agency use of evaluation evidence to support budget or policy changes as part of their agency’s annual budget process. Similar to the 2013 survey results, fewer than half the federal managers who reported having evaluations also reported that evaluations contributed to informing the public about how programs are performing to a great or very great extent (an estimated 22 percent). In fact, similar to 2013, 20 percent of these managers reported no basis to judge whether these evaluations informed the public. As we noted in our 2013 report, federal managers’ use of evaluation appears to be oriented more internally than externally, and they may think that they are not in a position to know whether the public reads their reports. This does not mean that agencies do not make their evaluation reports public. In our 2014 survey of the 24 PIOs, half reported that their agencies posted evaluation reports in a searchable database on their websites, and a third reported disseminating evaluation reports by electronic mailing lists. Simply having program evaluations does not ensure that managers will use their results in management or policy making. As we noted above, our reviews of the research and policy literature have found that organizational and national capacity to conduct and use evaluation in decision making relies on leadership support for using evidence in decision making, organizational resources, and the availability of evaluation results. 
In addition, the nature of study results can influence evaluation use; mixed or inconclusive results may not suggest a clear path of action. To help understand the relative importance of these factors for evaluation use, our survey asked federal managers who had recent evaluations of any of their programs, operations, or projects to what extent specific factors regarding leadership support, policy context, staff capabilities, or evaluation characteristics hindered or facilitated using evaluations in their agencies. Managers’ views of which factors facilitate or hinder evaluation use have changed little since our 2013 survey. Managers who reported having evaluations once again most often reported that lack of resources to implement results was a barrier to evaluation use (an estimated 29 percent). They most often identified leadership support for evaluation (38 percent), and the evaluation’s relevance to decision makers (36 percent) as facilitators of evaluation use. While 19 percent perceived lack of staff knowledgeable in evaluation as a barrier, 35 percent reported that staff involvement facilitated use. As in our 2013 survey, many agency managers (35 percent) reported they had no basis to judge the influence of the presence or absence of congressional support for evaluation. In our 2014 survey, the PIOs generally identified the same factors facilitating evaluation use. Managers who reported having access to recent evaluations of their programs rated lack of resources to implement evaluation findings more often than any other potential barrier (see figure 2). They also reported modest concerns related to program context and agency capacity or support for evaluation as barriers to evaluation use more often than potential problems with study quality. 
For the estimated 40 percent of managers who reported having evaluations, the factor that they most often reported hindering the use of program evaluations to a great or very great extent was a lack of resources to implement evaluation findings (29 percent), which was also the most commonly reported factor in 2013 (33 percent, difference not statistically significant). This is not surprising given today’s constrained federal budget resources. In a climate of budget reductions, agencies are hard-pressed to argue for expanding or creating new programs. But agencies may also lack resources to undertake corrective action within existing programs, such as providing additional staff training or increasing oversight or enforcement efforts. Few federal managers who reported having evaluations cited factors related to agency and policy context as barriers that hinder the use of evaluations to a great or very great extent, such as: difficulty resolving differences of opinion among internal or external stakeholders (an estimated 18 percent), difficulty distinguishing between the results produced by the program and results caused by other factors (17 percent), and concern that the evaluation did not address issues of relevance to decision makers (15 percent). The wide range of stakeholders for federal programs can include the Congress, executive branch officials, nonfederal program partners (state and local agencies and community-based organizations), program beneficiaries, regulated entities, and the policy research community. Their perspectives on evaluation results may differ because of differences in their policy opinions or the complexity of evaluation findings. For programs with broad goals, stakeholders may differ in their perception of a program’s purpose and how program “success” should be defined. Disagreements about what to do next can occur when evaluation findings are not wholly positive or negative. 
Some federal managers who reported having evaluations also reported that difficulty distinguishing between results produced by the program and results caused by other factors was a great or very great barrier to evaluation use (18 percent). Across the federal government, programs aim to achieve outcomes that they do not control, that are influenced by other programs or external social, economic, or environmental factors, complicating the task of assessing program effectiveness. Typically, this challenge is met by conducting a net impact evaluation that compares what occurred with an estimate of what would have occurred in the absence of the program. However, these studies can be difficult to conduct, may have unexpected or contradictory findings, and need to be considered in the context of the larger body of evidence. Some managers (an estimated 15 percent) rated concern about the relevance of an evaluation’s issues to decision makers as hindering use to a great or very great extent, but three times as many managers (47 percent) reported that this was a small or insignificant barrier. Our previous literature review found that collaboration with program stakeholders in evaluation planning is a widely recognized element of evaluation capacity. We also described in a previous report how experienced agency evaluation offices reach out to key program stakeholders to identify important policy and program management questions, vet initial ideas with the evaluations’ intended users, and then scrutinize the proposed portfolio of studies for relevance and feasibility within available resources. The resulting evaluation agenda aims to provide timely, credible answers to important policy and program management questions. This can help ensure that their evaluations will be used effectively in management and legislative oversight. 
More recently, OMB, in the President’s proposed budget for fiscal year 2018, encouraged agencies to expand on this practice by adopting a “learning agenda” in which they collaboratively identify the critical questions that, when answered, will help their programs be more effective. A learning agenda would then identify the most appropriate tools and methods (for example, research, evaluation, analytics, or performance measures) to answer each question. OMB noted that the selected questions should reflect the priorities and needs of a wide array of stakeholders involved in program and policy decision making: Administration and agency officials, program offices and program partners, researchers, and the Congress. As we noted above, in 2017, the Commission on Evidence-Based Policymaking also recommended that departments create learning agendas. Two infrequently reported barriers related to agency evaluation resources at both the staff and executive levels, at about the same levels as in 2013, are: lack of staff knowledgeable about interpreting or analyzing program evaluation results (an estimated 19 percent rated great or very great extent), and lack of ongoing top executive commitment or support for using program evaluation to make program or funding decisions (17 percent). In contrast, almost half of agency managers who reported having evaluations reported that these two issues hindered evaluation use to a small extent or not at all (an estimated 46 and 47 percent, respectively). The research literature has clearly established leadership support for using evidence in decision making as important for evaluation use. However, it is likely that most managers who have evaluations also have at least some leadership support for evaluation. Our 2014 survey of 24 PIOs found that the 9 agencies that reported having independent, centralized evaluation authority reported greater evaluation use in management and policy making. 
Program evaluations—especially net impact evaluations that attempt to isolate a program’s effects from the effects of other factors—typically employ more complex analytic techniques than performance monitoring, so their results may be unfamiliar to staff without training in research and statistics. Evaluation expertise is needed to plan, conduct, or procure evaluation studies, but program staff also need sufficient knowledge to understand and translate evaluation results into steps toward program improvement. Our 2014 survey of 24 PIOs found that about half the agencies reported increases in hiring staff with research and evaluation expertise and in training staff in research and analysis skills since 2011, but 7 acknowledged additional training was needed to a great or very great extent in data management and statistical analysis, performance measurement and monitoring, and translating evaluation results into actionable recommendations. In both the 2013 and 2017 surveys, the agency managers with evaluations agreed that factors related to study limitations were not serious barriers; approximately half reported that they hindered evaluation use to a small extent or not at all: difficulty determining how to use evaluation findings to improve the program (an estimated 50 percent rated a small extent or not at all), difficulty obtaining study results in time to be useful (51 percent), concern about the credibility (validity or reliability) of study results (55 percent), difficulty generalizing the results to other persons or localities (56 percent), and difficulty accepting findings that do not conform to expectations (58 percent). We have reported that an effective evaluation design aims to provide credible, timely answers to the intended users’ questions. Even with the best planning, however, an evaluation might not meet decision makers’ needs. First, the pace of policy making is much quicker than the time it takes to conduct an evaluation. 
Second, there is no guarantee that study results will point to a clear path of action. We previously reported that, to manage these uncertainties, experienced evaluators recommended building a strong body of evidence and engaging stakeholders throughout the process. A body of evidence—including various forms of evidence—is considered more valuable than a single study because having multiple studies with similar results strengthens confidence in the conclusions, and a body of information can yield answers to a variety of different questions, whenever stakeholders pose them. Comparing results obtained under different conditions can help explain what might be driving seemingly contradictory results. Evaluators pointed out that they rarely based decisions on a single study. Individual evaluation studies typically do not simply identify whether a program works but, rather, they assess the effects of an individual program or intervention on specific domains for the specific populations or conditions studied. Developing a body of evidence is also a strategy for ensuring that information is available for input to fast-breaking policy discussions. Engaging stakeholders throughout the evaluation process permits targeting the evaluation’s questions and timing to decision makers’ needs, gaining their buy-in to the study’s credibility and relevance, and providing stakeholders with interim results or lessons learned about program changes that they can implement right away. Few agency managers who reported having evaluations viewed lack of ongoing congressional commitment or support for using program evaluation to make program or funding decisions as a barrier to use to a great or very great extent (an estimated 16 percent). However, twice as many managers (35 percent) reported they had no basis for determining whether congressional commitment was a barrier. 
We found this same phenomenon in 2013 as well (18 percent and 39 percent, respectively), most likely reflecting midlevel managers’ lack of direct contact with congressional members and staff. This is also consistent with responses to a parallel question included in our survey of federal managers about congressional commitment or support for using performance information to make program or funding decisions. About a third of the full sample of federal managers reported that they had no basis to judge whether lack of congressional support for using performance information hindered its use. Congressional committees have a number of opportunities to communicate their support for evaluation, such as: consulting with agencies as they revise their strategic plans and agency priority goals (APG); requesting agency evaluations to address specific questions about policy or program implementation or results; conducting oversight hearings on agency performance; and reviewing agency evaluation plans to ensure that they address issues of congressional interest. While the Congress holds numerous oversight hearings and requests studies from GAO, it is not clear whether it regularly requests agencies to conduct evaluations. In our 2014 survey, fewer than half the PIOs (10) reported having congressional mandates to evaluate specific programs. Despite GPRAMA’s requirement that agencies consult with the Congress in developing their strategic plans and priority goals, we found their communication to be one-directional, resembling reporting more than dialogue. In our 2013 interviews with evaluators, one evaluator explained that, for the most part, they conduct formal briefings for the Congress in a tense, high-stakes environment; they lack the opportunity for informal discussion of their results. In 2013 we recommended that OMB ensure that agencies adhere to OMB’s guidance for website updates to provide a description of how congressional consultations were incorporated in each APG. 
Our analysis of the sections on the 2016—2017 APGs on Performance.gov in October 2016 generally found that agencies either did not include information about congressional input or had not updated Performance.gov to reflect the most recent round of stakeholder engagement. As of June 2017, Performance.gov has been archived as agencies develop updated goals and objectives for release in February 2018 with the President’s next Budget submission to the Congress. To learn what factors facilitate evaluations’ use in decision making, we added a new question to our survey of federal managers with evaluations on the extent to which 12 factors facilitate their use (see figure 3). We selected these factors to parallel factors found in our 2013 survey to hinder use as well as others that were found to facilitate use in our previous interviews with evaluators and in our 2014 survey of the PIOs. In 2017, federal managers who reported having evaluations most frequently reported that agency leadership support for evaluation, staff involvement, and evaluation relevance to decision makers facilitated evaluation use. Although neither the survey respondents nor the survey questions are directly comparable, the PIOs we surveyed in 2014 reported similar factors as facilitating evaluation use. These groups differed in their views on the importance of quarterly performance reviews, possibly reflecting their different responsibilities and levels of involvement. Both the federal managers and the senior agency officials reported limited knowledge of congressional requests for or interest in evaluation. Consistent with the literature on factors supporting evaluation use, agency managers who reported having evaluations rated top executive commitment or support for using program evaluation to make program or funding decisions as a facilitator more often than any other factor presented (an estimated 38 percent, about one-third, rated it to a great or very great extent). 
About twice as many managers reported this factor as facilitating evaluation use as those who rated its absence as hindering evaluation use to a great or very great extent (17 percent). This may be because, as we noted above, these respondents have evaluations and thus probably already have some leadership support for evaluation; lack of leadership support was not much of a problem for them. While our 2014 survey did not ask the PIOs to what extent top leadership support for using evaluations in decision making facilitated its use, many reported that their agencies’ senior leadership demonstrated commitment to using evidence (of various types) in management and policy making through guidance (17 of 22) or internal agency memorandums (12 of 22). Some PIOs also rated holding goal leaders accountable for progress on APGs—another form of leadership support—very useful for improving their agencies’ capacity to use evaluations in decision making (8 of 23 PIOs). GAO and others have commented that for evaluation results to be acted on, not only must decision makers generally support using evidence to inform decisions but also the studies themselves must be seen as relevant and credible. About one-third of agency managers with evaluations in 2017 rated importance of an evaluation’s issues to agency decision makers as facilitating use to a great or very great extent (an estimated 36 percent). This is about twice as many as the managers who said the absence of relevance hindered evaluation use to a great or very great extent (15 percent). We interpret this to mean that the managers perceived their evaluations as generally addressing relevant issues and that the evaluations’ relevance contributed to their use in agency decision making. Despite managers’ high regard for top management’s support for evaluation, it is notable that few managers reported that consideration of evaluation findings in agency quarterly performance reviews facilitated their use in decision making. 
GPRAMA introduced these reviews to encourage the use of performance information in agency decision making by requiring agencies to review progress on their APGs quarterly and to report publicly on their progress and strategies to improve performance, as needed. Although about a quarter of the PIOs reported in 2014 (6 of 23) that these reviews were very useful in improving agencies’ capacity to use evaluations, the managers surveyed in 2017 were not as sanguine. About a third of the managers with evaluations reported that they had no basis to judge whether these reviews facilitated use (35 percent), and few (14 percent) rated them as facilitating use to a great or very great extent. It may be that few middle managers participated in these reviews; they are only required for APGs, a small subset of an agency’s performance goals (generally 2 to 8 goals at each agency). Sixty-one percent of the total sample of managers reported that they were not at all familiar with these reviews. Alternatively, evaluations might contribute more effectively to the annual strategic reviews, which aim for a comprehensive assessment of progress on the results the agency aims to achieve. OMB’s guidance for these reviews directs agencies to consider a broad array of evidence and external influences on their objectives, identify any gaps in their evidence and areas where additional evaluations or other analyses are needed, and thus focus their limited evaluation resources to inform the strategic decisions facing the agency. Our 2017 survey did not ask federal managers about these strategic reviews; thus, we do not know whether midlevel managers were aware of or involved in these reviews. Experienced evaluators have told us that engaging staff throughout the evaluation process can gain their buy-in on the relevance and credibility of evaluation findings. 
In addition, providing program staff with interim results or lessons learned from early program implementation can ensure timely data for program decisions. In 2017, one-third of agency managers with evaluations rated program staff involvement in planning or conducting evaluation studies as greatly or very greatly facilitating use (an estimated 35 percent). This is consistent with our 2014 survey, in which about half the PIOs also rated staff involvement in planning and conducting evaluation studies as very useful for improving agency capacity to use evaluations in decision making (11 of 23). Evaluations may use complex analytic techniques with which program staff are unfamiliar, thus inhibiting staff’s involvement and their ability to interpret the findings. However, only an estimated 19 percent of managers rated lack of staff who are knowledgeable about interpreting or analyzing program evaluation results as greatly or very greatly hindering use. A quarter of managers (25 percent) reported that one possible response—providing program staff and grantees with technical assistance on evaluation and its use—facilitated evaluation use to a great or very great extent. In 2014, about half the surveyed PIOs agreed; 11 of 23 rated this strategy as very useful for improving agency capacity to use evaluations. Other factors that managers in the 2017 survey rated often as facilitating use were parallel to factors that they rated often as barriers. About a quarter of managers (an estimated 29 percent) reported that agency staff ability to make recommended program changes facilitated use to a great or very great extent. This factor is parallel to the most frequently rated factor to hinder use—lack of resources to implement the evaluation findings—that a similar number identified (29 percent to a great or very great extent). As we noted above, midlevel managers may not have the authority or resources to implement a study’s recommendations. 
In addition, the positive characteristics of a study may influence its use. About a third of agency managers who reported having evaluations reported that clear implications of results for improving program design or management (31 percent) facilitated use to a great or very great extent. The absence of such clarity is one of the factors that an evaluator previously told us could lead to disagreements, and such disagreements may lead to inaction. Mixed results or the absence of a clear explanation for disappointing program results can impede consensus on an evaluation’s lessons for program improvement. A strong evaluation design can help prevent message muddling by testing alternative explanations, but it cannot ensure that an evaluation will provide clear implications because the results of an evaluation, like a research study, are inherently uncertain. Written evaluation policies and standards help provide benchmarks for ensuring the quality of an organization’s processes and products. The American Evaluation Association (AEA) publishes a guide for developing and implementing U.S. government evaluation programs that recommends that agencies, among other things, develop written evaluation policies and quality standards, consult with program stakeholders, and prepare annual and long-term evaluation plans to support future decision making. In our 2014 survey of PIOs, about a quarter of the 24 PIOs surveyed reported that their agencies had written agency-wide policies or guidance for key issues contained in that guide: selecting and prioritizing evaluation topics, consulting program staff and subject matter experts, ensuring internal and external evaluator independence and objectivity, selecting evaluation approaches and methods, ensuring completeness and transparency of evaluation reports, timely public dissemination of evaluation findings and recommendations, or tracking implementation of evaluation findings. 
A few more PIOs (10 of 24) reported having agency-wide policies on ensuring the quality of data collection and analysis. In our 2017 survey, we estimate that 28 percent of managers who reported having evaluations reported that agency policies and procedures to ensure evaluation quality facilitated use to a great or very great extent. Our survey did not ask which types of policies they had, so we do not know whether they included all of the topics listed above. Only a small number of managers—13 percent—reported having no basis to judge their policies’ influence, suggesting that most agencies have evaluation policies, although those policies may not apply agency-wide. The reported positive influence of such policies on evaluation quality is also consistent with the fact that about half the managers with evaluations reported that various factors regarding study limitations did not significantly hinder evaluation use in decision making, as discussed above. Experienced evaluators consult with stakeholders in developing their evaluation or learning agenda to help ensure their evaluations’ credibility and relevance to current management and policy issues. In the 2017 survey, managers with evaluations rated consultation with stakeholders on the agency’s evaluation agenda high for facilitating evaluation use (28 percent to a great or very great extent), although 22 percent responded they had no basis to judge. In our 2014 survey of PIOs, only 7 reported having an agency-wide evaluation agenda. The Congress is prominent among federal program stakeholders, but congressional interest in and requests for evaluation were not widely reported by the PIOs we surveyed in 2014. Congressional mandates are requirements in statute for an agency (including GAO) to conduct a study, usually specifying the topic and a reporting date. GAO is often requested to report on the progress and success of new programs or program provisions. 
In our 2014 survey, fewer than half the PIOs (10 of 23) reported that they had any congressional mandate to evaluate a specific program in their agency. Consistent with this low reporting of congressional requests for evaluation, about one-third of managers who reported having evaluations in our 2017 survey reported that they had no basis to judge whether congressional requests or mandates facilitated evaluation use (31 percent). However, 23 percent reported that such requests facilitated use to a great or very great extent. Thus, while congressional evaluation requests are not widely reported among PIOs, they appear to be influential among some federal managers. For several years, OMB has encouraged agencies to use program evaluations and other forms of evidence to learn what works and what does not, and how to improve results. Yet, agencies appear not to have expanded their capacity to conduct or use evaluation in decision making since 2013. Because the majority of agency managers who reported having evaluations also reported that they contributed to improving program performance (54 percent), this lack of evaluation capacity constitutes a lost opportunity to improve the efficiency and effectiveness of limited government resources. The survey results reinforce lessons from our previous reports: involving agency staff and executives in planning and conducting evaluations helps ensure that those evaluations are relevant, credible, and used in agency decision making. Agency managers who reported having evaluations also reported top executive support for using evaluations to make decisions, the importance of the evaluation’s issues to decision makers, and involving agency staff in planning or conducting evaluation studies, most often among factors facilitating evaluation use. 
GAO, as well as OMB, AEA, and the Commission on Evidence-Based Policymaking, has noted that it is important to develop an evaluation plan or agenda to ensure that an agency’s scarce research and evaluation resources are targeted to its most important issues and can shape budget and policy priorities and management practices. Although only some agencies have developed agency-wide evaluation agendas, evaluators who have them have found that consulting with stakeholders on their evaluation agendas helps ensure evaluation credibility and relevance, and facilitates the use of evaluation results. Congressional support—through either authorization or appropriation of funds—is often needed for agencies to implement desired program reforms. Although 28 percent of federal managers with evaluations reported that consulting with external stakeholders on their evaluation agendas greatly contributes to their use, we saw limited knowledge of congressional consultation. Congressional consultation on agency evaluation plans could increase the studies’ credibility and relevance for those audiences. Although evaluations were generally not reported as contributing greatly to quarterly performance reviews of progress on agency priority goals, they might contribute more effectively to an agency’s annual strategic review. OMB’s guidance envisions strategic reviews as a more comprehensive assessment of a broad range of evidence on and factors influencing progress on an agency’s desired results. Agencies are also directed to identify any gaps in their evidence and take steps to address them in these reviews; thus, the strategic review could produce an evaluation agenda that is targeted to the agency’s management, budget, and policy priorities. 
To help ensure that federal agencies obtain the evidence needed to address the most important questions to improve program implementation and performance, we recommend that the Director of the Office of Management and Budget direct each of the 24 Chief Financial Officer Act agencies to prepare an annual agency-wide evaluation plan that describes the key questions for each significant evaluation study that the agency plans to begin in the next fiscal year and identifies the congressional committees; federal, state, and local program partners; researchers; and other stakeholders that were consulted in preparing the plan. (Recommendation 1) We requested comments on a draft of this report from the Director of the Office of Management and Budget. In an email response, an OMB staff member commented that it would be more appropriate and effective to encourage agencies to create an annual evaluation plan, rather than require or direct them to do so. Because OMB has encouraged agencies to conduct and use evaluations in decision making for several years with mixed success, we believe that a more directive approach is needed. We are sending copies of this report to the Director of the Office of Management and Budget, and to appropriate congressional committees. This report is also available at no cost on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2700 or kingsburyn@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of the report. Staff who made key contributions to the report are listed in appendix II. 
We administered a web-based questionnaire on organizational performance and management issues to a stratified random sample of 4,395 from a population of approximately 153,779 mid-level and upper-level civilian managers and supervisors working in the 24 executive branch agencies covered by the Chief Financial Officers Act of 1990 (CFO Act), as amended. The sample was drawn from the Office of Personnel Management’s (OPM) Enterprise Human Resources Integration database as of September 2015, using file designators for performance of managerial and supervisory functions. The sample was stratified by agency and by whether the manager or supervisor was a member of the Senior Executive Service (SES). The management levels covered general schedule (GS) or equivalent schedules in other pay plans at levels comparable to GS-13 through GS-15 and career SES or equivalent. In reporting the questionnaire data, we use “government-wide” or “across the federal government” to refer to these 24 CFO Act executive branch agencies, and “federal managers” and “managers” to refer to both managers and supervisors. We designed the questionnaire to obtain the observations and perceptions of respondents on various aspects of such results-oriented management topics as the presence and use of performance information, agency climate, and program evaluation use. In addition, to address the implementation of GPRA Modernization Act of 2010 (GPRAMA), the questionnaire included a section requesting respondents’ views on its various provisions including cross-agency priority goals, agency priority goals, and quarterly performance reviews. This survey is similar to surveys we have conducted five times previously at the 24 CFO Act agencies—in 1997, 2000, 2003, 2007, and 2013. The questions on GPRAMA provisions and program evaluation use were new in 2013. The 2017 questionnaire includes new questions on the use of performance information and factors that facilitate the use of program evaluation. 
Several components of the new evaluation question were drawn from our 2014 survey of Performance Improvement Officers (PIOs) on their agencies’ evaluation capacity resources and activities, discussed below, and interviews with agency officials. Before administering the survey, GAO subject matter experts, survey specialists, and a research methodologist reviewed the new questions. We also conducted pretests of the new questions with federal managers in several of the 24 CFO Act agencies and based revisions on the feedback we received. The objectives of this report address whether agency managers reported change in their access to and use of program evaluations since 2013 and their views about factors that facilitate or hinder the use of program evaluation. Therefore, this report analyzes results on a subset of survey questions concerning those topics. It then compares these results, when appropriate, to results previously obtained in the 2013 survey of federal managers, as well as the results of our 2014 PIO survey. For the 2014 PIO survey, we administered a web-based questionnaire to the PIOs or their deputies at the 24 CFO Act agencies about agencies’ evaluation resources, policies, and activities and the activities and resources they found useful in building their evaluation capacity. GAO subject matter experts, a survey specialist, and a research methodologist also reviewed this survey’s questions. In addition, we pretested the questionnaire in person with PIOs at three federal agencies. Because this was not a sample survey, it has no sampling errors but may be subject to nonsampling errors that stem from differences in how a question is interpreted. 
The survey of PIOs is not directly comparable to the survey of federal managers because the questions about factors influencing evaluation use are not exactly the same, and the PIOs, as senior officials typically reporting to the agency Chief Operating Officer, have very different responsibilities from the population of midlevel and upper-level managers and supervisors responding to the Federal Managers Survey. Most of the items on the 2017 Federal Managers Survey were closed-ended, meaning that depending on the particular item, respondents could choose one or more response categories or rate the strength of their perception on a 5-point “extent” scale ranging from “to no extent” at the low end of the scale to “to a very great extent” at the high end. On most items, respondents also had an option of choosing the response category “no basis to judge/not applicable.” A few items gave respondents “yes,” “no,” or “do not know” options. To administer the survey, we sent an e-mail to managers in the sample that notified them of the survey’s availability on the GAO website and included instructions on how to access and complete the survey. Managers in the sample who did not respond to the initial notice received multiple e-mail reminders and follow-up phone calls asking them to participate in the survey. We administered the survey to all 24 CFO Act agencies from November 2016 through March 2017. For additional details on the survey methodology, see our report summarizing our body of work on GPRAMA’s implementation. From the 4,395 managers selected for the 2017 survey, we found that 388 of the sampled managers had left the agency, were on detail, or had some other reason that excluded them from the population of interest. We received usable questionnaires from 2,726 sample respondents. The response rate across the 24 CFO Act agencies ranged from 36 percent to 82 percent, with a weighted response rate of 67 percent for the entire sample. 
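A weighted, stratified proportion estimate and its 95 percent confidence interval, of the kind this appendix reports, can be sketched as follows. The strata, population sizes, and response counts in this sketch are hypothetical, chosen only to echo the survey's overall scale; they are not GAO's actual sample design or data.

```python
import math

# Hypothetical strata: (population size, usable respondents, respondents
# reporting "yes"). Illustrative only; not GAO's actual sample design.
strata = [
    (50_000, 900, 360),    # e.g., non-SES managers, one agency group
    (70_000, 1_100, 440),  # e.g., non-SES managers, another agency group
    (33_779, 726, 290),    # e.g., SES and remaining agencies
]

N = sum(pop for pop, _, _ in strata)  # total population size

# Stratified estimator: each stratum's sample proportion is weighted by
# that stratum's share of the population.
p_hat = sum((pop / N) * (yes / n) for pop, n, yes in strata)

# Approximate variance of the stratified estimator, including a finite
# population correction (1 - n/pop) for each stratum.
var = sum(
    (pop / N) ** 2 * (1 - n / pop) * (yes / n) * (1 - yes / n) / (n - 1)
    for pop, n, yes in strata
)

half_width = 1.96 * math.sqrt(var)  # half-width of the 95 percent interval
print(f"estimate: {p_hat:.3f} +/- {half_width:.3f}")
```

Because each stratum is sampled without replacement from a finite population of managers, the finite population correction narrows the interval slightly; with strata of roughly this size, the half-width falls well inside the plus or minus 4 percentage points reported for the government-wide estimates.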
An estimated 40 percent of respondents reported that an evaluation had been completed within the past 5 years for any of the programs, operations, or projects with which they had been involved. The overall survey results can be generalized government-wide to the population of managers as described above at each of the 24 CFO Act agencies. The responses of each eligible sample member who provided a useable questionnaire were weighted in the analysis to account statistically for all members of the population. All results are subject to some uncertainty or sampling error as well as nonsampling error. The government-wide percentage estimates based on our sample from 2017 presented in this report have 95 percent confidence intervals within plus or minus 4 percentage points of the estimate itself for the initial question about whether an evaluation had been completed and within plus or minus 7 percentage points for subsequent questions posed to those who reported having evaluations. Online supplemental materials show all the questions asked on the survey along with the percentage estimates and associated 95 percent confidence intervals for each question for each agency and government-wide. In addition to the contact named above, Stephanie Shipman (Assistant Director), Valerie Caracelli (Analyst in Charge), Pille Anvelt, Timothy Guinane, Jill Lacey, Benjamin Licht, Krista Loose, Anna Maria Ortiz, Penny Pickett, and Steven Putansu made key contributions to this report. Managing for Results: Further Progress Made in Implementing the GPRA Modernization Act, but Additional Actions Needed to Address Pressing Governance Challenges. GAO-17-775. Washington, D.C.: September 29, 2017. Supplemental Material for GAO-17-775: 2017 Survey of Federal Managers on Organizational Performance and Management Issues. GAO-17-776SP. Washington, D.C.: September 29, 2017. 
2017 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-17-491SP. Washington, D.C.: April 26, 2017. Tiered Evidence Grants: Opportunities Exist to Share Lessons from Early Implementation and Inform Future Federal Efforts. GAO-16-818. Washington, D.C.: September 21, 2016. Fragmentation, Overlap, and Duplication: An Evaluation and Management Guide. GAO-15-49SP. Washington, D.C.: April 14, 2015. Program Evaluation: Some Agencies Reported that Networking, Hiring, and Involving Program Staff Help Build Capacity. GAO-15-25. Washington, D.C.: November 13, 2014. Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. GAO-13-518. Washington, D.C.: June 26, 2013. Managing for Results: 2013 Federal Managers Survey on Organizational Performance and Management Issues. GAO-13-519SP. Washington, D.C.: June 2013. Program Evaluation: Strategies to Facilitate Agencies’ Use of Evaluation in Program Management and Policy Making. GAO-13-570. Washington, D.C.: June 26, 2013. Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013. Designing Evaluations: 2012 Revision. GAO-12-208G. Washington, D.C.: January 2012. Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research. GAO-11-176. Washington, D.C.: January 14, 2011. Government Performance: Lessons Learned for the Next Administration on Using Performance Information to Improve Results. GAO-08-1026T. Washington, D.C.: July 24, 2008. Program Evaluation: Studies Helped Agencies Measure or Explain Program Performance. GAO/GGD-00-204. Washington, D.C.: September 29, 2000.
GPRAMA aims to ensure that agencies use performance information in decision making to achieve results and improve government performance. GPRAMA requires GAO to evaluate the act's implementation; this report is one in a series on its implementation. GAO examined the extent of agencies' use of program evaluations—a particular form of performance information—and factors that may hinder or facilitate their use in program management and policy making. GAO surveyed a stratified random sample of 4,395 federal civilian managers and supervisors to obtain their perspectives on several results-oriented management topics, including the extent of and factors influencing evaluation use. GAO compared the results to those of a similar GAO survey of federal managers in 2013 and a GAO survey of Performance Improvement Officers in 2014. GAO also interviewed OMB staff and reviewed guidance on using evaluation in decision making. In a 2017 government-wide survey, GAO found that most federal managers lack recent evaluations of their programs. Forty percent reported that an evaluation had been completed within the past 5 years for any program, operation, or project they were involved in. Another 39 percent of managers reported that they did not know if an evaluation had been completed, and 18 percent reported having none. Managers who reported having evaluations also reported that those evaluations contributed to a great or very great extent to improving program management or performance (54 percent) and to assessing program effectiveness or value (48 percent). These figures are not statistically different from the results of GAO's 2013 survey. Of the 40 percent of managers who reported having evaluations, the factor most often rated as having hindered use to a great or very great extent, as in 2013, was lack of resources to implement the evaluation findings (29 percent). 
Managers reported limited knowledge of congressional support for using their results; 35 percent were not able to judge whether lack of support was a barrier. Federal managers who reported having evaluations most frequently reported that agency leadership support for evaluation, staff involvement, and an evaluation's relevance to decision makers facilitated evaluation use. GAO previously reported that involving agency staff in planning and conducting evaluations helps to ensure they are relevant, credible, and used in decision making. The Office of Management and Budget (OMB) encouraged agencies to use the annual strategic reviews the GPRA Modernization Act of 2010 (GPRAMA) requires to assess evidence gaps and inform their strategic decisions and budget making. GAO and OMB have noted the importance of developing an evaluation plan or agenda to ensure that an agency's scarce research and evaluation resources are targeted to its most important issues. While 28 percent of managers with evaluations rated consultation with stakeholders high for facilitating use, another 22 percent reported having no basis to judge. GAO previously noted limited knowledge of agency consultation with the Congress. While 23 percent of managers with evaluations reported congressional requests or mandates facilitated evaluation use, more (31 percent) reported having no basis to judge. GAO concludes that agencies' continued lack of evaluations may be the greatest barrier to their informing managers and policy makers and constitutes a lost opportunity to improve the efficiency and effectiveness of limited government resources. Although only some agencies have developed agency-wide evaluation plans, evaluators who have them found that obtaining stakeholder input helped ensure evaluation relevance and facilitate use of their results. Congressional consultation on agency evaluation plans could increase the studies' credibility with those whose support is needed to implement program reforms. 
An agency's annual strategic review provides a good opportunity to help target its evaluation agenda to its management, budget, and policy priorities. To help ensure that agencies obtain the evidence needed to address important questions to improve program implementation and performance, GAO recommends that the Director of OMB direct federal agencies to prepare an annual agency-wide evaluation plan that describes the congressional and other stakeholders that were consulted. OMB staff stated that agencies should be encouraged, rather than directed, to create an annual evaluation plan. Because OMB has already been encouraging evaluation, GAO believes a more directive approach is needed.
A catastrophic disaster exposes residents and responders to a variety of traumatic experiences that put them at risk for adverse psychological consequences. Preparedness at the federal, state, and local levels is critical to the nation’s ability to provide the services needed to address these problems during response. In light of the emergence of threats posed by terrorism and the complex issues involved in responding to those threats, GAO has identified disaster preparedness and response as a major challenge for the 21st century. Research has shown that people who have experienced or witnessed certain incidents during or after a catastrophic disaster—such as serious physical injury, destruction of a home, or long-term displacement from the community—can experience an array of psychological consequences. For example, studies found that 1 to 2 months after the WTC attack, the rate of probable PTSD was 11.2 percent among a sample of adults in the New York City metropolitan area, compared with about 4 percent elsewhere in the United States, and Manhattan residents reported increases in smoking, alcohol consumption, and marijuana use. Research has also shown that psychological effects can persist or emerge months or years after the event has occurred. For example, a 2006 study on the use of counseling services by people affected by the WTC attack found that some people first sought counseling services more than 2 years after the event. Certain populations may be especially vulnerable to psychological consequences following a disaster. These include children and survivors of past traumatic events. Others who may be especially vulnerable include people who had a preexisting mental illness at the time of a disaster. Research has also shown that disaster responders may be especially vulnerable because of the direct and protracted nature of their exposure to traumatic experiences, extended working hours, and sleep deprivation. 
A CDC survey of New Orleans firefighters and police officers about 2 to 3 months following Hurricane Katrina found that about one-third of respondents reported symptoms of depression or PTSD, or both. Psychological responses can also be affected by the characteristics of the particular disaster and its aftermath. Terrorism differs from natural disasters in that it can create a general sense of fear in the population outside the affected area. The Institute of Medicine noted that although terrorism and other disasters may share important characteristics, “the malicious intent and unpredictable nature of terrorism may carry a particularly devastating impact for those directly and indirectly affected.” During the recovery phase of a catastrophic natural disaster, ongoing stress due to the perceived loss of support associated with large-scale dislocation of the population can also affect mental health. In an assessment of health-related needs for residents returning to the New Orleans area 7 weeks after Hurricane Katrina, researchers found that many respondents had emotional concerns—such as feeling isolated or crowded—and about half had levels of distress that indicated a possible need for mental health services. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act) is the principal federal statute governing federal disaster assistance and relief. State and local governments have the primary responsibility for disaster response, and the Stafford Act established the process for states to request a presidential disaster declaration for affected counties in order to obtain supplemental assistance—such as physical assets, personnel, and funding—from the federal government when a disaster exceeds state and local capabilities and resources. The President may make a disaster declaration for both catastrophic disasters and smaller-scale disasters that exceed a state’s ability to respond. 
The Stafford Act and FEMA’s regulations contain provisions related to disaster preparedness. The act encourages each state to have a plan that stipulates the state’s overall responses in the event of an emergency. FEMA regulations require that, as a condition of receiving CCP funds to respond to a disaster, states agree to include mental health disaster planning in their overall plans. The regulations do not require that state mental health and substance abuse agencies develop their own disaster plans, but such plans are recommended by SAMHSA as important components of disaster preparedness. In 2003, SAMHSA issued mental health disaster planning guidance to help state and local mental health agencies create or revise disaster plans. The agency recommends, for example, that plans describe the specific responsibilities of state mental health agencies and other organizations in responding to a disaster and responsibilities for maintaining and revising a disaster plan. In 2004, SAMHSA issued guidance recommending that state substance abuse agencies develop all-hazard substance abuse disaster plans. The guidance recommends, among other things, that these plans include information on working with other agencies and providers and on providing medications, such as methadone. DHS created the National Response Plan in December 2004 to provide an all-discipline, all-hazards approach for the management across jurisdictions of domestic incidents such as catastrophic natural disasters and terrorist attacks when federal involvement is necessary. The National Response Plan details the missions, policies, structures, and responsibilities of federal agencies for coordinating resource and programmatic support to states, tribes, and other federal agencies. DHS has responsibility for coordinating the federal government’s response to disasters, including administering the provisions of the Stafford Act. 
FEMA administers funding for disaster relief by reimbursing federal, state, and local government agencies and certain nongovernmental organizations for eligible disaster-related expenditures. The National Response Plan also gives FEMA responsibility to coordinate mass care, housing, and human services, including coordinating the provision of immediate, short-term assistance for people dealing with the anxieties, stress, and trauma associated with a disaster. In addition, HHS is designated as the primary agency for coordinating public health and hospital emergency preparedness activities and coordinating the federal government’s public health and medical response. Depending on the circumstances of a disaster, HHS’s responsibilities may include assessing mental health and substance abuse needs, providing disaster mental health training materials, and providing expertise in long-term mental health services. Other agencies—including the Departments of Defense, Justice, Labor, and Veterans Affairs (VA)—support HHS’s preparedness and response efforts. For over 30 years, the federal government has used CCP to support short-term crisis counseling and public education services to help alleviate the psychological distress caused or aggravated by disasters for which a presidential disaster declaration has been made. FEMA administers CCP in conjunction with SAMHSA, which provides technical assistance, develops program guidance, and conducts oversight on behalf of FEMA. States seeking CCP funding following a presidentially declared disaster can apply to FEMA for an immediate grant and, if necessary, a longer-term grant. The Immediate Services Program (ISP) grant funds CCP services for up to 60 days following a disaster declaration, and states applying for the grant must do so within 14 days of the declaration. The Regular Services Program (RSP) grant is designed to help states meet a continuing need for crisis counseling services for up to an additional 9 months. 
States applying for an RSP grant must do so within 60 days of a disaster declaration. If a state decides to apply for an RSP grant, the ISP grant can be extended until the RSP application is reviewed and a funding decision has been made. A state’s CCP application must demonstrate that the need for crisis counseling in the affected area is beyond the capacity of state and local resources. A state must develop its needs assessment by using a prescribed formula that, among other things, includes the estimated numbers of deaths, persons injured, and damaged or destroyed homes attributable to the disaster. This needs assessment is critical for developing a state’s program plan and budget request, which must also be included in its application. FEMA reviews all ISP and RSP applications and receives input from SAMHSA, which also reviews the applications. FEMA has final authority for all funding decisions. Both ISP and RSP grants are generally managed by state mental health agencies, which typically contract with community organizations to provide CCP services. The CCP model was designed to meet the short-term mental health needs of people affected by disasters through outreach that involves education, individual and group counseling, and referral for other services. The main focus of the model is to help people regain their predisaster level of functioning by, among other things, providing emotional support, mitigating additional stress, and providing referrals to additional resources that may help them recover. CCP services, which are to be provided anonymously and free of charge, are primarily delivered through direct contact with disaster survivors in familiar settings—such as homes, schools, community centers, and places of religious worship. Services are designed to be delivered by teams of mental health professionals and paraprofessionals from the community affected by the disaster. 
The mental health professionals, who have prior specialized mental health or counseling training and are usually licensed by the state, typically coordinate and supervise paraprofessionals who may not have had previous training as mental health professionals. Paraprofessionals working as CCP crisis counselors provide outreach, crisis counseling, and referrals. All members of the teams are to be trained in the basics of crisis counseling and CCP. States cannot use CCP funds to provide longer-term services such as treatment for psychiatric disorders or substance abuse, office-based therapy, or medications. The state programs are expected to refer survivors who may need such services to an appropriate agency or licensed mental health professional. From fiscal year 2001 through fiscal year 2006, the majority of CCP grant funding has been used to meet needs following catastrophic disasters. According to FEMA, during this period, the agency obligated a total of about $424 million in CCP funds, with about $289 million (about 68 percent) obligated for states that responded to the three catastrophic disasters in our review—the WTC attack, Hurricane Charley, and Hurricane Katrina. According to FEMA, the agency obligated about $167 million for New York and other states that responded to the WTC attack; about $7 million for Florida to respond to Hurricane Charley; and about $51 and $23 million in CCP funds for Louisiana and Mississippi, respectively, to respond to Hurricane Katrina. In addition, FEMA allowed 26 additional states, commonly called “host states,” to apply for CCP funding to assist people displaced as a result of Hurricane Katrina. According to FEMA, the agency obligated for these host states a total of about $37 million, ranging from about $13,000 to about $13 million each. For example, the agency obligated about $13 million and $129,000 for Texas and Washington, respectively. 
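As a quick arithmetic check on the obligation figures above, the sketch below computes the share of total CCP obligations that went to the three catastrophic disasters, using the report's rounded totals (about $289 million of about $424 million):

```python
# Rounded obligation figures from the report, fiscal years 2001 through 2006.
total_ccp = 424_000_000       # total CCP funds obligated
catastrophic = 289_000_000    # obligated for the three catastrophic disasters

share_pct = round(100 * catastrophic / total_ccp)
print(f"{share_pct}% of CCP obligations went to catastrophic disasters")  # → 68%
```

Because both inputs are rounded, the computed share matches the "about 68 percent" the report cites.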
At SAMHSA’s request, VA’s National Center for PTSD (NCPTSD) conducted an evaluation of CCP and provided its report in June 2005. NCPTSD researchers examined state CCPs that were for disasters occurring from October 1996 through September 2001 and that concluded by December 2003, which resulted in an examination of programs implemented by 27 states to respond to 28 disasters. The evaluation also included case studies of four specific disasters—the bombing of the Murrah Federal Building in Oklahoma City, Oklahoma, in 1995; Hurricane Floyd in 1999; the WTC attack in 2001; and the Rhode Island nightclub fire in 2003. Although NCPTSD’s evaluation found that CCP performed well in certain respects, it identified a number of ways in which states had difficulties implementing their CCPs, and it indicated that drawing conclusions about some aspects of the program was difficult because data were of poor quality and incomplete. Federal grants have helped states prepare for the psychological consequences of catastrophic and other disasters, and SAMHSA has conducted an assessment of disaster plans from many state mental health and substance abuse agencies. In fiscal years 2003 and 2004, SAMHSA awarded grants to mental health and substance abuse agencies in 35 states specifically for disaster planning. CDC, HRSA, and DHS have also provided preparedness funding that states may use for mental health or substance abuse preparedness, but the agencies’ data-reporting requirements do not produce information on the extent to which states used funds for this purpose. In 2007, SAMHSA completed an assessment of mental health and substance abuse disaster plans developed by states that received its preparedness grant. SAMHSA found that these plans showed improvements over those that had been submitted by states as part of their application for the preparedness grant. The agency also identified several ways in which the plans could be improved. 
In addition to assisting states with their preparedness, HHS is taking steps to be better prepared to send federal resources to help states respond to the psychological consequences of disasters. SAMHSA awarded $6.8 million over fiscal years 2003 and 2004 specifically to help state mental health and substance abuse agencies prepare for the psychological consequences of catastrophic and other disasters. The agency awarded grants to 35 states; the amount awarded to each state ranged from about $105,000 to about $200,000. Two of the six states in our review, New York and Texas, received a SAMHSA grant. New York, which already had a mental health disaster plan, used the funds to develop a plan for its state substance abuse agency. Texas, which was already developing a mental health disaster plan, used the grant to help fund a consortium of state agencies with postdisaster mental health responsibilities—including mental health, public safety, and victims’ services—and to increase the role of substance abuse providers in preparedness activities. Mental health officials from one of the four states in our review that did not apply for a SAMHSA grant told us their agency did not apply because it was already engaged in planning with the state public health agency, and officials from the other three states said they did not apply due to competing demands on their time. CDC, HRSA, and DHS public health and homeland security preparedness grant funds can also be used by states to prepare for the psychological consequences of disasters, and we found examples of states using CDC and HRSA funds for this purpose. During fiscal years 2002 through 2006, CDC and HRSA awarded about $6.1 billion in grants to states and selected urban areas to improve public health and hospital preparedness, and DHS provided about $12.1 billion in grants to states and localities for broad preparedness efforts. 
CDC and HRSA require that states document how they plan to engage in certain mental health and substance abuse preparedness activities, and although there is no requirement that states spend their DHS grant funds to prepare for the psychological consequences of disasters, a state may choose to do so. These grant programs fund broader preparedness efforts, and their data-reporting requirements do not produce information on the full extent to which states used funds for mental health and substance abuse preparedness activities. We found that, according to state officials, public health agencies in five of the six states in our review—all but Mississippi—used either CDC or HRSA preparedness funds to support mental health and substance abuse agencies’ activities at least once during fiscal years 2002 through 2006. For example, in Florida, Texas, and Washington, public health agencies allocated funds to mental health and substance abuse agencies for the development of a disaster plan or to pay the salaries of disaster planners. In Louisiana, the mental health agency received funds to, among other things, develop criteria for a registry of volunteer mental health professionals and help mental health and substance abuse treatment facilities develop disaster plans. Mental health or substance abuse officials in the six states we reviewed told us their agencies were not allocated funds from their state’s DHS grant during fiscal years 2002 through 2006. In addition to awarding grants to states, federal agencies have funded training and developed guidance to enhance states’ preparedness for the psychological consequences of disasters. For example, SAMHSA established its Disaster Technical Assistance Center in fiscal year 2003 to provide training and technical assistance to state mental health and substance abuse agencies. 
SAMHSA also distributes various guidance documents, such as guidance to help prevent and manage stress in disaster response workers before, during, and after a disaster. In addition, CDC, HRSA, and DHS fund the development of training activities that can benefit the preparedness of states’ mental health providers. For example, HRSA officials told us that the agency’s Bioterrorism Training and Curriculum Development Program awarded a contract in 2006 to an accrediting body for counseling programs to incorporate mental health disaster preparedness into its educational standards, and CDC’s Centers for Public Health Preparedness Program awarded grants to academic institutions to develop and assess training on mental health preparedness and response. SAMHSA reviewed state mental health and substance abuse plans as part of its disaster preparedness grant program. In 2007, the agency completed a review of the disaster plans of 34 of the 35 states that received a SAMHSA preparedness grant to, among other things, give SAMHSA aggregated information about states’ disaster planning and technical assistance needs. According to SAMHSA, the mental health and substance abuse disaster plans of these 34 states showed improvement over the plans the states had submitted in 2002 as part of their grant applications. Areas SAMHSA identified as showing improvement included stronger partnership for planning and response among state mental health and substance abuse services agencies; an increased number of unified plans that encompass both mental health and substance abuse services issues; stronger partnerships with key stakeholders such as emergency management, public health agencies, and voluntary organizations that are active in disasters; and clearer identification and articulation of the disaster response role of state mental health and substance abuse agencies. SAMHSA also identified several ways in which the plans could be improved. 
For example, it reported that while most plans indicated that the state deploys disaster responders to provide mental health and substance abuse services, about one-third of the plans needed to provide more detailed information on the training, qualifications, and safe deployment of these responders. SAMHSA also reported that although states were more likely to incorporate substance abuse services into their disaster planning, about half the plans still did not indicate specific planning and response actions that substance abuse agencies should take. In reviewing mental health and substance abuse disaster plans from five of the six states in our study, we made observations that are consistent with SAMHSA’s findings. For example, we found that the five states’ disaster plans varied in their attention to substance abuse topics. Two states in our review issued separate mental health and substance abuse plans. Each of the other three states issued a unified disaster plan to cover both mental health and substance abuse, but only one of the three plans specifically discussed both types of services. The other two plans primarily discussed mental health services and had few specific references to providing substance abuse services following a disaster. For example, these plans did not include specific information about providing methadone treatment for people with a drug abuse disorder following a disaster—information that was provided in the separate substance abuse plans. In addition, we found that disaster plans of the states in our review did not always identify specific actions or responsibilities related to serving the mental health and substance abuse needs of certain special populations. Three state disaster plans did not identify specific actions for preparing to work with children, and two plans did not include provisions for specific cultural minorities. 
Mental health and substance abuse officials from the states in our review told us that they recognized various gaps in their disaster preparedness. For example, state officials discussed the need to provide additional training to disaster responders and said they would like to collaborate more extensively with state health, emergency management, and education agencies. One observation that state mental health officials made was that schools could be an important local resource for providing postdisaster services to children but that relationships between state mental health agencies and schools are sometimes not in place prior to a disaster. Officials from several states described benefits from meeting other states’ officials at SAMHSA’s regional training conferences, but told us that resource limitations or the need to first plan within their own state made it difficult to continue these relationships. SAMHSA’s report recommended that the agency conduct state-specific needs assessments to identify individual states’ technical assistance needs for mental health and substance abuse disaster planning. SAMHSA officials told us that the agency is exploring methods to conduct such assessments and that the agency would need to determine the availability of resources for the assessments. To help states address the psychological consequences of disasters, HHS, as the lead federal department for public health and medical preparedness, is implementing several efforts to be better prepared to send federal resources to help states. For example, HHS is increasing the capacity of federal disaster response teams to provide mental health services to disaster victims and responders. Based on lessons learned following Hurricane Katrina, the White House Homeland Security Council recommended that HHS organize, train, and equip medical and public health professionals in preconfigured and deployable teams. In response, HHS organized U.S. 
Public Health Service Commissioned Corps officers into several teams—including five Rapid Deployment Force teams that each include 4 mental health providers and five Mental Health Teams that each include about 20 mental health providers. HHS created team rosters and sponsored a large-scale training exercise from July 15, 2007, through August 24, 2007, that allowed the team members, including the mental health providers, to train together. HHS also plans to recruit additional mental health providers into the Commissioned Corps. HHS officials told us that there has been a shortage of mental health providers in the Commissioned Corps and that requirements for deployment on short notice made it difficult for agencies to ensure that team members’ regular responsibilities are fulfilled while the team member is deployed. For fiscal year 2008, HHS proposed to recruit providers to staff full-time, dedicated Health and Medical Response Teams. Two teams—each with 105 members, including at least 4 mental health providers—would serve as the primary responders for the Commissioned Corps and reduce the deployment burden placed on other officers. HHS has also taken steps to increase the supply of drugs indicated for psychological disorders that should be available in the event of a disaster. Prior to Hurricane Katrina, HHS began developing Federal Medical Stations to provide mass casualty capability (i.e., equipment, material, and pharmaceuticals) to augment local health care infrastructures overwhelmed by a terrorist attack or natural disaster. These stations included a cache of drugs focused on urgent and emergency care. Given the large number of evacuees with special medical needs who required care following the hurricane, HHS revised the cache in 2006 to increase the types of drugs specifically indicated for mental health conditions from 20 to 33. 
For example, HHS increased the types of antidepressants and antipsychotics and added five new classes of drugs, including drugs to treat sleep disorders. State officials told us they experienced difficulties in applying for CCP funding and implementing their programs, particularly in the wake of catastrophic disasters. States had problems collecting information needed to prepare their ISP applications within FEMA’s application deadline and preparing parts of their ISP and RSP applications, including estimating the number of people who might need crisis counseling services. FEMA and SAMHSA officials told us they had taken steps to revise the applications and supporting guidance to help address these difficulties. States also experienced lengthy application reviews, and FEMA and SAMHSA officials said they had taken steps to improve the submission and review process. In addition, state officials told us they experienced problems implementing their CCPs, such as difficulties resulting from FEMA’s policy of not reimbursing state CCPs for indirect program costs. Additional problems that state officials cited were related to assisting people in need of more intensive counseling services and making referrals for mental health and substance abuse treatment. FEMA and SAMHSA are considering options to address some of these concerns, but they do not know when they will make these decisions. Officials in the six states in our review told us they encountered difficulties as they prepared their CCP applications following the catastrophic disasters included in our review, including difficulties in collecting the information required for their ISP applications within established deadlines. Officials said that the amount of information required for their applications was difficult to collect because of the scope of the disasters and the necessity for responding on other fronts, such as ensuring the safety of patients and personnel at state-run mental health facilities. 
For example, Texas officials estimated that the state hosted more than 400,000 Hurricane Katrina evacuees and that they had to collect information for over 250 counties to estimate how many people might need crisis counseling services. Furthermore, several state officials said that some of the information required for the ISP application, such as that on preliminary damages and the location of people who might need services, was not always available or reliable immediately following a catastrophic disaster. According to SAMHSA, because information from traditional sources was lacking following Hurricane Katrina, states were allowed to use other sources—such as newspaper reports and anecdotal evidence—to complete their applications. However, Louisiana and Mississippi officials told us that the information required by SAMHSA to complete the application was difficult to obtain and sometimes unavailable in the immediate aftermath of the hurricane. Officials in three states we contacted said that the difficulty of completing their applications on time was exacerbated because multiple disasters affected the same jurisdictions in close succession and they were required to submit a separate application for each one. Louisiana, for example, had to submit separate ISP applications following Hurricanes Katrina and Rita, even though the hurricanes affected overlapping areas and occurred less than 1 month apart. State officials also told us that the CCP application’s needs assessment formula, which they are to use to estimate the number of people who might need crisis counseling services, created problems in estimating needs following catastrophic disasters in their states. The needs assessment formula includes several categories of loss, including deaths, hospitalizations, homes damaged or destroyed, and disaster-related unemployment. 
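To make the structure of such a formula concrete, the sketch below computes a weighted-sum needs estimate over the loss categories named above. The weights and sample counts are illustrative assumptions for demonstration only; the report does not specify FEMA's actual parameters.

```python
# Hypothetical weights per unit of loss -- illustrative assumptions,
# not FEMA's actual needs assessment formula.
LOSS_WEIGHTS = {
    "deaths": 5.0,             # assumed people in need per death
    "hospitalizations": 3.0,   # assumed
    "homes_destroyed": 2.0,    # assumed
    "homes_damaged": 1.0,      # assumed
    "unemployed": 1.5,         # assumed (disaster-related unemployment)
}

def estimated_need(losses: dict[str, int]) -> int:
    """Estimate the number of people who might need crisis counseling
    as a weighted sum over the reported loss categories."""
    return round(sum(LOSS_WEIGHTS[category] * count
                     for category, count in losses.items()))

# Hypothetical county-level loss counts for a single disaster.
print(estimated_need({"deaths": 10, "hospitalizations": 50,
                      "homes_destroyed": 200, "homes_damaged": 1000,
                      "unemployed": 500}))  # → 2350
```

A weighted-sum design like this explains the criticism that follows: any population or loss not represented by a weighted category (such as at-risk groups or damaged mental health facilities) contributes nothing to the estimate.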
State officials told us that the formula’s loss categories did not capture data that they considered critical to assessing mental health needs following a catastrophic disaster, such as estimates of populations at increased risk for psychological distress, including children and the elderly, or information on destroyed or damaged community mental health centers. While states can include such information in a narrative portion of their application, several state officials told us it was not clear to them how this narrative information is factored into funding decisions. NCPTSD’s evaluation of CCP for SAMHSA described concerns similar to those noted by states in our review about the accuracy of the results produced by using the formula. Moreover, NCPTSD concluded that the formula could be a contributing factor in discrepancies it found between states’ estimates of people in need and the numbers of people actually served. Preparing the sections of the application on plans for providing CCP services and on program budgets also was difficult, according to state officials. For example, state officials said the application guidance did not provide sufficient detail to indicate what federal officials would consider reasonable numbers of supervisors, outreach workers, and crisis counselors to hire. Several officials also said it was difficult to use the fiscal guidance to determine what agency officials would consider a reasonable budget for various CCP activities, such as use of paid television and radio advertisements for outreach. State officials said that having more detailed guidance would help them develop better proposals and minimize the need to revise their applications during the review process. Federal program officials told us they have taken steps to address various difficulties that states experienced in collecting information and preparing their CCP applications. 
Agency officials told us that they recently made changes intended to reduce the amount of information required in the ISP and RSP applications, modified the needs assessment formula, and clarified the applications and supporting guidance. For example, in the revised needs assessment formula the weights assigned to most of the loss categories have been adjusted for estimating the number of people who could benefit from CCP services. According to SAMHSA, the revised ISP and RSP applications were approved in September 2007; the agency made these available to states in November 2007. In 2006, in response to feedback from states regarding difficulties with the application process, FEMA and SAMHSA revised their 4-day CCP basic training course for states to increase its focus on preparing CCP applications. According to a FEMA program official, the course was also revised in 2007 to reflect recent changes to the applications and supporting guidance, and FEMA program officials have requested that FEMA’s Emergency Management Institute offer the course annually instead of every other year. This program official also told us that a Web-based CCP orientation course was developed and that it is required for all those who attend the basic training course. State officials told us that FEMA and SAMHSA’s CCP application review process was lengthy after catastrophic disasters, especially for RSP applications submitted following Hurricane Katrina. A FEMA official estimated that for CCP applications submitted in 2002 through 2006 it had generally taken the agencies about 14 days to review and make funding decisions for ISP applications and about 28 to 70 days to review and make funding decisions for RSP applications. Our analysis of CCP applications for the catastrophic disasters in our review showed that it took FEMA and SAMHSA from 5 to 39 days to review and make funding decisions for ISP applications and 58 to 286 days to review and make funding decisions for RSP applications. 
(See table 1.) State officials told us that the lengthy reviews and the resulting delays in obtaining RSP funding created difficulties for their CCPs. According to state officials, delays in application approval contributed to delays in executing contracts with service providers, delays in hiring staff, and problems retaining staff. They told us they needed to obtain a decision on their RSP application as quickly as possible so they could better plan and implement their programs. Federal program officials told us that several factors contributed to the time it took to review applications following these catastrophic disasters. These factors included an unanticipated high volume of CCP applications following Hurricane Katrina: applications submitted by states that were directly affected by the hurricane, as well as by 26 states hosting evacuees, created demands well beyond the normal capacity of the agencies’ CCP staff. FEMA officials told us that the two agencies typically reviewed an average of 17 new ISP applications and 13 new RSP applications each year from fiscal year 2002 through 2006, but that they reviewed 31 ISP applications and 20 RSP applications in response to Hurricane Katrina alone. According to SAMHSA officials, the agency had not planned for the surge of applications created by FEMA’s decision to allow host states to apply for CCP funding and had no policies in place at the time to enable SAMHSA’s CCP and grants management staff to adapt quickly to the submission of so many CCP applications within a few weeks. To respond, some SAMHSA staff had to handle double the number of applications they usually process, and the agency supplemented its six CCP reviewers by using six staff from other parts of the agency to help review CCP grant applications. FEMA hired three temporary staff to assist with the application review process after Hurricane Katrina. 
However, a report prepared for SAMHSA on its response to Hurricane Katrina noted that some agency staff who assisted with the review of applications did not have sufficient knowledge of CCP and the grant review process and therefore required training. SAMHSA and FEMA have taken actions to be prepared for such a surge in applications in the future. A SAMHSA official told us that the agency sent five staff from various parts of the agency to the August 2007 4-day CCP basic training course to help them gain a better understanding of the CCP application process. In December 2007, the agency hired a staff person, whose position was funded by FEMA, in its grants management office to, among other things, assist in the review of CCP applications and the fiscal monitoring of CCP grants. Agency officials expect the addition of this employee to help shorten application review times for RSP grants. SAMHSA officials told us that the need to obtain further information from the states also contributed to the length of the reviews following Hurricane Katrina. In examining various CCP applications submitted by the states in our study after Hurricane Katrina and related correspondence, we noted instances in which SAMHSA sent letters to states informing them that their applications contained errors, were incomplete, or required clarification for the agency to proceed with its review. In some instances, the agency made multiple requests to a state to clarify a specific part of its application. For example, SAMHSA found that Louisiana did not provide complete information in its RSP application related to the process the state planned to use to identify the local service providers with which it would contract—a process that was different from the one traditionally used to contract for CCP services. 
According to federal officials, issues related to this process resulted in Louisiana not submitting a complete application until 6 months after its initial application, which in turn created enormous delays in the application review process. SAMHSA officials told us that another reason for the need to obtain additional clarification was that the agency had established a more stringent CCP application review process in July 2005. According to SAMHSA officials, the revised CCP applications and guidance should help reduce the need for states to revise their applications during the review process. In addition, agency officials told us that the 2007 CCP basic training course for applicants included information on the application process and on the new review standards. State officials said they faced difficulties in implementing their CCPs following catastrophic disasters. For example, they told us that FEMA’s policy of not reimbursing states and counseling service providers for indirect costs caused difficulties for state CCPs. They also described the need for expanded crisis counseling services and cited additional concerns. States told us that FEMA’s policy of precluding states and their CCP service providers from obtaining reimbursement for indirect costs has created difficulties in implementing their programs. Under CCP guidelines, states and their CCP service providers cannot be reimbursed for indirect costs related to managing and monitoring their programs that are not directly itemized in their program budgets. However, state officials told us that it can be difficult for their agencies and service providers to determine what proportion of their overall administrative costs is attributable to CCP activities. In addition, several state officials also told us that CCP service providers often have limited capacity in their overall agency budgets to redirect funds from other services to cover the indirect costs associated with their CCP work. 
State officials told us that the inability to obtain reimbursement for indirect costs contributed to difficulties in recruiting and retaining service providers. According to Louisiana officials, for example, that inability contributed to the 2007 decision of one of the state’s largest Hurricane Katrina CCP contractors, which provided services in New Orleans, to withdraw from the program. Although CCP guidance precluded reimbursement for indirect costs, the provider nevertheless requested it. In a June 2006 letter to the state mental health office, the provider stated that participating in the state’s CCP had created a financial burden that included moving funds from other services to its CCP contract. In July 2006, FEMA declined the provider’s request to include indirect costs in its budget, stating that under CCP guidelines all budget charges must be direct and that the provider should work with the state to see whether any of these costs could be reclassified as direct costs in the provider’s budget. Officials in FEMA’s grants management office told us that although CCP policy prohibits reimbursement for indirect costs, they were unaware of statutory or regulatory prohibitions on the reimbursement of such costs. Furthermore, they told us that other FEMA disaster response grant programs do allow indirect cost reimbursement. For example, grantees can be reimbursed for indirect costs under FEMA’s Public Assistance Program and Hazard Mitigation Grant Program. Other federal postdisaster response grant programs also allow grantees to be reimbursed for indirect costs, including the SAMHSA Emergency Response Grant (SERG) program and the Department of Education’s Project School Emergency Response to Violence (Project SERV) program. Concern about the exclusion of indirect costs from CCP reimbursement has been a long-standing issue. 
In 1995, FEMA’s Inspector General issued a report on CCP that said that the reimbursement of indirect costs appeared allowable under applicable federal law and regulations. The Inspector General recommended that FEMA review its policy on reimbursement for indirect costs. FEMA and SAMHSA officials told us that reimbursement for indirect costs was a recurring concern for states and service providers and that states have advocated for a change in this policy. SAMHSA officials said that allowing indirect cost reimbursement would promote participation of a broader array of local service providers. A FEMA official told us that the agency had been considering whether to develop a new CCP policy to allow reimbursement for such costs. According to this official, FEMA had been examining this issue since June 2006, when it received the letter from the Louisiana CCP service provider. This official also told us that SAMHSA provided recommendations in March 2007 on potential modifications to CCP guidance and application materials to allow reimbursement for indirect costs. According to this official, however, FEMA still needed to examine various implementation issues, including which types of indirect costs might be reimbursed and what changes to the application review process might be needed. As of October 2007, this official did not know when the agency would make a decision about whether to allow reimbursement for indirect costs. State officials told us that after catastrophic disasters they faced the challenge of how to assist people who were experiencing more serious postdisaster distress than traditional CCP services could resolve. According to New York, Louisiana, and Mississippi officials, some CCP clients who did not display symptoms suggesting they needed a referral for mental health or substance abuse treatment nevertheless could have benefited from more intensive crisis counseling than was provided in the CCP model. 
Furthermore, in the case of Hurricane Katrina, Mississippi officials told us that they wanted to be able to serve as many people as possible within their CCPs because the devastation resulted in fewer mental health and substance abuse providers being available to accept referrals for treatment. To assist these people, officials in New York, Louisiana, and Mississippi asked FEMA and SAMHSA to allow their state CCPs to offer expanded types of services after catastrophic disasters in their states. In response to the states’ requests, FEMA and SAMHSA officials allowed the existing state CCPs to develop pilot programs offering expanded crisis counseling services consistent with the nonclinical, short-term focus of the CCP model. New York’s expanded services, known as “enhanced services,” were offered through the New York City Fire Department (FDNY) and through community-based providers for both adults and children. FDNY’s services started in September 2002, about 12 months after the WTC attack. The community-based services started in spring 2003. Provided by mental health professionals, these expanded services were based on cognitive behavioral approaches. These services included helping clients recognize symptoms of postdisaster distress and develop skills to cope with anxiety, depression, or other symptoms. Individuals referred for expanded services were offered a series of up to 12 counseling sessions. New York’s community-based expanded services for adults ended in December 2003; its community-based services for children ended in December 2004, as did the FDNY’s services. In November 2006, FEMA and SAMHSA allowed Louisiana and Mississippi to plan for providing expanded crisis counseling services, known as “specialized crisis counseling services,” to supplement CCP services offered to people affected by Hurricane Katrina. Each state developed and implemented expanded services based on operating principles developed by SAMHSA and tailored to the needs of its population. 
Louisiana and Mississippi began offering their expanded services in January 2007, about 17 months after the hurricane. In contrast to New York’s series of up to 12 sessions, the expanded services offered by Louisiana and Mississippi were designed to be delivered in a single stand-alone session by mental health professionals, although clients could obtain additional sessions. The states’ CCPs used a standardized assessment and referral process to determine whether to refer people for expanded services, such as stress management. Louisiana and Mississippi used providers with prior mental health training to refer expanded services clients for mental health and substance abuse treatment services. In addition, the states used paraprofessionals to link clients with other disaster-related services and resources, such as financial services, housing, transportation, and child care. According to a SAMHSA official, Louisiana’s CCP is scheduled to stop providing expanded services to adults and children in February 2008. Mississippi, which focused on providing expanded services to adults, stopped providing services in April 2007. Several state officials said that it would be beneficial if the CCP model could be expanded to include more intensive crisis counseling services and if states could make these types of services available sooner. For example, several officials told us that if expanded services were a permanent part of CCP it would enable states responding to catastrophic disasters to incorporate expanded services at an earlier stage in their CCP service plans, training programs, and budgets. A New York official told us that after the state received approval for the general concept of expanded services, it took the state a few additional months to prepare a proposal, obtain federal approval, and contract with and train the providers. 
Because the state did not begin offering expanded services until it started phasing down its delivery of traditional CCP services, fewer crisis counselors were available to refer clients from traditional services to expanded services. NCPTSD’s 2005 evaluation of CCP for SAMHSA recommended that, at least on a trial basis, expanded services should become a well-integrated part of state CCPs that is implemented relatively early in state programs. NCPTSD also recommended evaluating the efficacy of such services. FEMA and SAMHSA officials told us that after completion of Louisiana’s program they planned to examine which elements of Louisiana’s, Mississippi’s, and New York’s expanded services programs might be beneficial to incorporate into CCP. FEMA and SAMHSA officials also said that NCPTSD has begun developing an additional approach to providing postdisaster counseling services, which they would like to examine once it is complete. FEMA and SAMHSA officials said they also planned to try to determine the most opportune time to start offering expanded services to disaster survivors. These officials did not know when the review would be completed. Officials we interviewed in three of the states in our review expressed concerns about the ability of state CCPs to appropriately refer people needing mental health or substance abuse treatment services. Several state officials said that the paraprofessional crisis counselors who generally identify people for referral are not always able to properly identify people who have more serious psychological problems. Officials said there was a constant need to provide staff with training on CCP assessment and referral techniques to ensure that they could identify people who needed a referral. In its evaluation of CCP for SAMHSA, NCPTSD also reported concerns by states related to the ability of paraprofessionals to identify people needing a referral. 
According to SAMHSA, a CCP trainer’s toolkit that was completed in August 2007 includes information on proper techniques for conducting CCP assessments and referrals. The agency plans to distribute the toolkit to states at a training scheduled for spring 2008. Officials we spoke to in all six states in our review told us that their CCPs were constrained by FEMA’s policy of not allowing CCP funds to be used to provide some case management services. According to CCP guidance, case management is not typically an allowable program service. Several state officials told us that it would be beneficial if state CCPs could provide some form of case management after catastrophic disasters, when many survivors are likely to have numerous needs and may require additional support to obtain services necessary for their recovery. State officials said that Hurricane Katrina highlighted the difficulties that disaster survivors can have negotiating complex service and support systems. Louisiana officials said, for example, that many people who experienced extraordinary levels of stress because of the hurricane had low literacy skills and clearly needed support to make connections to additional services and resources. One state official also told us that because the practical difficulties of meeting needs involving housing can often be a cause of the emotional distress that CCPs are trying to alleviate, they would like CCP crisis counselors to be able to more directly help people connect with disaster-related services to meet these needs. State officials also told us that it was difficult to identify people in need of crisis counseling services because FEMA does not give state CCPs access to specific information on the location of people registered for federal disaster assistance. NCPTSD’s evaluation of CCP also noted that the unavailability of this information made it difficult for state CCPs to locate people who might need services. 
Several state officials told us that FEMA had provided them with some counts of disaster registrants at the state, county, or Zip Code levels but that they also needed information on the specific locations of disaster survivors to conduct effective outreach. FEMA officials told us that the agency stopped providing information on the specific location of registrants in the 1990s. They also told us that it was their understanding that FEMA stopped providing this information due to concerns about the privacy of registrants. The scope and magnitude of catastrophic disasters can result in acute and sustained psychological trauma that can be debilitating for extended periods of time. While CCP is a key component of the federal government’s response to the psychological consequences of disasters, we have identified two important limitations that can affect states’ ability to use CCP to respond to the special circumstances of catastrophic disasters. First, state officials responding to the WTC attack and Hurricane Katrina identified the need to provide expanded crisis counseling services through CCP. FEMA and SAMHSA recognized such a need when they permitted three state CCPs to expand their programs to provide more intensive short-term crisis counseling than the CCP model generally allows. FEMA and SAMHSA officials told us they intended to consider incorporating certain types of expanded services into CCP. Promptly determining what types of expanded services should become a permanent part of CCP would enable states to more effectively develop their CCP proposals and provide their populations with needed counseling services in the event of future catastrophic disasters. Second, FEMA’s policy of precluding states and their CCP service providers from obtaining reimbursement for indirect costs associated with managing and monitoring their CCPs has made it difficult for states to effectively administer their CCPs. 
State officials reported that the lack of reimbursement for indirect costs made it more difficult to recruit and retain service providers and contributed to a major contractor’s withdrawal from Louisiana’s Hurricane Katrina CCP. Other FEMA disaster response grant programs allow reimbursement for such costs. Although FEMA had been examining this issue for over a year, an agency official did not know when the agency would reach a decision on whether to revise CCP policy to allow coverage of indirect costs. Including indirect costs in CCP and not requiring service providers to absorb these costs could expand the pool of providers willing to participate in this program. This could strengthen states’ ability to assist disaster victims in coping with the psychological consequences of catastrophic disasters. To address gaps identified by federal and state officials in the federal government’s ability to help states respond to the psychological consequences of catastrophic disasters, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA, in consultation with the Administrator of SAMHSA, to expeditiously take the following two actions: determine what types of expanded crisis counseling services should be formally incorporated into CCP and make any necessary revisions to program policy, and revise CCP policy to allow states and service providers that receive CCP funds to use them for indirect costs. We provided a draft of this report to DHS and HHS for comment. Both DHS and HHS generally concurred with both of our recommendations and stated that they had taken or will take steps toward implementing them. However, they did not provide specific timelines for completing these actions. (DHS’s comments are reprinted in app. III; HHS’s comments are reprinted in app. IV). 
In response to our recommendation to expeditiously allow reimbursement for indirect costs within CCP, both departments commented that allowing reimbursement for such costs will promote broader participation of local service providers. In its comments, DHS also said that the inclusion of indirect costs will help expedite the application review process and that FEMA has been working with SAMHSA to revise CCP policy to allow reimbursement for indirect costs. HHS stated that the draft report accurately reflected concerns regarding the exclusion of indirect costs and that SAMHSA had previously given FEMA a recommendation supporting a change in this policy. Although DHS and HHS indicated that they are working on a revision of the policy to allow reimbursement of such costs, they did not provide a timeline for completing this activity. As our report notes, FEMA has been examining this issue since 2006, and it is important to complete this work expeditiously so that in the event of a future disaster, state CCPs could be in a better position to attract the participation of a broad array of service providers. In response to our recommendation to expeditiously determine what types of expanded crisis counseling services should be formally incorporated into CCP, HHS and DHS commented that, as our draft report indicated, they plan to wait until Louisiana has completed its pilot expanded services program before making this determination. They said that because Louisiana had applied for an extension of its CCP, they cannot provide a timeline for completion of their reviews of expanded services pilots. We believe, however, that federal program officials already have a considerable amount of information about these pilots—New York and Mississippi have completed their programs and Louisiana has been providing information on an ongoing basis. 
We believe that it is important for FEMA and SAMHSA to expeditiously review the experience of the pilot programs and other relevant information so they can promptly determine which expanded services should be formally incorporated into CCP. This will help ensure that states responding to a disaster will be able to provide the appropriate range of CCP services to assist people who are in need of crisis counseling services. In addition, HHS commented that SAMHSA has initiated a workgroup to ensure that the CCP model reflects current best practices. However, we have learned that, as of January 2008, the workgroup had not yet begun its work. HHS and DHS commented on our discussion of states’ reports on difficulties they had experienced in preparing their CCP applications. DHS stated that FEMA, in consultation with SAMHSA, took action to expedite the submission, review, and approval of ISP applications submitted after Hurricane Katrina, including allowing the use of shorter applications by states hosting Hurricane Katrina evacuees. We clarified our discussion of host states’ ability to apply for CCP funds to note that FEMA allowed them to submit an abbreviated ISP application. DHS also commented that it is not feasible to have one grant application or one grant for two separate disasters because FEMA must separately account for and report on funds for specific disasters. We attempted to obtain further clarification from DHS about why FEMA separately accounts for funds for each disaster, but DHS did not provide this information. In addition, HHS and DHS commented that the draft report’s description of the needs assessment process failed to capture the degree to which they had provided states flexibility in quantifying survivor needs after Hurricane Katrina. DHS said that FEMA and SAMHSA did not rely primarily on damage assessments, as few had been completed. 
Rather, the agencies relied on FEMA registration numbers, newspaper reports, and anecdotal data to estimate need. Our draft report described action taken by FEMA to help states collect information needed to prepare their ISP applications after Hurricane Katrina, and we have revised the final report to make it clear that states were allowed to use other sources of information. In commenting on the CCP application review process, HHS and DHS said that the data in our report showing that it took up to 286 days to review applications were misleading because only Louisiana’s application took that long to review and the state’s proposed use of a different procedure for identifying the local service providers with which it would contract caused enormous delays in the review process. However, the review of New York’s RSP after the WTC disaster also took over 200 days, and the reviews for four of the five states in our study took longer than FEMA’s estimated average review period. The draft report contained information on several factors that contributed to longer review times, and we added to the final report information on Louisiana’s proposal to use an alternative procedure and its effect on the length of the review of Louisiana’s RSP application. HHS also commented on our discussion of states’ concerns that lengthy reviews and resulting delays in obtaining funding created difficulties for CCPs in executing contracts with service providers and implementing their programs. HHS said that the report should note that these challenges were the result of state fiscal and contracting practices that do not relate to the availability of federal funds. Although state practices may contribute to delays, extended federal reviews also may contribute to delays in states’ ability to implement their CCPs. 
DHS commented on our description of states’ discussion of the importance of case management services for CCP clients and mentioned the Post-Katrina Emergency Management Reform Act of 2006, which amended the Stafford Act to allow for the provision of case management services to meet the needs of survivors of major disasters. These services could include financial assistance to help state or local government agencies or qualified private organizations to provide case management services. In its comments, DHS also stated that FEMA has entered into an interagency agreement with HHS to collaborate closely on the development and implementation of a case management program; this agreement is for the development of a pilot program to determine the best methods of providing case management services. FEMA provided additional information indicating that its case management program will coordinate with CCP. In its comments, HHS made observations about the importance of recognizing culture and language issues as barriers to effective responses to catastrophic disasters, incorporating behavioral health into all grantee planning and response activities, and requiring grant recipients to report on how funds were used to address the psychological consequences of a disaster. These are important points, and we would encourage HHS agencies to consider them in their disaster preparedness and response programs. HHS noted that HRSA’s National Bioterrorism Hospital Preparedness Program, Emergency System for Advance Registration of Volunteer Healthcare Professionals, and Bioterrorism Training and Curriculum Development Program have been transferred to ASPR; we added this information to the final report where appropriate. HHS also identified actions it had taken in response to GAO’s May 2005 report on the CCP, including improving the fiscal monitoring of grants. 
In addition, HHS noted that the grants management position funded by FEMA that we discussed in our draft report was filled in December 2007. Our final report reflects this development. In its comments, HHS said that instead of referring to 35 states that received disaster preparedness grants, our report should refer to 34 states and the District of Columbia; the draft report noted that our use of the word “state” included states, territories, Puerto Rico, and the District of Columbia. In addition, HHS suggested that we revise the title of the report by removing the reference to CCP needing improvements. In light of our findings and recommendations, we believe the need for expeditious action supports the original title. HHS also provided technical comments, which we incorporated where appropriate. We are sending a copy of this report to the Secretaries of Health and Human Services and Homeland Security. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7114 or bascettac@gao.gov. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
To do our work, we obtained program documents and interviewed officials from the Department of Health and Human Services (HHS), including the Administration for Children and Families, Centers for Disease Control and Prevention (CDC), Centers for Medicare & Medicaid Services (CMS), Health Resources and Services Administration (HRSA), National Institutes of Health, Office of the Assistant Secretary for Preparedness and Response, and Substance Abuse and Mental Health Services Administration (SAMHSA); the Department of Education; the Department of Homeland Security (DHS), including the Federal Emergency Management Agency (FEMA); the Department of Justice; and the Department of Veterans Affairs (VA), including the National Center for Posttraumatic Stress Disorder (NCPTSD). We spoke with researchers from the National Center for Child Traumatic Stress at the University of California, Los Angeles, and the National Center for Disaster Preparedness at Columbia University. We also interviewed officials from national organizations, including the American Red Cross, National Alliance on Mental Illness, National Association of State Mental Health Program Directors, National Association of State Alcohol and Drug Abuse Directors, and National Emergency Management Association. In addition, we reviewed relevant literature. We conducted additional work in six judgmentally selected states that had experience responding to the psychological consequences of three catastrophic disasters during fiscal years 2002 through 2006 that we included in our scope: the World Trade Center (WTC) attack in 2001, Hurricane Charley in 2004, and Hurricane Katrina in 2005. We included New York because it responded to the WTC attack; Florida because it responded to Hurricane Charley; and Louisiana and Mississippi because they responded to Hurricane Katrina. 
We included Texas in our review because it hosted a large number of people displaced by Hurricane Katrina, and we included Washington because it hosted people displaced by Hurricane Katrina and has features, such as large ports, that make it vulnerable to natural and man-made disasters. Results from this nongeneralizable sample of six states cannot be used to make inferences about other states. To examine actions by federal agencies to help states prepare for the psychological consequences of catastrophic disasters, we reviewed key federal preparedness and response documents—such as the National Response Plan, the Interim National Preparedness Goal, and FEMA’s Guide for All-Hazard Emergency Operations Planning—and recent reports on the federal government’s response to Hurricane Katrina. We identified federal grant programs and other activities that were related to disaster preparedness and were funded during fiscal year 2002 through fiscal year 2006 by reviewing relevant documents and through discussions with federal and state officials. For key HHS and DHS preparedness grant programs, we reviewed relevant documentation, such as application guidance, and interviewed federal program officials. We obtained disaster plans for the mental health and substance abuse agencies in the six states included in our review and examined the plans we received. We also interviewed mental health and substance abuse officials from these six states about their preparedness activities. In addition, we examined SAMHSA’s 2007 report on mental health and substance abuse disaster plans developed by states that received its preparedness grant. To examine states’ experiences in obtaining and using federal Crisis Counseling Assistance and Training Program (CCP) grants to respond to the psychological consequences of catastrophic disasters, we reviewed program documentation, including the applicable statute, regulations, guidance, and grantee reports. 
We also reviewed CCP applications or other relevant documentation that the six states submitted to FEMA for declared counties in response to one of the three catastrophic disasters in our review. We reviewed documentation to obtain information on states’ experiences in applying for CCP funding and on FEMA’s and SAMHSA’s processes for reviewing applications, including examining the length of time it took the agencies to review applications and make funding decisions for the selected catastrophic disasters. In addition, we interviewed state mental health officials from the six states to obtain additional information on their experiences applying for CCP funding and implementing their CCPs following these three disasters. We interviewed FEMA and SAMHSA officials to obtain their perspectives on states’ applications and states’ experiences implementing their CCPs to respond to catastrophic disasters and to obtain information pertaining to FEMA’s and SAMHSA’s administration of the program. Furthermore, we examined the 2005 report on CCP prepared for SAMHSA by NCPTSD. To identify other federal programs that have supported mental health and substance abuse services in response to catastrophic disasters, we reviewed GAO reports, Congressional Research Service reports, the Catalog of Federal Domestic Assistance, and pertinent legislation and program regulations. We interviewed federal program officials about these programs and obtained available information, including grantee applications, award data, and reports, to determine how the programs were used to respond to mental health or substance abuse needs following the three catastrophic disasters included in our review. We present information on the use of various federal programs to respond to needs following the catastrophic disasters in our review; the list we present is not exhaustive. 
To determine the amount of Deficit Reduction Act of 2005 funds used by the 32 states that had been approved by CMS for demonstration projects following Hurricane Katrina, we analyzed data in CMS’s Medicaid Budget and Expenditure System (MBES), which includes claims data for health care services, including inpatient mental health care services. We analyzed MBES claims data available as of June 27, 2007, for services provided August 24, 2005, or later to eligible people affected by Hurricane Katrina. To assess the reliability of the MBES data, we discussed the database with an agency official and conducted electronic testing of the data for obvious errors and problems with completeness. States submit all claims data to the system electronically and must attest to the completeness and accuracy of the data. These data are preliminary in nature, in that they are subject to further review by CMS and are likely to be updated as states continue to submit claims for Deficit Reduction Act funding. We determined that these data were sufficiently reliable for the purpose of our report. We conducted our work from March 2006 through February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to CCP, federal agencies have used other programs following catastrophic disasters to help states and localities provide mental health and substance abuse services to disaster survivors. The following list presents information on the use of various federal programs to respond to needs following the catastrophic disasters in our review; it is not an exhaustive list. 
Federal agencies have used established grant programs to help states respond to the psychological consequences of catastrophic disasters, some of which are generally intended to be used following smaller-scale emergencies. SAMHSA awarded funds through its Emergency Response Grant (SERG) program following Hurricane Katrina. The agency provided a total of $900,000 to Alabama, Louisiana, Mississippi, and Texas to help meet the overwhelming need for assistance. For example, Texas was awarded $150,000 and helped evacuees in the Houston Astrodome and other shelters who needed methadone medication because of opiate addiction. The Department of Education awarded funds through its Project School Emergency Response to Violence (Project SERV) program following the 2001 terrorist attacks and Hurricanes Katrina and Rita. The agency provided about $14 million and $7 million following these respective disasters to help local education agencies respond by providing services that could include crisis counseling, mental health assessments, and referrals. For example, following the 2001 terrorist attacks, New York used Project SERV funds to provide counseling and after-school mental health services. The Department of Justice provided funds through its Antiterrorism and Emergency Assistance Program to help states and localities respond to victims’ mental health needs following mass violence and acts of terrorism. Following the WTC attack, for example, New York used $5 million of its grant award from the Department of Justice to provide additional funding to 15 service providers providing crisis counseling services through the state CCP. HHS has also temporarily modified or expanded ongoing federal health care and social service programs to help states provide mental health and substance abuse services after specific catastrophic disasters. 
CMS allowed states to temporarily cover certain health care costs associated with catastrophic disasters through Medicaid and the State Children’s Health Insurance Program (SCHIP). For example, following Hurricane Katrina, the Congress appropriated $2 billion to cover certain health care costs related to Hurricane Katrina through Medicaid and SCHIP. CMS allowed 32 states that either were directly affected by the hurricane or had hosted evacuees to temporarily expand the availability of coverage for certain people affected by the hurricane. CMS allowed states to submit claims for reimbursement for health care services that were provided August 24, 2005, or later. As of June 27, 2007, these states had submitted claims to CMS for health care services totaling about $1.7 billion, of which about $15.7 million was for mental health services provided in inpatient facilities—such as hospitals, nursing homes, and psychiatric facilities. (See table 2 for information on the amount of claims submitted by states, including four states that were in our review.) HHS’s Administration for Children and Families awarded $550 million in supplemental Social Services Block Grant funds following the 2005 Gulf Coast hurricanes to temporarily expand the program to help 50 states and the District of Columbia meet social and health care service needs. The funds could be used for providing case management and counseling, mental health, and substance abuse services, including medications. States could also use the funds for the repair, renovation, or construction of community mental health centers and other health care facilities damaged by the hurricanes. For example, Mississippi was awarded about $128 million in supplemental funding and used about $10 million of the funding in part to restore services to mental health treatment facilities for adults and children and provide transportation to mental health services. 
Federal agencies also awarded funding outside of these established programs to help states provide disaster-related mental health and substance abuse services after specific catastrophic disasters in our review. Some of these programs focused on specific at-risk groups, such as disaster responders, while others were established to meet the mental health needs of broader populations. HHS is coordinating federally funded programs for responders to the WTC disaster—including firefighters, police, other workers or volunteers, and federal responders—that provide free screening, monitoring, or treatment services for physical illnesses and psychological problems related to the disaster. We have previously reported on the progress of these programs. SAMHSA provided $28 million to nine states most directly affected by the September 11 attacks to provide various substance abuse and mental health services for people directly affected by the attacks. These services included assessments, individual counseling, group therapy, specialized substance abuse treatment, and case management.

In addition to the contact named above, Helene F. Toiv, Assistant Director; William Hadley; Alice L. London; and Roseanne Price made major contributions to this report.

September 11: Problems Remain in Planning for and Providing Health Screening and Monitoring Services for Responders. GAO-07-1253T. Washington, D.C.: September 20, 2007.
Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007.
September 11: HHS Needs to Ensure the Availability of Health Screening and Monitoring for All Responders. GAO-07-892. Washington, D.C.: July 23, 2007.
Emergency Management: Most School Districts Have Developed Emergency Management Plans, but Would Benefit from Additional Federal Guidance. GAO-07-609. Washington, D.C.: June 12, 2007.
Disaster Preparedness: Better Planning Would Improve OSHA’s Efforts to Protect Workers’ Safety and Health in Disasters. GAO-07-193. Washington, D.C.: March 28, 2007.
Public Health and Hospital Emergency Preparedness Programs: Evolution of Performance Measurement Systems to Measure Progress. GAO-07-485R. Washington, D.C.: March 23, 2007.
Hurricane Katrina: Status of Hospital Inpatient and Emergency Departments in the Greater New Orleans Area. GAO-06-1003. Washington, D.C.: September 29, 2006.
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.
Federal Emergency Management Agency: Crisis Counseling Grants Awarded to the State of New York after the September 11 Terrorist Attacks. GAO-05-514. Washington, D.C.: May 31, 2005.
Mental Health Services: Effectiveness of Insurance Coverage and Federal Programs for Children Who Have Experienced Trauma Largely Unknown. GAO-02-813. Washington, D.C.: August 22, 2002.
Catastrophic disasters, such as Hurricane Katrina, may result in trauma and other psychological consequences for the people who experience them. The federal government provides states with funding and other support to help them prepare for and respond to disasters. Because of congressional interest in these issues, GAO examined (1) federal agencies' actions to help states prepare for the psychological consequences of catastrophic disasters and (2) states' experiences obtaining and using grants from the Crisis Counseling Assistance and Training Program (CCP) to respond to the psychological consequences of catastrophic disasters. CCP is a program of the Department of Homeland Security's (DHS) Federal Emergency Management Agency (FEMA). GAO reviewed documents and interviewed program officials from federal agencies and conducted additional work in six states with experience responding to catastrophic disasters: Florida, Louisiana, Mississippi, New York, Texas, and Washington. Federal agencies have awarded grants and conducted other activities to help states prepare for the psychological consequences of catastrophic and other disasters. For example, in fiscal years 2003 and 2004, the Department of Health and Human Services' (HHS) Substance Abuse and Mental Health Services Administration (SAMHSA) provided grants to mental health and substance abuse agencies in 35 states for disaster planning. In 2007, SAMHSA completed an assessment of mental health and substance abuse disaster plans developed by states that received a preparedness grant. SAMHSA found that, for the 34 states with plans available for review, these plans generally showed improvement over those that had been submitted by states as part of their application for its preparedness grant. The agency also identified several ways in which the plans could be improved. For example, about half the plans did not indicate specific planning and response actions that substance abuse agencies should take. 
Similarly, GAO's review of the plans available from six states found that the plans varied in the attention they gave to substance abuse issues. SAMHSA officials said the agency is exploring methods of determining states' individual technical assistance needs. Other federal agencies--the Centers for Disease Control and Prevention, the Health Resources and Services Administration, and DHS--have provided broader preparedness funding that states may use for mental health or substance abuse preparedness, but these agencies' data-reporting requirements do not produce information on the extent to which states used funds for this purpose. States in GAO's review experienced difficulties in applying for CCP funding and implementing their programs following catastrophic disasters. CCP, a key federal postdisaster response grant program to help states deliver crisis counseling services, is administered by FEMA in collaboration with SAMHSA. State officials said they had difficulty collecting information needed for their CCP applications and experienced lengthy application reviews. FEMA and SAMHSA officials said they have taken steps to improve the application submission and review process. State officials also said they experienced problems implementing their CCPs. For example, they said that FEMA's policy of not reimbursing states and their CCP service providers for indirect costs, such as certain administrative expenses, led to problems recruiting and retaining service providers. Other FEMA postdisaster response grant programs allow reimbursement for indirect costs. A FEMA official said the agency had been considering since 2006 whether to allow indirect cost reimbursement under CCP but did not know when a decision would be made. States also cited difficulties assisting people who needed more intensive crisis counseling services than those traditionally provided through state CCPs. 
FEMA and SAMHSA officials said they plan to consider options for adding other types of crisis counseling services to CCP, based in part on states' experiences with CCP pilot programs offering expanded crisis counseling services. The officials did not know when they would complete their review and reach a decision.
The federal-aid highway program is financed through motor fuel taxes and other levies on highway users. Federal aid for highways is provided largely on a cash basis from the Highway Trust Fund. States have financed roads primarily through a combination of state revenues and federal aid. Typically, states raise their share of the funds by taxing motor fuels and charging user fees. In addition, debt financing—issuing bonds to pay for highway development and construction—represents about 10 percent of total state funding for highways, although some states make greater use of borrowing than others. Federal-aid highway funding to states is typically in the form of grants. These grants are distributed from the Highway Trust Fund and apportioned to states based on a series of funding formulas. Funding is subject to grant-matching rules—for most federally funded highway projects, an 80-percent federal and 20-percent state funding ratio. States are subject to pay-as-you-go rules, under which they obligate all of the funds needed for a project up front and are reimbursed for project costs as they are incurred. In the mid-1990s, FHWA and the states tested and evaluated a variety of innovative financing techniques and strategies. Many financing innovations were approved for use through administrative action or through legislative changes under the National Highway System Designation Act of 1995 (NHS Act) and the Transportation Equity Act for the 21st Century (TEA-21). Three of the techniques approved were SIBs, GARVEEs, and TIFIA loans. SIBs are state revolving loan funds that make loans or loan guarantees to approved projects; the loans are subsequently repaid and recycled back into the revolving fund for additional loans. GARVEEs are state-issued bonds or notes repayable with future federal-aid highway funds. Through the issuance of GARVEE bonds, projects are able to meet the need for up-front capital as well as use future federal highway dollars for debt service. 
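The 80/20 grant-matching rule described above reduces to simple arithmetic. The sketch below is illustrative only; the $100 million project size is an assumption, not a figure from this statement:

```python
def split_project_cost(total_cost, federal_share=0.80):
    """Split a highway project's cost under the federal grant-matching
    rule: for most federally funded projects, 80 percent federal and
    20 percent state."""
    federal = total_cost * federal_share
    state = total_cost - federal
    return federal, state

# Illustrative $100 million project (amount not from this statement)
federal, state = split_project_cost(100_000_000)
```

For a $100 million project this yields an $80 million federal share and a $20 million state match; the same split produces the $8 billion federal / $2 billion state division used in the $10 billion example later in this statement.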
TIFIA allows FHWA to provide credit assistance, up to 33 percent of eligible project costs, to sponsors of major transportation projects. Credit assistance can take the form of a loan, loan guarantee, or line of credit. See appendix II for additional information about these financing techniques. According to FHWA, the goals of its Innovative Finance Program are to accelerate projects by reducing inefficient and unnecessary constraints on states’ management of federal highway funds; expand investment by removing barriers to private investment; encourage the introduction of new revenue streams, particularly for the purpose of retiring debt obligations; and reduce financing and related costs, thus freeing up the savings for investment in the transportation system itself. When Congress established the TIFIA program in TEA-21, it set out goals for the program to offer sponsors of large transportation projects a new tool to leverage limited federal resources, stimulate additional investment in our nation’s infrastructure, and encourage greater private sector participation in meeting our transportation needs. Over the last 8 years, many states have used one or more of the FHWA-sponsored alternative financing tools to fund their highway and transit infrastructure projects. As of June 2002: 32 states (including the Commonwealth of Puerto Rico) have established SIBs and have entered into 294 loan agreements with a dollar value of about $4.06 billion; 9 states (including the District of Columbia and the Commonwealth of Puerto Rico) have entered into TIFIA credit assistance agreements for 11 projects, representing $15.4 billion in transportation investment; and 6 states have issued GARVEE bonds with face amounts totaling $2.3 billion. These mechanisms have given states additional options to accelerate the construction of projects and leverage federal assistance, and they have provided states with greater flexibility and a wider range of funding techniques. 
States’ use of innovative financing techniques has resulted in projects being constructed more quickly than they would be under traditional pay-as-you-go financing. This is because techniques such as SIBs can provide loans to fill a funding gap, which allows the project to move ahead. For example, using a $25 million SIB loan for land acquisition in the initial phase of the Miami Intermodal Center, Florida accelerated the project by 2 years, according to FHWA. Similarly, South Carolina used an array of innovative finance tools when it undertook its “27 in 7” program—a plan to complete infrastructure investment projects expected to take 27 years in just 7 years. Officials in the states we contacted that were using FHWA innovative finance tools noted that project acceleration was one of the main reasons for using them. Innovative finance—in particular the TIFIA program—can leverage federal funds by attracting additional nonfederal investments in infrastructure projects. For example, the TIFIA program funds a lower share of eligible project costs than traditional federal-aid programs, thus requiring a larger investment by other, nonfederal funding sources. It also attracts private creditors by assuming a lower priority on revenues pledged to repay debt. Bond rating companies told us they view TIFIA as “quasi-equity” because the federal loan is subordinate to all other debt in terms of repayments and offers debt service grace periods, low interest costs, and flexible repayment terms. It is often difficult to measure precisely the leveraging effect of the federal investment. 
As a recent FHWA evaluation report noted, simply comparing the cost of the federal subsidy with the size of the overall investment can overstate the federal influence—the key issue is whether the assisted projects were sufficiently creditworthy even without federal assistance, so that the federal impact was primarily to lower the cost of capital for the project sponsor. However, TIFIA’s features, taken together, can enhance senior project debt ratings and thus make a project more attractive to investors. For example, the $3.2 billion Central Texas Turnpike project—a toll road to serve the Austin–San Antonio corridor—received a $917 million TIFIA loan and will use future toll revenues to repay debt on the project, including revenue bonds issued by the Texas Transportation Commission and the TIFIA loan. According to public finance analysts from two ratings firms, the project leaders were able to offset potential concerns about the uncertain toll road revenue stream by bringing the TIFIA loan into the project’s financing. FHWA’s innovative finance techniques provide states with greater flexibility when deciding how to put together project financing. By having access to various alternatives, states can finance large transportation projects that they might not have been able to build with pay-as-you-go financing. For example, faced with Interstate highway needs of over $1.0 billion, the state of Arkansas determined that GARVEE bonds would make up for the lack of available funding. In June 1999, Arkansas voters approved the issuance of $575 million in GARVEE bonds to help finance this reconstruction on an accelerated schedule. The state will use future federal funds, together with the required state matching funds and the proceeds from a diesel fuel tax increase, to retire the bonds. The GARVEE bonds allow Arkansas to rebuild approximately 380 miles, or 60 percent of its total Interstate miles, within 5 years. 
Although FHWA’s innovative financing tools have provided states with additional options for meeting their needs, a number of factors can limit the use of these tools. State DOTs are not always willing to use federal innovative financing tools, nor do they always see advantages in using them. For example, officials in two states indicated that their states had a philosophy against committing federal-aid funding to debt service. One official in another state indicated that his state did not have a need to accelerate projects because the state has only a few relatively small urban areas and thus does not face the congestion problems that would warrant using innovative financing tools more often. Officials in yet another state noted that because their DOT has the authority to issue tax-exempt bonds as long as the state has a revenue stream to repay the debt, they could obtain financing on their own and at lower cost. Not all state DOTs have the authority to use certain financing mechanisms, and others face limitations on the extent to which they can issue debt. For example, California requires voter approval in order to use its allocations from the Highway Trust Fund to pay for debt servicing costs. In Texas, the state constitution prohibits using highway funds to pay the state’s debt service. Other states limit the amount of debt that can be incurred. Montana, for example, has a debt ceiling of $150 million; the state is still paying off bonds issued in the late 1970s and early 1980s and plans to issue a GARVEE bond in the next few years. Some financing tools have limitations set in law. For example, five states are currently authorized to use TEA-21 federal-aid funding to capitalize their SIBs. Although other states have created and use SIBs, they cannot use their TEA-21 federal-aid funding to capitalize them. Similarly, TIFIA credit assistance can be used only for certain projects. 
TIFIA’s requirement that, in general, projects cost at least $100 million restricts its use to large projects. We assessed the costs that federal, state, and local governments (or special purpose entities they create) would incur to finance $10 billion in infrastructure investment using four current and newly proposed financing mechanisms for meeting infrastructure investment needs. To date, most federal funding for highways and transit projects has come through federal-aid highway grants—appropriated by Congress from the Highway Trust Fund. Through the TIFIA program, the federal government also provides subsidized loans for state highway and transit projects. In addition, the federal government subsidizes state and local bond financing of highways by exempting the interest paid on those bonds from federal income tax. Another type of tax preference—tax credit bonds—has been used, to a very limited extent, to finance certain school investments. Investors in tax credit bonds receive a tax credit against their federal income taxes instead of interest payments from the bond issuer. Proposals have been made to extend the use of this relatively new financing mechanism to other public investments, including transportation projects. The use of these four mechanisms to finance $10 billion in infrastructure investment results in differences in (1) total costs—how much of the cost is incurred within the short-term 5-year period and how much of it is postponed to the future; (2) cost sharing—the extent to which states must spend their own money, or obtain private investment, in order to receive the federal subsidy; and (3) risks—which level of government bears the risk associated with an investment (or compensates others for taking the risk). As a result of these differences, for any given amount of highway investment, combined and federal government budget costs will vary, depending on which financing mechanism is used. 
Total costs—and how much of the cost is incurred within the short-term 5-year period and how much of it is postponed to the future—differ under each of the four mechanisms. As figure 1 shows, grant funds are the lowest-cost method to finance a given amount of investment expenditure, $10 billion. The reason for this result is that grants are the only alternative that does not involve borrowing from the private sector through the issuance of bonds. Bonds are more expensive than grants because the governments have to compensate private investors for the risks that they assume (in addition to paying them back the present value of the bond principal). However, because the grants alternative does not involve borrowing, all of the public spending on the project must be made up front. The TIFIA direct loan, tax credit bond, and tax-exempt bond alternatives involve increased amounts of borrowing from the private sector and, therefore, increased overall costs. Grants entail the highest short-term costs because these costs, in our example, are all incurred on a pay-as-you-go basis. The tax-exempt bond alternative, which involves the most borrowing and has the highest combined costs, also requires the least amount of public money up front. There are significant differences across the four alternatives in the cost sharing between the federal and state governments. (See fig. 2.) Federal costs would be highest under the tax credit bond alternative, under which the federal government pays the equivalent of 30 years of interest on the bonds. Grants are the next most costly alternative for the federal government. Federal costs for the tax-exempt bond and TIFIA loan alternatives are significantly lower than for tax credit bonds and grants. 
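The cost ranking in figure 1 follows from present-value arithmetic. The sketch below is illustrative only: the discount rate, bond rate, and level-coupon structure are assumptions, not the actual parameters behind the figures. Because bondholders must be compensated for risk, the bond rate exceeds the discount rate, so any bond-financed alternative carries a higher combined present-value cost than an up-front grant:

```python
def present_value(cashflows, discount_rate):
    """Discount a list of annual cash flows (years 1..n) to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cashflows, start=1))

investment = 10_000_000_000   # $10 billion of infrastructure spending
discount_rate = 0.05          # assumed government discount rate
bond_rate = 0.06              # assumed bond rate, incl. risk premium
term = 30                     # 30-year bonds, as in the examples

# Grants: the full amount is public spending incurred up front.
grant_cost = investment

# Bonds: annual interest for 30 years, principal repaid at maturity.
bond_flows = [investment * bond_rate] * term
bond_flows[-1] += investment
bond_cost = present_value(bond_flows, discount_rate)

# bond_cost exceeds grant_cost whenever bond_rate > discount_rate,
# which is why grants are the lowest-cost alternative in figure 1.
```

The same machinery shows the trade-off in timing: the grant alternative's cost is all up front, while the bond alternatives postpone most of it to future interest and principal payments.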
In some past and current proposals for using tax credit bonds to finance transportation investments, the issuers of the bonds would be allowed to place the proceeds from the sales of some bonds into a “sinking fund” and thereby earn investment income that could be used to redeem bond principal. This added feature would reduce (or eliminate) the costs of the bond financing to the issuers, but it would come at a significant additional cost to the federal government. In our example, where states issue $8 billion of tax credit bonds to finance highway projects, if the states were allowed to issue an additional $2.4 billion of bonds to start a sinking fund, they would be able to earn enough investment income to pay back all of the bonds without raising any of their own money. However, this added benefit for the states could increase costs to the federal government by about 30 percent—an additional $2.7 billion (in present value), raising the total federal cost to $11.7 billion. In some cases private investors participate in highway projects, either by purchasing “nonrecourse” state bonds that will be repaid out of project revenues (such as tolls) or by making equity investments in exchange for a share of future toll revenues. By making these investments, the investors are taking the risk that project revenues will be sufficient to pay back their principal, plus an adequate return on their investment. In the case where the nonrecourse bond is a tax-exempt bond, the state must pay an interest rate that provides an adequate after-tax rate of return, including compensation for the risk assumed by the investors. By exempting this interest payment from income tax, the federal government is effectively sharing the cost of compensating investors for risk. 
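The sinking-fund arithmetic above can be sketched with compound interest. The 5 percent annual investment return below is an assumption chosen purely for illustration (the statement does not give the rate used); at roughly that rate, the $2.4 billion seed grows over 30 years to approximately the $10.4 billion of combined bond principal:

```python
def future_value(amount, rate, years):
    """Compound an amount forward at a constant annual return."""
    return amount * (1 + rate) ** years

project_bonds = 8.0        # $ billions of tax credit bonds for projects
sinking_fund_seed = 2.4    # $ billions of additional bonds for the fund
total_principal = project_bonds + sinking_fund_seed  # 10.4

assumed_return = 0.05      # illustrative annual return -- an assumption
term = 30

fund_at_maturity = future_value(sinking_fund_seed, assumed_return, term)
# fund_at_maturity is roughly 10.4 -- approximately enough to redeem
# all of the principal without the states raising their own money.
```

The federal side of the bargain is larger still, since tax credits must now be paid on $10.4 billion of bonds rather than $8 billion, which is the source of the roughly 30 percent increase in federal cost cited above.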
Nevertheless, the state still bears some of the risk-related cost and, therefore, has an incentive either to select investment projects that have lower risks or to select riskier projects only if the expected benefits from those projects are large enough to warrant taking on the additional risk. In the case of a tax credit bond, where project revenues would be the only source of financing to redeem the bonds and the federal government would be committed to paying whatever credit rate investors would demand to purchase bonds at par value, the federal government would bear all of the cost of compensating the investors for risk. States would no longer have a financial incentive to balance higher project risks with higher expected project benefits. Alternatively, the credit rate could be set equal to the interest rate that would be required to sell the average state bond (issued within the same time frame) at par value. In that case, states would bear the additional cost of selling bonds for projects with above-average risks. In the case of a TIFIA loan for a project that has private sector participation, the federal loan does not compensate the private investors for their risk; instead, the federal government assumes some of the risk and thereby lowers the risk to the private investors and the amount that states have to pay to compensate for that risk. In summary, Mr. Chairman, alternative financing mechanisms have accelerated the pace of some surface transportation infrastructure improvement projects and provided states additional tools and flexibility to meet their needs—goals of FHWA’s Innovative Finance Program. FHWA and the states have made progress toward the goal Congress set for the TIFIA program—to stimulate additional investment and encourage greater private sector participation—but measuring success involves measuring the leveraging effect of the federal investment, which is often difficult. 
Our work raises a number of issues concerning the potential costs and benefits of expanding alternative financing mechanisms to meet our nation’s surface transportation needs. Congress likely will weigh these potential costs and benefits as it considers reauthorizing TEA-21. Expanding the use of alternative financing mechanisms has the potential to stimulate additional investment and private participation. But expanding investment in our nation’s highways and transit systems raises basic questions of who pays, how much, and when. How alternative financing mechanisms are structured determines how much of the needs are met through federal funding and how much are met by the states and others. The structure of these mechanisms also determines how much of the cost of meeting our current needs is borne by current users and taxpayers versus future users and taxpayers. While alternative finance mechanisms can leverage federal investments, they are, in the final analysis, different forms of debt financing. This debt ultimately must be repaid, with interest, either by highway users—through tolls, fuel taxes, or licensing and vehicle fees—or by the general population through increases in general fund taxes or reductions in other government services. Proposals for tax credit bonds would shift the costs of highway investments away from the traditional user-financed sources, unless revenues from the Highway Trust Fund are specifically earmarked to pay for these tax credits. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or other members of the Committees may have. For further information on this testimony, please contact JayEtta Z. Hecker (heckerj@gao.gov) or Steve Cohen (cohens@gao.gov). Alternatively, they may be reached at (202) 512-2834. Individuals making key contributions to this testimony include Lynn Filla-Clark, Jennifer Gravelle, Gail Marnik, Jose Oyola, Eric Tempelis, Stacey Thompson, and Jim Wozny. 
We estimated the costs that the federal, state, or local governments (or special purpose entities they create) would incur if they financed $10 billion in infrastructure investment using each of four alternative financing mechanisms: grants, tax credit bonds, tax-exempt bonds, and direct federal loans. The following subsections explain our cost computations for each alternative. We converted all of our results into present value terms, so that dollars spent in the future are adjusted to make them comparable to dollars spent today. This adjustment is particularly important when comparing the costs of bond repayment that occur 30 years from now with the costs of grants that occur immediately. We estimated the cost to the federal and state governments of traditional grants with a state match. We assumed the state was responsible for 20 percent of the investment expenditures. We then found the amount of federal grants such that the federal grants plus the state match totaled $10 billion. This form of matching resulted in the state being responsible for $2 billion of the spending and the federal government being responsible for $8 billion. We estimated the cost to the federal and state governments of issuing $8 billion in tax credit bonds with a state match of $2 billion. The cost to the federal government equals the amount of tax credits that would be paid out over a given bond term. We estimated the amount of credit payments in a given year by multiplying the amount of outstanding bonds in that year by the credit rate. We assumed that the credit rate would be approximately equal to the interest rate on municipal bonds of comparable maturity, grossed up by the marginal tax rate of bond purchasers. For the results presented in figures 1 and 2, we assumed that the bonds would have a 30-year term and a credit rating between Aaa and Baa. 
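The credit-rate gross-up and annual credit computation described above can be sketched as follows. The municipal bond rate, marginal tax rate, and discount rate below are illustrative assumptions, not the figures actually used for figures 1 and 2:

```python
def present_value(cashflows, discount_rate):
    """Discount a list of annual cash flows (years 1..n) to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cashflows, start=1))

# Illustrative assumptions only
muni_rate = 0.05           # municipal bond rate of comparable maturity
marginal_tax_rate = 0.30   # bond purchasers' marginal tax rate
bonds_outstanding = 8.0    # $ billions, assumed constant over the term
term = 30                  # 30-year bonds, as in the methodology
discount_rate = 0.05

# Credit rate: the municipal rate grossed up by the marginal tax rate.
credit_rate = muni_rate / (1 - marginal_tax_rate)

# Annual federal cost: outstanding bonds times the credit rate,
# paid in every year the bonds are outstanding.
annual_credits = [bonds_outstanding * credit_rate] * term
federal_cost = present_value(annual_credits, discount_rate)  # $ billions
```

The gross-up is what makes tax credit bonds the most expensive alternative for the federal government: the annual credit exceeds what tax-exempt municipal interest on the same principal would be.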
The cost to the issuing states would consist of the repayment of bond principal in future years, plus the upfront cost of $2 billion in state appropriations for the matching contribution. The cost of tax-exempt bonds to the state or local government (or special purpose entity) issuers would consist of the interest payments on the bonds and the repayment of bond principal. The cost to the federal government would equal the taxes forgone on the income that bond purchasers would have earned from the investments they would have made if the tax-exempt bonds were not available for purchase. For the results presented in figures 1 and 2, we made the same assumptions regarding the terms and credit rating of the bonds as we did for the tax credit bond alternative. We computed the cost of interest payments by the state by multiplying the amount of outstanding bonds by the current interest rate for municipal bonds with the same term and credit rating. We assumed that the pretax rate of return that bond purchasers would have earned on alternative investments would have been equal to the municipal bond rate divided by one minus the investors’ average marginal tax rate. Consequently, the federal revenue loss was equal to that pretax rate of return, multiplied by the amount of tax-exempt bonds outstanding each year (in this example), and then multiplied by the investors’ average marginal tax rate. In order to have our direct loan example reflect the financing packages typical of current TIFIA projects, we used data from FHWA’s June 2002 Report to Congress to determine what shares of total project expenditures were financed by TIFIA direct loans, federal grants, bonds issued by state or local governments or by special purpose entities, private investment, and other sources. We assumed that the $10 billion of expenditures in our example was financed by these various sources in roughly the same proportions as they are used, on average, in current TIFIA projects. 
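The federal revenue-loss formula for tax-exempt bonds described above can be sketched in the same terms; the rates below are again illustrative assumptions, not the values used for the figures:

```python
# Illustrative assumptions only
muni_rate = 0.05           # current tax-exempt municipal bond rate
marginal_tax_rate = 0.30   # investors' average marginal tax rate
bonds_outstanding = 8.0    # $ billions outstanding in a given year

# State cost: interest on the outstanding tax-exempt bonds.
state_interest = bonds_outstanding * muni_rate

# Forgone pretax return: the municipal rate divided by one minus
# the investors' average marginal tax rate.
pretax_rate = muni_rate / (1 - marginal_tax_rate)

# Annual federal revenue loss: pretax return x outstanding bonds
# x the investors' average marginal tax rate.
annual_revenue_loss = pretax_rate * bonds_outstanding * marginal_tax_rate
```

Note that the revenue loss is smaller than the tax credit bond cost for the same principal: the federal government forgoes only the tax on the investors' alternative income rather than paying the full grossed-up credit, which is why tax-exempt bonds rank below tax credit bonds in federal cost.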
We estimated the federal and nonfederal costs of the grants and bond financing components in the same manner as we did for the grants and tax-exempt bond examples above. To compute the federal cost of the direct loan component, we multiplied the dollar amount of the direct loan in our example by the average amount of federal subsidy per dollar of TIFIA loans, as reported in the TIFIA report. In the results presented in figure 1, this portion of the federal cost amounted to $130 million. The nonfederal costs of the loan component consist of the loan repayments and interest payments to the federal government. We assumed that the term of the loan was 30 years and that the interest rate was set equal to the federal cost of funds, which is TIFIA’s policy. The private investment (other than through bonds), which accounted for less than one percent of the spending, and the “other” sources, which accounted for about three percent of the spending, were treated as money spent immediately on the project. A number of factors—including general interest rate levels, the terms of the bonds or loans, and the individual risks of the projects being financed—affect the relative costs of the various alternatives. For this reason, we examined multiple scenarios for each alternative. In particular, current interest rates are relatively low by historical standards. In our alternative scenarios we used higher interest rates, typical of those in the early 1990s. At higher interest rates, the combined costs of the alternatives that involve bond financing would be higher, while the costs of grants would remain the same. If we had used bonds with 20-year terms, instead of 30-year terms, in the examples, the costs of the three alternatives that involve bond financing would be lower, but they would still be greater than the costs of grants. One of the earliest techniques tested to fund transportation infrastructure was the revolving loan fund. 
Prior to 1995, federal law did not permit states to allocate federal highway funds to capitalize revolving loan funds. However, in the early 1990s, transportation officials began to explore the possibility of adding revolving loan fund capitalization to the list of eligible uses for certain federal transportation funds. Under such a proposal, federal funding is used to “capitalize,” or provide seed money for, the revolving fund. Money from the fund is then loaned out to projects, repaid, recycled back into the fund, and reinvested in the transportation system through additional loans. In 1995, the federally capitalized transportation revolving loan fund concept took shape as the State Infrastructure Bank (SIB) pilot program, authorized under Section 350 of the NHS Act. This pilot program was originally available only to a maximum of 10 states but was then expanded under the 1997 U.S. DOT Appropriations Act, which appropriated $150 million in federal general funds for SIB capitalization. TEA-21 established a new SIB pilot program but limited participation to four states—California, Florida, Missouri, and Rhode Island. Texas subsequently obtained authorization under TEA-21. These states may enter into cooperative agreements with the U.S. DOT to capitalize their banks with federal-aid funds authorized in TEA-21 for fiscal years 1998 through 2003. Of the states currently authorized, only Florida and Missouri have capitalized their SIBs with TEA-21 funds. As part of TEA-21, Congress authorized the Transportation Infrastructure Finance and Innovation Act of 1998 (TIFIA) to provide credit assistance, in the form of direct loans, loan guarantees, and standby lines of credit, to projects of national significance. The TIFIA legislation authorized $10.6 billion in credit assistance and $530 million in subsidy cost to cover the expected long-term cost to the government of providing credit assistance.
TIFIA credit assistance is available to highway, transit, passenger rail, and multimodal projects, as well as projects involving installation of intelligent transportation systems (ITS). The TIFIA statute sets forth a number of prerequisites for participation in the TIFIA program. Project costs must be reasonably expected to total at least $100 million or at least 50 percent of the state’s annual apportionment of federal-aid highway funds, whichever is less. For projects involving ITS, eligible project costs must be expected to total at least $30 million. Projects must be listed on the state’s transportation improvement program, have a dedicated revenue source for repayment, and receive an investment grade rating for their senior debt. Finally, TIFIA assistance cannot exceed 33 percent of project costs, and the final maturity date of any TIFIA credit assistance cannot be more than 35 years after the project’s substantial completion date.
As Congress considers reauthorizing the Transportation Equity Act for the 21st Century (TEA-21) in 2003, it does so in the face of a continuing need for the nation to invest in its surface transportation infrastructure at a time when both the federal and state governments are experiencing severe financial constraints. As transportation needs have grown, Congress has provided states, in the National Highway System Designation Act of 1995 and TEA-21, additional means to make highway investments through alternative financing mechanisms. A number of states are using existing alternative financing tools such as State Infrastructure Banks, Grant Anticipation Revenue Vehicle bonds, and loans under the Transportation Infrastructure Finance and Innovation Act. These tools can provide states with additional options to accelerate projects and leverage federal assistance; they can also provide greater flexibility and more funding techniques. Federal funding of surface transportation investments includes federal-aid highway program grant funding appropriated by Congress out of the Highway Trust Fund, loans and loan guarantees, and state-issued bonds that are exempt from federal taxation. Expanding the use of alternative financing mechanisms has the potential to stimulate additional investment and private participation. However, expanding investment in the nation's highways and transit systems raises basic questions of who pays, how much, and when.
In November 1985, the Congress directed the Department of Defense (DOD) to destroy the U.S. stockpile of obsolete chemical agents and munitions and also directed that the disposal program provide for the maximum protection of the environment, the public, and the personnel involved in disposing of the munitions. Although the Army considers the likelihood of a chemical release at one of its eight storage sites to be extremely small, the health effects of an accident can be severe. Some munitions contain nerve agents, which can disrupt the nervous system and lead to loss of muscular control and death. Others contain a series of blister agents commonly, but incorrectly, referred to as mustard agents, which blister the skin and can be lethal in large amounts. State and local officials, in accordance with state laws, have primary responsibility for developing and implementing emergency response programs for communities in the event of an emergency. In 1988, the Army established CSEPP to assist communities near the chemical stockpile storage sites to enhance existing emergency preparedness and response capabilities in the unlikely event of a chemical accident. Most communities near the sites had little capability to respond to a chemical emergency when CSEPP began. Threats to the stockpile include external events such as earthquakes, airplane crashes, and tornadoes and internal events such as spontaneous leakage of chemical agent, accidents during handling and maintenance activities, and self-ignition of propellant. The effect of a chemical stockpile accident would depend on such things as the amount and type of agent released, meteorological conditions, and the community’s proximity to the storage site and emergency response capabilities. The Department of the Army is responsible for managing and funding CSEPP. Section 1521(c)(3) of 50 U.S.C. 
states that the Secretary of Defense may make grants to state and local governments, either directly or through the Federal Emergency Management Agency (FEMA), to assist those governments in carrying out functions related to emergency preparedness. Under a memorandum of understanding, the Army delegated partial management of the program to FEMA. As the primary source of technical expertise in chemical weapons, the Army determines overall program direction and provides funding. As the primary source of expertise in emergency preparedness, FEMA distributes Army funds to states through cooperative agreements and provides technical assistance. “Cooperative agreements” are legal instruments that provide federal funds when there will be substantial involvement by federal agencies in the management of state and local programs. In contrast to cooperative agreements, “federal grants” are legal instruments that provide funds when there will be no substantial federal involvement. Program funds flow from the Army to FEMA headquarters, through FEMA regional offices, and to the states. States provide funds to counties as their subgrantees. According to CSEPP guidance, FEMA is responsible for working with state and local governments in developing emergency preparedness plans, upgrading community response capabilities, and conducting training. A combined Army and FEMA office, called the CSEPP Core Team, coordinates and implements public affairs, exercises, training, communications, and other activities for the program. (See app. I for funds allocated to CSEPP entities for fiscal years 1988 through 1995.) At the state level, the Alabama EMA is responsible for CSEPP and other emergency programs. Six Alabama counties participate in the program: Calhoun, Clay, Cleburne, Etowah, St. Clair, and Talladega. Of the six counties, Calhoun County has the largest population at risk and has received most of the funds. Calhoun EMA manages CSEPP and other emergency programs for the county. 
Anniston Army Depot in Alabama stores 661,000 chemical weapons containing more than 2,200 tons of nerve and mustard agents. Included in Anniston’s stockpile are approximately 78,000 nerve agent-filled M55 rockets, the stockpile’s most unstable weapon. Before constructing its chemical stockpile disposal facility at Anniston and other stockpile sites, the Army is required to obtain certain permits and approvals from federal, state, and local regulatory agencies. Under the Resource Conservation and Recovery Act of 1976, as amended (42 U.S.C. 6901 et seq), the Environmental Protection Agency has delegated the administration of the environmental permitting process to the Alabama Department of Environmental Management (DEM). Since 1989, the Army and FEMA have awarded Alabama $46 million, more than any other state, for CSEPP. Table 1 shows that as of March 1995 Alabama had spent only one-third of the $46 million and that Calhoun County, whose share of the $46 million is $30.2 million, had spent only one-fifth of its money. Alabama and its counties have not been able to spend most of the CSEPP funds allocated to them because (1) FEMA, state, and local officials cannot agree on specific requirements for major capital projects and (2) FEMA has not provided Alabama or Calhoun County officials permission to spend some of the funds. According to FEMA, the unexpended funds are mostly the result of Calhoun County’s refusal to initiate the CSEPP projects until the Army and FEMA agree to all of the county’s demands related to specific requirements. According to Calhoun County EMA, the agency does not initiate actions that do not conform to CSEPP guidance and could be detrimental to providing maximum protection to the public. When disputes related to specific requirements occur, there is no established approach for negotiating an agreement among federal, state, and local officials. 
More than 83 percent of Alabama’s unexpended funds are associated with four projects: the 800-MHz communications system, collective protection of special facilities, tone alert radios, and personal protective equipment. On the basis of CSEPP-funded studies, Calhoun County EMA concluded in 1990 that the county’s conventional communications system did not meet CSEPP requirements. In 1992, the Army and FEMA determined that every CSEPP jurisdiction should have a functioning communications system connecting the Army installation, the state EMA, and the immediate response zone counties. The immediate response zone is the area generally extending approximately 6 to 9 miles around the storage site and is the area considered at the greatest risk from a chemical release. The Army and FEMA approved a CSEPP 800-MHz communications system for Alabama in 1993 and authorized $8.8 million and $4.4 million in fiscal years 1994 and 1995, respectively. FEMA subsequently authorized an additional $3 million, bringing total funding for the system to $16.2 million. The communications system is an integrated, simulcast network with 20 channels that operate at a frequency of 800 MHz. The system will enable Alabama emergency workers to communicate within and across agencies without having to wait for a busy channel to clear. The system is also the platform for simultaneously activating sirens and tone alert radios. In the authorization, FEMA also said that the precise number of radios, their distribution, and any follow-on radios would be decided by negotiations among FEMA, state, and county officials. Despite several years of studying, meeting, and negotiating, Alabama does not have an integrated 800-MHz communications system for CSEPP. Federal, state, and local officials did not agree on the number and distribution of the 800-MHz radios until April 23, 1996. In addition, FEMA officials decided to place a $1-million repeater tower and some radios in Talladega County’s precautionary zone.
The “precautionary zone” is the area beyond 21 to 30 miles from the storage site and, under most conditions, beyond where CSEPP activities are required; it is nevertheless where the repeater tower would be located. However, Calhoun and St. Clair county officials believe placing the tower and radios in the precautionary zone does not comply with program guidance. Some equipment will be nearly 50 miles from the Anniston Army Depot. As a result, Calhoun EMA, which is managing the contract for the 800-MHz system, was reluctant to award the contract. According to FEMA, the 800-MHz communications system is not in place because Calhoun County EMA refused to initiate work on the contract until the county’s demand for additional radios was met. According to the Calhoun EMA Director, his agency supports only projects that provide goods, services, and equipment in compliance with CSEPP guidance. On April 23, 1996, federal, state, and county officials met to resolve the issues that were delaying the implementation of the CSEPP 800-MHz project in Alabama. At the meeting, federal officials agreed to provide additional 800-MHz radios to Alabama and to Calhoun and Talladega counties. Calhoun County EMA awarded the 800-MHz contract on May 30, 1996. According to Calhoun EMA, the contractor has 16 months from the contract award date to manufacture and install the communications system. In 1989, the Oak Ridge National Laboratory concluded that, in the event of an accidental release of chemical agent, a chemical plume could cover sections of Calhoun County’s immediate response zone within 1 hour. (See app. II for a description of the potential distribution of the hazard from a chemical release.) Oak Ridge also concluded in the 1989 report that evacuation was not advisable for the general population in Anniston’s immediate response zone and recommended expedient sheltering instead.
According to another Oak Ridge National Laboratory report, a 1991 draft, Calhoun County residents would take 5 hours and 45 minutes to evacuate the greater Anniston area. Oak Ridge’s estimate is the clearance time required for 100 percent of the vehicles to evacuate the area during bad weather at nighttime. On the basis of the Oak Ridge studies, Calhoun County EMA officials believe that it would be impossible to safely evacuate everyone. However, according to a senior official from the Oak Ridge National Laboratory, Calhoun County officials should not rely on the results of the 1991 draft report for planning purposes because (1) the report was never finalized and (2) changes in road conditions and demographics since 1991 may have affected its results. To shelter the people they cannot evacuate, Calhoun County EMA officials believe collective protection is the best option. According to Calhoun EMA officials, “collective protection” is a combination of (1) a filtered, overpressurized air system and (2) adequate food, water, and medical supplies to house a selected number of people for up to 3 days in a closed facility. However, Army officials believe Calhoun EMA’s shelter time estimate of 3 days is excessive and that a chemical plume would pass over the area in 3 to 12 hours. The facilities to be provided with collective protection include schools, hospitals, jails, community centers, and public buildings that are within walking distance of homes and businesses. The Army Edgewood Research, Development and Engineering Center has completed a study to validate procedures for sheltering residents in a variety of housing types and to identify a less burdensome and costly way to protect citizens in place. The draft report is dated December 8, 1995, and comments are being incorporated for publication of the final report.
Although FEMA allocated Alabama $4.2 million for positive pressurization, county officials are reluctant to accept the allocation because they disagree with FEMA’s selection of facilities and funding amount. In September 1995, Calhoun County EMA provided federal officials a suggested list of 55 facilities for collective protection. FEMA officials selected 21 facilities from the list on the basis of the location and type of facility but did not discuss their selection with Calhoun EMA officials. According to county officials, five of the facilities FEMA selected were not their highest priority. In addition, FEMA only provided enough funding for 8 to 10 hours of support rather than the 3 days requested by the county. As a result, as of April 19, 1996, county officials had not accepted the allocation. According to FEMA, the agency has not received a formal rebuttal or request from Calhoun County to change this authorization. In 1992, the Army and FEMA agreed that every CSEPP location should have a functioning alert and notification system for communities in the immediate response and protective action zones. Tone alert radios are indoor alert and notification devices that will be placed in homes, schools, hospitals, jails, nursing homes, and businesses in the zones. The radios are capable of providing alerting signals and instructional messages about appropriate protective actions. In fiscal year 1993, FEMA allocated Alabama $900,000 to conduct a demographics survey to determine the requirements for tone alert radios and $4.3 million for the radios, with the stipulation that the funds not be released until the survey was completed. FEMA required the demographics survey to determine the number of residences and institutions requiring tone alert radios before they were purchased and installed. Because the Alabama EMA has not completed the demographics survey, FEMA has not released the funds. (See app. 
III for a discussion of Alabama EMA’s management of the demographics survey.) According to Alabama EMA officials, they are close to awarding a contract for the survey with the Argonne National Laboratory and plan to submit their contract proposal to the governor’s office in June 1996. After the contract is awarded, the demographics survey should take 6 to 9 months to complete. Personal protective equipment has been considered a critical response requirement for several years. In July 1994, the Argonne National Laboratory concluded there was a potential for the aerosol deposition of agents off post from a chemical stockpile accident at Anniston. The potential for deposition creates the requirement for personal protective equipment. “Personal protective equipment” consists of portable respirators, protective suits, gloves, boots, and hoods. Because of their traffic control, decontamination, health, and other critical response duties at the periphery of the chemical plume, local CSEPP emergency workers may find themselves in danger of contamination from an unexpected shift in the plume. In fiscal year 1994, Calhoun County EMA requested funding for personal protective equipment. FEMA deferred the request until CSEPP funds became available in fiscal year 1995. At that time, FEMA transferred $850,000 to Alabama EMA for personal protective equipment with the condition that the agency was not authorized to purchase equipment until the Occupational Safety and Health Administration completed an evaluation of available civilian protective equipment. The Occupational Safety and Health Administration completed its evaluation in late 1995. However, Calhoun EMA officials believe they need additional funding for Army-provided equipment, protective components for decontamination teams, and medical examinations for local emergency workers.
A draft document produced by the Centers for Disease Control and Prevention suggests that emergency workers who wear personal protective equipment undergo annual medical examinations. In late 1995, the Army initiated a needs assessment study to calculate new equipment requirements for Alabama and Kentucky. Alabama EMA officials assume that any additional personal protective equipment funding will be withheld pending the outcome of the assessment. According to FEMA, there is nothing preventing Calhoun County EMA from purchasing the approved equipment, but the county has refused to initiate work on the project until its demand for additional funding is approved. According to Calhoun County EMA, the agency is ready to issue a contract for the civilian respirators and protective suits after the requirements for medical examinations are defined and related funds are provided by FEMA. CSEPP has been slow to achieve the desired results in Alabama because (1) management roles and responsibilities are fragmented and unclear, (2) planning guidance is imprecise and incomplete, (3) federal officials are too involved in the management of certain local projects, (4) the budget process lacks teamwork, and (5) financial controls are ineffective. These weaknesses have resulted in time-consuming negotiations and delays in implementing projects critical to emergency preparedness. The Army and FEMA formed the CSEPP Core Team to facilitate communication with state and county officials. However, the Core Team does not function as intended. The Army’s and FEMA’s management responsibilities are not well defined, and there is no clearly defined protocol for communicating with any of the management groups. As a result, state and county EMA officials are uncertain about federal roles and responsibilities and often find themselves trying to interact with two or more officials from the CSEPP Core Team, FEMA headquarters, and the FEMA regional office.
For example, a Calhoun EMA official recently contacted a FEMA Core Team member to discuss the unresolved issue about the distribution of 800-MHz radios. The Core Team member told the county official to use the state’s chain of command and direct his inquiries through Alabama EMA and the FEMA regional office. In some cases, county EMA officials have vented their frustrations to the Army Program Manager for Chemical Demilitarization and to Members of Congress. In commenting on a draft of this report, FEMA said that CSEPP has a well-defined and long-established protocol for intergovernmental communications, under which information flows back and forth between FEMA headquarters, the FEMA regional office, the state EMA, and the county EMAs. However, FEMA’s protocol does not recognize the role and responsibilities of the CSEPP Core Team. According to the Core Team’s charter, dated January 6, 1995, the Core Team is the focal point for accountability of the program and coordinates and integrates on- and off-post activities. The Core Team was established, in part, to streamline procedures, improve responsiveness to state and local agencies, and enhance the overall budget process. We believe that FEMA’s description of this protocol supports our observation that the role and responsibilities of the CSEPP Core Team are not clearly understood by state and county officials. The Army’s and FEMA’s planning guidance, by design, allows states and counties flexibility to enhance their local emergency preparedness programs to address the different risks at the stockpile sites. In commenting on a draft of this report, FEMA said that too much precision in the guidance would limit CSEPP’s ability to change with improvements in technology and emergency management techniques. However, as a result of its imprecise nature, the guidance is often interpreted differently by federal, state, and county officials. In other areas, such as emergency medical services, reentry, and restoration, the guidance has not been completed.
CSEPP’s guidance on communications systems states that radios should go to public safety agencies. At one time, FEMA officials interpreted this to mean only agencies responding immediately to the chemical emergency. Calhoun EMA officials, on the other hand, interpret the guidance to include law enforcement, fire, rescue, and other public safety agencies responding to a chemical emergency, as well as governmental, medical, educational, and other special agencies. County officials point out that CSEPP guidance goes far beyond public safety agencies. In February 1996, after extensive negotiations, FEMA tentatively agreed to fund radios for agencies defined as quasi-public safety agencies. These agencies include the Calhoun County Road Department, Anniston Public Works, and Anniston Water Works. Federal, state, and local officials did not agree on the final number and distribution of the 800-MHz radios until April 23, 1996. In another example, Calhoun County EMA officials provided five pages of references to CSEPP guidance to justify their request for 24-hour staffing of their emergency operations center. However, the guidance does not take a firm position on the requirement for 24-hour staffing. County officials’ justification is based primarily on the 8-minute window to respond to a chemical emergency. The officials believe the county needs 24-hour staffing of its operations center to meet the 8-minute alert and notification requirement. If an incident occurred when the center was closed, it would take a minimum of 30 minutes for an employee to travel to the center and initiate the alert and notification process. Army policy is to implement 24-hour staffing of the depot’s emergency operations center when disposal operations begin and not to fund 24-hour staffing of local centers.
In commenting on a draft of this report, the Army said that Calhoun EMA should consider less costly options, such as using the county’s 911 emergency center, to initiate its alert and notification process. According to Calhoun County EMA, there are safety concerns about the location of the county’s existing 911 center in the immediate response zone. In addition, Calhoun EMA attempted to relocate and consolidate the county’s 911 emergency center with the EMA emergency operations center in the early 1990s but did not receive any support from the Army or FEMA. The need for 24-hour staffing remains an open issue among federal, state, and Calhoun County officials. Local officials are also dissatisfied with FEMA’s inconsistent interpretation of CSEPP guidance. For example, the St. Clair County EMA Director commented to us about FEMA’s inconsistent budget decisions. FEMA denied her request for alert devices for the county’s volunteer fire department on the grounds that the department was not in the protective action zone and that funding it would not comply with CSEPP guidance. In contrast, she points out, Talladega County is receiving a repeater tower and radios for its precautionary zone, outside of CSEPP guidance. According to FEMA officials, they are obtaining a waiver to CSEPP guidance for Talladega’s tower. In other cases, CSEPP guidance is not complete. Program officials originally planned to complete all program guidance and standards by September 1989. However, they have not yet completed their guidance on emergency medical services or reentry and restoration procedures. As a result, local communities lack formal guidance to help them prepare their plans and determine their requirements for medical services, reentry, and restoration. According to FEMA officials, the guidance has been distributed in draft form pending resolution of outstanding issues. They believe that the outstanding issues should not preclude the states and counties from using the drafts for daily planning.
However, Calhoun County EMA and other CSEPP participants do not consider FEMA’s drafts to be final planning guidance. FEMA has said that the states are in the best position to determine CSEPP priorities on a statewide basis and balance local requirements against the needs of all affected counties. However, our work shows that in certain cases, FEMA officials become involved in the management of local projects to the point of making specific decisions on requirements. This level of involvement has contributed to disagreements and time-consuming negotiations on projects. For example, according to Calhoun EMA officials, FEMA never consulted with the county on its selection of the 21 facilities to be collectively protected and selected 5 facilities that county officials would prefer to protect at a later date. In another example, FEMA officials had Talladega County EMA officials take them by helicopter to view the proposed sites for additional sirens. In Calhoun County, the same FEMA officials videotaped the locations where county officials said they needed sirens. With respect to the 800-MHz communications project, FEMA officials specified where the radios would be located by each agency in Calhoun County. In commenting on a draft of this report, FEMA said that past and present scrutiny by the Congress and us has resulted in the agency’s instituting stricter controls to ensure that it does not authorize unnecessarily elaborate or unreasonable funding requests. We believe that once the Army and FEMA approve and allocate funds for a CSEPP project, state and local agencies are in the best position to implement and manage the project. Similarly, FEMA also concludes in its comments that the states are in the best position to determine program priorities on a statewide basis and balance local requirements against the needs of all CSEPP counties. According to Calhoun County EMA, FEMA sometimes places unacceptable conditions on the county’s use of CSEPP funds.
For example, in September 1995, FEMA allocated Calhoun County $11,400 to complete the purchase of three mobile emergency road signs, with the following conditions: no vehicles would be provided to move the signs, no additional funding would be provided for maintenance, and Calhoun County would be accountable for the signs. Calhoun EMA rejected the funding because of the conditions. The agency reported that FEMA’s conditions were unprecedented, undesirable, and unproductive. According to Calhoun EMA officials, the county does not have vehicles available to move the signs. According to state and county officials, the budget process lacks teamwork. County officials told us they have little or no influence on the budgetary process other than making the initial request and that FEMA’s rationale for budget decisions is not fully explained to them. Alabama EMA officials said that federal officials do not understand the state’s concept of operations. For example, FEMA allocated Alabama EMA funds in fiscal year 1996 to purchase laptop computers for local public information officers to use every day and take to the joint information center during a chemical emergency. However, the intent of this allocation differs from Alabama EMA’s concept of operations, which provides for local public information officers to remain in their counties’ operations centers. The state’s concept of operations provides for county liaisons in the joint information center to handle county affairs. As a result, Alabama EMA officials plan to request that FEMA reallocate these funds to the county EMAs. Similarly, Calhoun EMA officials said that the funding process lacks teamwork and that federal officials do not understand the county’s concept of operations. FEMA deferred funding for several local projects that county officials believe should have been funded sooner.
For example, the county did not receive funding for personal protective equipment until 1995—more than 6 years after the program’s inception. In another instance, Alabama and Calhoun EMA officials concluded in 1992 that Calhoun County lacked the infrastructure to treat and care for all evacuees, but FEMA did not provide funding for host counties until fiscal year 1996. In addition, according to Calhoun EMA officials, FEMA may not have personnel with the technical expertise to adequately assess local budget requests. For example, the FEMA regional official who reviews Alabama budgets said that he did not have the technical background to assess requirements for automated information systems and did not fully understand Calhoun County’s collective protection concept. The Army’s financial management of CSEPP has not been effective in controlling the growth in costs. The Army’s current cost estimate for the program has increased by 800 percent over the initial cost estimate of $114 million in 1988. In commenting on a draft of this report, the Army said that the initial estimate was made prior to defining the program’s scope, requirements, and time frames. The Army and FEMA have already spent $350.5 million and estimate the program will cost $1.03 billion. In addition, almost $157.3 million (44.9 percent) of the expenditures have been for federal management, contracts, and Army installations. According to the Army, some of these expenditures were for computer hardware and software provided to state and local emergency management agencies and for emergency preparedness projects at Army installations at the local level. In our previous work, we concluded that the Army’s and FEMA’s management of CSEPP needed improvements to ensure that (1) local communities could effectively respond to a chemical emergency, (2) officials have accurate financial information to identify how funds are spent, and (3) program goals are achieved.
In 1994, we reported that communities near the stockpile sites lacked critical items to respond to a chemical emergency, including operational communications systems, alert and notification devices, decontamination equipment, complete automated information systems, and personal protective equipment. For example, Pine Bluff, Arkansas, and Pueblo, Colorado, did not have sirens installed, and most other stockpile sites did not have tone alert radios. According to the Army, Pine Bluff now has an operational siren system. In 1995, we reported that program officials lacked accurate financial information to identify how funds were spent and ensure that program goals were achieved. For example, Arkansas had reprogrammed $413,000 in unobligated funds to construct office space without FEMA’s approval, and Kentucky and Washington had unexpended CSEPP balances of $4.4 million and $2.4 million, respectively. Army and FEMA officials subsequently stated that they are working to improve CSEPP’s financial management. For example, the Army restructured the overall management of CSEPP and established the centralized CSEPP Core Team. In addition, the Army and participating states developed life-cycle cost estimates for CSEPP in 1995 to facilitate DOD’s oversight of the program’s escalating costs. Notwithstanding these actions, the federal financial management of CSEPP is still weak. Specifically, records on expenditure data are limited; allocation data differ among FEMA, Alabama EMA, and county EMAs; and FEMA maintains large unexpended balances of funds for Alabama and Calhoun County. In response to our 1995 report on CSEPP, DOD reported that (1) it was not cost-effective for federal program managers to account for actual CSEPP expenditures after the initial allocations were made, (2) discrepancies in allocation data among management levels were not indications of weak financial management, and (3) the existence of 2-year-old unexpended balances was not an indication of poor management.
Although the progress of CSEPP in Alabama has been hampered by management weaknesses at the federal level, some state and local actions have contributed to the delay in implementing projects critical to emergency preparedness. For example, Alabama EMA spent more than 2 years trying to contract for a demographics survey, which will serve as the basis for determining the requirements for the tone alert radios and developing critical planning documents. In addition, Calhoun County EMA has been reluctant to initiate CSEPP projects until federal officials agree to the county’s requirements. In September 1993, FEMA allocated Alabama $900,000 to conduct a demographics survey of counties in the immediate response zone. The survey was intended to serve as the basis for determining the requirements for the tone alert radios, selecting host counties in Alabama, and developing critical planning documents. Alabama EMA spent more than 2 years trying to contract for a demographics survey, and the survey has still not begun. Because Alabama EMA lacked contracting and legal personnel, the agency wanted a former consultant to manage the contract for the demographics survey and other planning studies. Initially, Alabama EMA spent 2 years trying first to hire the former consultant and then to award him a sole-source contract, but the Alabama Personnel Board denied the agency’s request for a merit position and, because of liability insurance issues, the contract was never awarded. In October 1995, Alabama EMA requested FEMA’s assistance with managing the contract. In response, FEMA contacted the Argonne National Laboratory. In December 1995, Argonne submitted a draft contract proposal to the state EMA. The agency sent Argonne’s proposal to its six CSEPP counties for their review. Initially, Calhoun EMA was reluctant to participate because the contract did not provide for specific tasks, products, time frames, and a reasonable means of relief if the specifications were not met.
In March 1996, Alabama EMA officials told us they had concurrence from all counties and planned to move forward with the contract. Agency officials submitted their contract proposal for approval to the Alabama Legislative Review Committee on May 28, 1996, and plan to submit the proposal to the governor’s office in June 1996. The purpose of the initial contract is for Argonne to develop statements of work for the first three planning projects: (1) the demographics survey, (2) evacuation time estimates, and (3) a traffic management plan. After the contract is awarded, the demographics survey should take 6 to 9 months to complete. On May 9, 1996, the Director of Calhoun County EMA reported that his agency had not concurred with the state’s moving ahead with the total proposed contract with the Argonne National Laboratory because the proposal still lacks specific requirements. The Director hopes that the specific requirements his agency is concerned about will be laid out in subsequent contractual efforts with Argonne. Federal, state, and other county officials believe that Calhoun EMA is often uncooperative and that its actions have a negative effect on the progress of CSEPP in Alabama. Alabama EMA’s correspondence with Calhoun EMA often notes that the county’s lack of teamwork consumes time and delays the progress of the program in Alabama. However, in commenting on a draft of this report, the Calhoun County EMA Director disagreed with the federal, state, and other county officials’ assessment that some of his agency’s actions have slowed the progress of the program. The Director reported that Calhoun EMA has an obligation to the citizens of the county to ensure maximum protection and that he fully supports his agency’s prior decisions and actions regarding CSEPP issues. In fiscal year 1992, FEMA allocated Alabama $1.2 million for a siren system in and around Anniston and, subsequently, asked Calhoun County EMA to manage the contract for the system.
As part of the contract, Calhoun EMA officials purchased four sirens and one activation control panel, which are still county property, for Anniston Army Depot. During the project, Calhoun EMA officials installed four of the county’s sirens on the depot but kept the control panel. County EMA officials concluded there was no need for the depot to have a control panel to activate the off-base siren system and justified keeping the panel on the basis of a local statute prohibiting the transfer of county property to the federal government. As a result, Anniston Army Depot could not activate the four sirens it received or the off-base sirens. According to Army officials, the depot plans to return the four sirens to the county and install its own sirens. The Army estimates that the upgrade and addition of sirens for the depot will cost $88,000. Calhoun County EMA also manages the contract for the CSEPP 800-MHz communications system in Alabama. In a memorandum dated October 18, 1995, after a meeting where Calhoun EMA officials declined to negotiate on the distribution of radios, an Alabama EMA official said it was a mistake to allow Calhoun County EMA to manage the contract. The official concluded that Calhoun EMA officials were unable or unwilling to look after the interests of other stakeholders in the program. However, in commenting on a draft of this report, the Calhoun County EMA Director disagreed with the state official’s assessment that Calhoun EMA was unable or unwilling to consider the interests of others in the program. The Director said that all Alabama CSEPP entities, as well as federal agencies, will directly benefit or have already benefitted from the county’s actions. 
Because of 12 major deficiencies it has identified in the program, Calhoun County EMA opposes the Army’s environmental permit application to construct Anniston’s disposal facility until it receives a written commitment from the Army to support the county’s emergency preparedness requirements or provide acceptable alternatives. According to Calhoun EMA, correcting these long-standing deficiencies is critical for the county to adequately respond to a chemical stockpile emergency. (The 12 major deficiencies are described in app. III.) In addition, Calhoun EMA officials question the Army’s ability to maintain its current level of emergency support because of the decision during the base realignment and closure process to close Fort McClellan in Alabama. Previously, Fort McClellan was to provide medical, fire, decontamination, and transportation support to Anniston Army Depot. According to Alabama DEM officials, the department does not plan to oppose the environmental permit on the basis of Calhoun EMA’s concerns. They believe that the Army has made adequate arrangements to replace Fort McClellan’s emergency response capabilities. If a conflict between DEM and Calhoun County should exist at the time a decision on the environmental permit is due, state laws allow the governor of Alabama to override local communities’ opposition in an emergency situation. According to DEM officials, the chemical stockpile weapons are considered to be a risk and, therefore, an emergency situation. We received written comments on a draft of this report from DOD, FEMA, and Calhoun County EMA. All of the agencies agreed that there has been a lack of progress in implementing CSEPP in Alabama; however, each expressed different views on the extent to which their actions contributed to the delay. The major concerns raised by each agency and our evaluations are presented here. 
The comments of DOD and FEMA are presented in their entirety in appendixes IV and V, respectively, along with our evaluation of specific points. They also provided technical clarifications and, where appropriate, we incorporated them in our report. The Director of Calhoun County EMA also provided technical clarifications, which we incorporated in our report. We did not reproduce the Director’s comments because they were technical in nature and their length and format made them difficult to reprint. DOD agreed with our assessment that the lack of progress in implementing CSEPP in Alabama relates to management weaknesses. However, DOD did not agree that federal agencies were primarily responsible. DOD suggested that a more balanced assessment would include the roles of federal, state, and local governments. In our draft report, we concluded that the lack of progress of Alabama’s CSEPP was primarily the result of management weaknesses at the federal level and that state and local actions also slowed the program. It was not our intent to leave the impression that the delay in Alabama’s CSEPP was solely the result of management weaknesses at the federal level. We have revised the final report to eliminate the reference to “primarily” and to more clearly attribute the lack of progress to federal management weaknesses and actions by state and local agencies. However, it is important to note that the problems experienced in Alabama’s CSEPP are likely to continue until an effective approach is developed for reaching timely agreements among federal, state, and local officials on specific requirements for projects. Even though other agencies are involved, CSEPP is an Army program and, as such, its progress and the stewardship of CSEPP resources are ultimately the Army’s responsibility. FEMA reported that it had serious concerns about our conclusions and the tone of the report.
Specifically, the agency stated that the draft report did not (1) incorporate information supporting FEMA actions and (2) adequately assign blame to Calhoun County EMA for many of the delays in the program. FEMA was concerned that all of the problems were attributed to federal mismanagement; in FEMA’s view, Alabama EMA and Calhoun County EMA clearly shared responsibility for many of the delays. In response to FEMA’s comments, we incorporated additional information describing the agency’s actions in the report. Our draft report recognized that state and local actions, including those of Calhoun County, contributed to the lack of progress in Alabama’s CSEPP. However, it was not our intent to attribute the lack of progress solely to federal management weaknesses, and we revised the final report to eliminate the reference to federal weaknesses as the primary cause. The Director of Calhoun County EMA agreed with our assessment that Calhoun County is not fully prepared to respond to a chemical stockpile emergency and also reported that the county is not adequately prepared to recover from the effects of chemical contamination. In addition, the Director concurred with our assessment that the lack of progress in Alabama’s CSEPP is primarily the result of management weaknesses at the federal level, but said that our draft report should have focused less on management weaknesses at the state and local levels. The Director disagreed with our assessment that some of the county’s actions have slowed the progress of the program in Alabama. He reported that Calhoun County EMA has an obligation to the citizens of the county to ensure maximum protection and that he fully supports his agency’s prior decisions and actions regarding CSEPP issues. However, as discussed in the report, we believe that some of Calhoun EMA’s actions have contributed to the lack of progress in Alabama’s CSEPP. We obtained information from the Army and FEMA on CSEPP policies, guidance, procedures, and projects.
We also interviewed officials and analyzed data given to us by officials from the Army Program Manager for Chemical Demilitarization; Anniston Army Depot; FEMA headquarters and region IV; Alabama EMA and DEM; and Calhoun, Clay, Cleburne, Etowah, St. Clair, and Talladega counties. To assess the funding and progress of Alabama’s and Calhoun County’s emergency preparedness programs, we examined a variety of federal, state, and county planning and funding documents and reconciled data among the Army, FEMA, state, and counties. To assess the status of Alabama’s and Calhoun County’s programs, we compared selected projects with program guidance and requirements and determined whether the projects complied with program goals, benchmarks, and time frames. To assess the effectiveness of the federal, state, and county management, we reviewed the Army’s and FEMA’s management structure and guidance and compared them with state and local requirements and concerns. For those critical projects not yet completed, we identified and analyzed the reasons for their delay. We also documented and analyzed the impact of (1) state and county EMAs’ involvement in the funding process, (2) the Army’s and FEMA’s feedback on the budget process and partial funding of projects, and (3) slow disbursements of funds. To assess Calhoun County EMA’s opposition to the Army’s environmental permit application, we reviewed the permitting requirements and application process and determined the status of the county’s 12 major deficiencies. Our review was conducted from November 1995 to April 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen of the Senate Committees on Armed Services and Appropriations and the House Committees on National Security and Appropriations, the Secretaries of Defense and the Army, the Directors of FEMA and the Office of Management and Budget, and other interested parties. 
We will make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions. Major contributors to this report are listed in appendix VI.

Army major contracts (over $100,000)

A variety of accidents associated with the chemical stockpile weapons can occur at the storage site or disposal facility or in transit. The distribution of the hazard from these accidents is based on a number of factors, including how much agent is released, how it is released, the duration of the release, the meteorological conditions, and the topography. In general, the risks from any release decrease as the distance from the release point increases. As a result, the level of planning decreases, and the type of planning changes, as the distance from the release site increases. CSEPP planning zones are partitioned into three territories: the innermost zone is the immediate response zone, the middle zone is the protective action zone, and the outermost zone is the precautionary zone. (See fig. II.1.) The demographics survey is 1 of 12 planning studies for which the Army and FEMA provided the Alabama Emergency Management Agency (EMA) $1.5 million in fiscal years 1992, 1993, and 1994. A demographics survey would identify the size, density, and characteristics of the population in the state’s immediate response zone. Demographics data are critical to other CSEPP projects because their requirements will be based on these data. However, the survey had not started as of May 28, 1996. According to FEMA, Alabama EMA received adequate funding for the demographics survey in fiscal year 1992, and any delays encountered in contracting for the survey resulted from the difficulties Alabama EMA experienced rather than from any involvement on the part of the federal government.
Because Alabama EMA did not have the expertise to manage the contracts for the 12 studies, including the demographics survey, the agency spent 2 years attempting to hire a person, or to contract with him, to serve as the contract manager for the 12 studies. The Alabama Personnel Board denied the agency’s request for a merit position. The agency then pursued the person through a sole-source contract. Alabama EMA officials told us that a sole-source contract was justified because the individual previously worked as a consultant for the agency and had extensive knowledge of the program. State officials gave up the pursuit for a short time when the individual could not meet the liability insurance requirements imposed by the Alabama Finance Department’s Risk Management Division. This person then went to work for Ketron Corporation, and Alabama EMA officials tried to hire him again, believing he could get the necessary liability insurance through the corporation. However, by September 1995, negotiations with Ketron had fallen through. In October 1995, Alabama EMA requested FEMA’s assistance with contracting for the demographics survey. FEMA contacted Argonne National Laboratory and requested its services. In December 1995, Argonne submitted a draft contract proposal to the state EMA. The Alabama EMA sent the proposal to its six CSEPP counties for review. Initially, Calhoun County EMA informed the agency that it was reluctant to participate in the contract because the proposal did not provide for specific tasks, products, time frames, and a reasonable means of relief if provisions were not met. In March 1996, Alabama EMA officials said that all county EMAs concurred with the proposed contract and they planned to move forward with negotiations. Agency officials submitted their contract proposal for approval to the Alabama Legislative Review Committee on May 28, 1996, and plan to submit the proposal to the governor’s office in June 1996.
The purpose of the initial contract is for Argonne to develop statements of work for the first three planning projects: (1) the demographics survey, (2) evacuation time estimates, and (3) a traffic management plan. After the contract is awarded, the demographics survey should take 6 to 9 months to complete. In commenting on a draft of this report, the Director of Calhoun County EMA said that his agency had not concurred with the state’s moving ahead with the total proposed contract with the Argonne National Laboratory because the proposal still lacks specific requirements. The Director hopes that the specific requirements his agency is concerned about will be laid out in subsequent contractual efforts with Argonne. This deficiency will be alleviated when the Argonne National Laboratory completes the 12 planning studies. The evacuation time estimate study is 1 of the 12. Although funds were allocated in fiscal year 1993, Alabama communities still do not have tone alert radios. Tone alert radios are indoor alert and notification devices that will be placed in homes, schools, hospitals, jails, nursing homes, and businesses in the immediate response and protective action zones. These warning devices are to be activated by the 800-megahertz (MHz) communications system to warn people of a chemical emergency and provide voice instructions on what to do. Until the radios are in place, according to Calhoun EMA officials, local citizens cannot be adequately warned of a chemical stockpile emergency. In fiscal year 1993, FEMA allocated Alabama EMA $4.3 million for tone alert radios with the stipulation that funds would not be released until the agency had completed a demographics survey to determine the number of residences and institutions needing the radios before they are purchased and installed. Calhoun EMA cannot purchase tone alert radios because the demographics survey is not completed.
According to FEMA, even if the tone alert radios had been purchased when initially funded, they would have remained unusable because Calhoun County EMA delayed implementation of the 800-MHz communications system needed to activate the radios. In addition, on April 12, 1996, an Alabama EMA official told us that FEMA was in the process of revising the standards for the tone alert radios. Table III.1 shows the breakdown of funding for the radios in fiscal year 1993. Calhoun EMA officials said FEMA has not allocated enough funding to meet the county’s requirement for tone alert radios. The initial funding estimate was based on obtaining 30,000 radios. However, county EMA officials now estimate the county will need approximately 50,000 radios. Personal protective equipment is needed to provide protection for emergency workers responding to a chemical emergency. According to CSEPP guidance, personal protective equipment is required in any situation where there is a possibility that emergency personnel will encounter a chemical agent during the performance of their duties. Personal protective equipment consists of a portable respirator, a protective suit, gloves, boots, and a hood. According to Calhoun County EMA officials, emergency workers cannot adequately respond to a chemical emergency until they are provided basic protection. Because of their assigned traffic, decontamination, health, and other critical response duties at the periphery of the chemical plume, local emergency workers may find themselves in danger of contamination from an unexpected shift in the plume. In July 1994, the Argonne National Laboratory concluded there was a potential for aerosol deposition of a chemical agent off-post. This potential deposition creates the requirement for personal protective equipment. According to Calhoun EMA, local emergency workers who might normally help during a chemical emergency would have to evacuate if they did not have personal protective equipment.
According to the Army, the typical public safety official should not be located in the predicted hazard area. However, the Army and FEMA allocated Alabama $850,000 for personal protective equipment in 1995, with the condition that the agency would not purchase the equipment until the Occupational Safety and Health Administration completed its ongoing evaluation. Although the Occupational Safety and Health Administration had completed its evaluation at the end of 1995, personal protective equipment requirements in Alabama are still uncertain. According to Calhoun EMA officials, $780,000 is sufficient to purchase the required 1,148 sets of equipment. However, county EMA officials believe they need additional funding for Army-provided equipment, protective components for decontamination teams, and medical examinations for local emergency workers. A draft document produced by the Centers for Disease Control and Prevention suggested that emergency workers who wear personal protective equipment complete annual medical examinations. Recently, the Army initiated a needs assessment study to determine requirements for Alabama and Kentucky. Alabama EMA officials assume any additional personal protective equipment funding will be withheld pending the outcome of the new needs assessment. Table III.2 breaks down FEMA’s funding for personal protective equipment in fiscal year 1995. According to Army and FEMA officials, Alabama and Calhoun County EMAs have been authorized since December 1995 to purchase the baseline equipment with the funds already authorized. They do not understand why the agencies have not acted more aggressively in obtaining the equipment. In commenting on a draft of this report, FEMA said that there is nothing preventing Calhoun County EMA from purchasing the approved equipment but the county has refused to initiate work on the project until its demand for additional funding is approved.
According to Calhoun County EMA, the agency is ready to issue a contract for the civilian respirators and protective suits when requirements for medical examinations and related funding are established and provided by FEMA. According to Calhoun County EMA officials, local citizens do not know where to evacuate in case of a chemical emergency. Parents are especially concerned about their children and demand to know where their children will be in the event county schools are evacuated. Regardless, Alabama EMA officials believe FEMA’s recent selection and funding of Lee, Jefferson, and Madison counties as reception and host counties essentially settled Calhoun EMA’s concern. Host counties in Alabama are required to receive, decontaminate, medically screen, treat, and shelter an estimated 110,000 evacuees in case of a chemical emergency. The state EMA initially suggested some Calhoun County residents evacuate to Georgia. FEMA rejected this request and suggested the state study the option of sending evacuees to safe locations in the protective action zone. According to FEMA, the decision not to expand the program into Georgia was based on sound fiscal management. However, the counties in the protective action zones are rural, and do not have adequate infrastructure to process evacuees. Therefore, Alabama and Calhoun County EMA officials recommended that Lee, Jefferson, and Madison counties, which have the necessary infrastructure to provide mass care, serve as reception and host counties. In March 1996, FEMA approved the state’s selection of host counties. The annual costs, mostly for planning and preparation activities, are estimated to range from $50,000 to $60,000 for each county. In commenting on a draft of this report, FEMA said that reception and mass care facilities have been identified and CSEPP officials are in the process of working with the host counties. 
Because of FEMA’s recent approval of funds for host counties, according to Calhoun County EMA, Alabama participants can start working toward meeting the CSEPP requirement for reception and mass care facilities. Calhoun County EMA officials said that they first proposed the concept of collective protection about 4 years ago, but no one from the Army or FEMA ever discussed the idea with them. Collective protection provides pressurized shelter with an air-filtering system and enough food, water, and supplies to house a selected number of people for up to 3 days. On March 25, 1996, Alabama EMA transferred $4.2 million to Calhoun County for collective protection projects. In 1989, Oak Ridge National Laboratory concluded that, in the event of an accidental release of chemical agent, the chemical plume could cover segments of Calhoun County’s immediate response zone in 1 hour. Oak Ridge also concluded in the 1989 report that evacuation was not recommended for the general population in Anniston’s immediate response zone and recommended expedient sheltering. According to another Oak Ridge National Laboratory 1991 draft report, it would take 5 hours and 45 minutes to evacuate the residents in the greater Anniston area. The estimate is the clearance time required for 100 percent of the vehicles to evacuate the area during bad weather at nighttime. On the basis of the Oak Ridge studies, Calhoun County EMA officials believe it would be impossible to safely evacuate everyone from the chemical plume. To shelter the people they cannot evacuate, county officials believe collective protection is the best option. However, according to a senior official from the Oak Ridge National Laboratory, Calhoun County officials should not rely on the 1991 draft report for planning purposes because (1) the report was never finalized and (2) changes in road conditions and demographics since 1991 may have affected the results reported in the draft.
According to the Army, Calhoun EMA must be planning to evacuate the entire immediate response zone; Army officials believe that a more prudent action would be to evacuate only those portions of the county that would be at risk. Calhoun County EMA’s collective protection concept involves both building protection systems and community shelters. County EMA officials believe building protection systems will be needed in hospitals, schools, nursing homes, jails, and other facilities that cannot be quickly evacuated. This system consists of a small enclosed room that folds out within a larger room and contains an air filtration system and adequate food, water, sanitary, and medical supplies. Community shelters would include large facilities containing an air filtration system and provisions. The shelters would be located so that residents could walk to them during a chemical emergency. Alabama EMA officials told us more research and data are needed to make any rational decision on Calhoun County’s proposal for collective protection. The Army Edgewood Research, Development and Engineering Center has completed a study to validate procedures for sheltering residents in a variety of housing types and identify a less burdensome and costly way to protect citizens in place. The draft report is dated December 8, 1995, and comments are being incorporated for publication in the final report. On September 12, 1995, FEMA allocated Alabama $4.2 million for positive pressurization projects in Calhoun County. On March 25, 1996, Alabama EMA transferred the $4.2 million authorization to Calhoun County. However, positive pressurization is just one portion of Calhoun County’s concept of collective protection. The county’s concept combines filtered, pressurized air with supporting food, water, and medical supplies to house specific numbers of people for up to 3 days. As a result, Calhoun EMA officials believe the $4.2 million allocation is too little.
They estimate that the county will require about $67.6 million for collective protection—$16 million for building protection sites and $51.6 million for the community shelters. The Army believes that a chemical plume would pass over the area in 3 to 12 hours and that Calhoun EMA’s shelter time estimate of 3 days is excessive. On the basis of the type of facilities, distance from the storage site, potential to support nearby communities, and available funding, FEMA selected 21 facilities in Calhoun County for positive pressurization. However, according to Calhoun EMA officials, FEMA officials never coordinated their selection of the 21 facilities with them. Although county officials provided FEMA a suggested list of 55 facilities for collective protection, they disagree with 5 of the 21 facilities selected. They believe other facilities in the county have a greater need for collective protection. As a result, county officials would prefer to defer protection of the 5 disputed facilities until a later date and replace them with 5 facilities they consider higher priorities. According to FEMA, the agency has not received a formal rebuttal or request from Calhoun County to change this authorization. According to Army and FEMA officials, funds will be allocated in the future to pressurize additional facilities in the county. After several years of studying and meeting, Alabama still does not have an integrated communications system. On the basis of CSEPP-funded research completed in 1990 and 1991, Calhoun EMA officials decided that the existing conventional communications system did not meet CSEPP integrated requirements. In 1992, Army and FEMA officials agreed that every CSEPP jurisdiction should have a functioning communications system connecting the Army installation, state EMA, and counties in the immediate response zone. In May 1993, FEMA approved the 800-MHz communications system for CSEPP in Alabama.
The 800-MHz communications system is an integrated, simulcast network with 20 channels that operate at a frequency of 800 MHz. The CSEPP system will provide Alabama and Calhoun and Talladega counties with a critical inter- and intra-agency communications capability without having to wait for a channel to clear if someone is using it. The system can also be used as the platform to simultaneously activate sirens and tone alert radios. Initially, federal officials anticipated local EMAs would jointly acquire and maintain the 800-MHz system. According to Alabama EMA officials, they wanted to handle the contract, but FEMA officials allowed Calhoun County to manage it. However, according to Calhoun EMA officials, Alabama EMA could not put together the contract, so FEMA officials asked the county to manage it. The following chronology shows the growth in costs for the CSEPP 800-MHz communications system in Alabama: FEMA provided Alabama $8.8 million for the baseline system and $4.4 million to expand the system and purchase additional radios in fiscal years 1994 and 1995, respectively. According to the authorization letter, the funds were considered “not to exceed” limits for the project. The letter also declared that the precise number of radios, their distribution, and follow-on radios would be determined by negotiations among FEMA, state, and county officials. In June 1995, FEMA authorized an additional $1,034,426 for the placement of a second communications tower in Talladega County. The letter also said that any negotiated reductions in the system’s cost would be applied to additional field equipment at the discretion of FEMA, state, Calhoun, and Talladega officials. In August 1995, FEMA provided an additional $2 million for more equipment and radios, bringing the total amount available for the 800-MHz system to $16.2 million.
Calhoun EMA officials announced during a meeting in October 1995 that, with the additional $2 million, they could obtain the required communications equipment plus 1,187 extra radios and that Calhoun EMA intended to keep all the extra radios. According to other program officials, they attempted to negotiate with Calhoun EMA officials regarding the additional radios, but Calhoun officials would not negotiate. According to Calhoun EMA officials, they tried to discuss the distribution of the additional radios, but Talladega County officials left the meeting. In a memorandum describing the meeting, an Alabama EMA official said it was a mistake for Calhoun County to manage the contract. The official concluded that Calhoun EMA officials were unable or unwilling to look after the interests of other stakeholders in Alabama. In commenting on a draft of this report, the Calhoun County EMA Director disagreed with the state EMA official’s assessment that his agency was unable or unwilling to consider the interests of others in the program. He said that all Alabama CSEPP entities either have benefited or will directly benefit from the county’s actions related to CSEPP. According to Alabama and Calhoun County officials, the number and distribution of radios were tentatively negotiated in December 1995. However, FEMA, state, and county officials continued to disagree about the number of radios needed by first responders until April 23, 1996. In addition, FEMA officials decided to place a $1-million communications tower and some radios for Talladega County in its precautionary zone, with some equipment nearly 50 miles from Anniston Army Depot. Calhoun and St. Clair county officials believe placing the equipment in the precautionary zone does not comply with program guidance. As a result, Calhoun EMA officials were reluctant to award the contract.
On March 15, 1996, the Calhoun County Commission Chairman sent a letter to the Program Manager for Chemical Demilitarization expressing his concerns about the 800-MHz system and recommending that the Army reevaluate FEMA’s distribution of radios. On April 23, 1996, federal, state, and county officials met to resolve the issues that were delaying the implementation of the 800-MHz project in Alabama. At the meeting, federal officials agreed to provide additional 800-MHz radios to Alabama and Calhoun and Talladega counties. In return, Calhoun County EMA awarded the 800-MHz contract on May 30, 1996. According to the Calhoun EMA, the contractor has 16 months from the contract award date to manufacture and install the communications system. According to the Army, Calhoun EMA’s claim that the county does not have a sufficient communications system to adequately respond to a chemical stockpile emergency implies that the county is not prepared to respond to other hazards, such as earthquakes, tornadoes, and hazardous material incidents. The Army concluded that, until CSEPP provided funding for the county’s communications system, Calhoun EMA was unable to provide basic emergency protection to its citizens. According to the Army, Calhoun County has not provided any funding to upgrade its local communications system. According to FEMA, the CSEPP 800-MHz communications system is not in place because Calhoun EMA refused to initiate work on the contract until the county’s demand for additional radios was met. According to the Calhoun County EMA Director, his agency only supports projects that provide goods, services, and equipment complying with CSEPP guidance. Federal, state, and Calhoun County officials differ on the need for 24-hour staffing of the county emergency operations center.
The center serves as the location where responsible officials gather during a chemical emergency to direct and coordinate operations, communicate with officials from other jurisdictions in the field, and formulate protective action decisions. The Army policy is to implement 24-hour staffing of the depot’s emergency operations center when disposal operations begin and not to fund 24-hour staffing of local centers. Alabama EMA officials believe the Army should staff the depot’s emergency operations center during both storage and disposal operations. State officials told us the current lack of 24-hour staffing at the depot’s center results in a less than adequate immediate response capability during nonworking hours and places local citizens at unnecessary risk. CSEPP guidance allows Anniston Army Depot 5 minutes from the initial detection of an actual or likely chemical agent release to notify local points of contact of the release, its emergency notification level, and recommended protective actions. Calhoun EMA officials believe they should staff their center 24 hours a day. Currently, the Calhoun County emergency operations center is staffed only during normal working hours, about 24 percent of the time. County EMA officials believe this would present a problem if a chemical emergency occurred during the other 76 percent of the time, when the center is empty. Calhoun EMA officials believe this is unacceptable when it takes a minimum of 30 minutes for agency employees to reach the center and begin activating the alert and notification process. According to CSEPP guidance, the time that elapses from the chemical accident to the decision to warn the public of the danger is of paramount importance to the success of the public alert and notification system. The guidance also requires that the outdoor alert and notification system be capable of providing an alerting signal and instructional message within 8 minutes from the time a decision is made that the public is in danger.
County EMA officials plan for a response time of 8 minutes—5 minutes to make a protective action decision and 3 minutes to alert and notify the public. According to county officials, if an emergency occurs while the center is empty, the lack of any capability to quickly activate the alert and notification system places local citizens at risk. Calhoun EMA officials have proposed three ways to resolve the 24-hour staffing issue with CSEPP funds: Provide Calhoun EMA additional people to staff its emergency operations center 24 hours a day. According to the Director of Calhoun County EMA, the current staff’s job descriptions do not provide for shift rotations to allow them to operate the center full time. Consolidate the county’s 911 emergency center and CSEPP operations center. Currently, the 911 center is located in another facility in the immediate response zone, an area that would be evacuated during a chemical emergency. Require the Army to administer the immediate response operations and initiate the alert and notification system. Army and FEMA officials state that there is no need for Calhoun County to have a 24-hour emergency operations center on the basis of Anniston’s risk assessment. The risk assessment concludes that the greatest risk of a chemical accident is during normal handling and maintenance activities. The Army plans to staff Anniston’s emergency operations center 24 hours a day when disposal operations begin. Until then, Anniston has a duty officer in charge 24 hours a day. In the unlikely event a chemical emergency was to occur, Army officials would contact Calhoun County’s 24-hour 911 emergency center, which would notify the local emergency response agencies. 
In commenting on a draft of this report, the Army said that Calhoun County EMA should consider less costly and equally effective alternatives to 24-hour staffing of the county’s CSEPP operations center, such as using the county’s 911 emergency center to initiate the alert and notification process. In addition, FEMA believes that the cost of 24-hour staffing of the CSEPP emergency operations center outweighs the benefits in light of available alternatives, ranging from using the county’s current 911 emergency system to using the off-post warning system. FEMA officials also recommend that the 911 center stay in the immediate response zone and that its building be overpressurized to allow the center to operate during a chemical emergency and be responsible for the initial alert and notification actions. According to FEMA, Calhoun County refuses to consider reasonable alternatives adopted by other counties participating in CSEPP. However, Calhoun County EMA questions the feasibility, absent additional analysis, of the Army’s and FEMA’s concept of using the county’s 911 emergency center to initiate a CSEPP response. Alabama and Calhoun County EMA officials believe FEMA does not provide adequate support and money for local public awareness programs. Calhoun officials cite the county’s $9,000 allocation in fiscal year 1995 for public awareness activities as one of the reasons for their concern. In addition, they note that the county has over 60 public schools, a university, 3 hospitals, 5 nursing homes, and approximately 120,000 people. Alabama EMA officials said that they agreed with the county on this issue. Army and FEMA officials said that Calhoun EMA officials did not consider funds allocated to pay the salary of the county’s public information officer in their $9,000 figure. Federal officials also recognize that 1995 was a lean year for CSEPP.
In contrast to the funding for fiscal year 1995, Calhoun County received over $150,000 for its public awareness program, but less than requested, in fiscal years 1994 and 1996. (See table III.3.) In commenting on a draft of this report, FEMA said that Calhoun County’s requests for funds do not professionally support the public affairs mission of informing the public of how to respond in the case of a chemical stockpile emergency. For example, FEMA reported that some of the county’s requests were intended to fund frisbees, key chains, baseball caps, T-shirts, and pencils. According to Calhoun County EMA, these public awareness items comply with CSEPP guidance, which provides that each CSEPP jurisdiction consider (1) using a variety of methods to communicate with the public and (2) developing promotional items for distribution at community fairs, shopping malls, and public meetings. According to Calhoun EMA officials, additional sirens are needed to adequately warn the public in case of a chemical emergency at Anniston Army Depot. Currently, the county has 43 sirens. According to Alabama EMA officials, they have supported the county’s position on this issue for several years, pending the on-site assessment of the current siren system. Calhoun EMA officials believe they need at least 19 additional sirens to adequately warn the public of a chemical emergency. The immediate response zone has dead spots, where the population cannot hear the sirens, and the protective action zone has special population areas that are not covered by the current system. County officials said they saved $102,947 from their negotiations for the initial siren contract to pay for some of the additional sirens. However, FEMA is withholding the funds pending a site survey and a new site assessment and sound propagation study. 
Calhoun County EMA supports the requirement for the site assessment and sound propagation study but questions why the assessment and study are required only for Calhoun EMA and not for other CSEPP entities. FEMA reallocated Calhoun County $128,000 for a new sound propagation study and additional sirens in fiscal year 1995. In addition, FEMA reported that it would authorize the expenditure of existing funds to purchase additional sirens if the study validates the requirement. According to Alabama EMA, FEMA has been slow in taking action to resolve Calhoun County’s concern that the current siren system is inadequate to warn the public of a chemical stockpile emergency. According to Calhoun EMA officials, the agency’s ability to respond to and recover from a chemical emergency depends on its automated information system. County officials identified several items they believe are required to sustain or enhance their automated capabilities, including remote automated workstations for county officials, additional projectors, a backup server, and an optical jukebox. In fiscal years 1995 and 1996, Calhoun EMA requested more than $1 million for automated data processing equipment. The Army and FEMA approved $79,700 for automation equipment in fiscal year 1995 and $201,000 in fiscal year 1996. According to Calhoun EMA officials, inadequate automation capabilities are still an unresolved issue for the county. According to FEMA, the necessary equipment for the Federal Emergency Management Information System has been authorized for purchase for Alabama. FEMA said that Calhoun County EMA was insisting on equipment that exceeds the automation requirements for the county. Calhoun EMA officials said they need 19 remote automated workstations for local officials from the County Commission, the County Health Department, the American Red Cross, mayors’ offices, hospitals, and several other groups. The workstations are estimated to cost about $8,000 each.
According to county EMA officials, these workstations would allow local officials to train and participate in daily CSEPP operations and to operate from their offices during a chemical emergency if they could not travel to the county’s emergency operations center. State EMA officials believe procurement and maintenance costs are too high for the county’s workstation concept, especially when many other higher-priority projects are not fully funded. In fiscal year 1995, Army and FEMA officials rejected the workstation concept, stating that it provides for unnecessary automation countywide. In fiscal year 1996, Calhoun EMA reported that remote stations are required to ensure that daily operations are carried out and to increase the county’s preparedness, response, and recovery capabilities. Army and FEMA officials again rejected the funding, stating that the other local agencies could provide data to Calhoun EMA’s data entry clerk for input to the county’s information system. According to Calhoun EMA officials, this would be difficult because the agency’s one data entry clerk is already overworked. FEMA funded six screens for Calhoun County’s operations center but only three projectors. According to Calhoun EMA officials, three projectors are not enough during a chemical emergency. In addition, county officials said the current projectors need to be replaced because of the inadequate funding allocated for repair and maintenance. The projectors are operated daily and have more than 4,000 hours of use, compared with the recommended maximum of 1,700 hours. In fiscal year 1995, FEMA and Alabama EMA officials said Calhoun County’s request for three additional projectors was not adequately justified. FEMA officials concluded that the county already had the required number of projectors. In fiscal year 1996, state officials changed their position and agreed with the county’s request for three new projectors if the county traded in the used ones.
However, FEMA still rejected Calhoun’s request for funds. FEMA officials recommended that the existing projectors be used in moderation (not daily) and adequately maintained. In addition, FEMA officials said funding the county’s six screens was an oversight on their part and that only three screens were necessary. In response, Calhoun EMA officials said their county has the greatest response requirement of any county and, therefore, requires a greater number of spatial displays. According to Calhoun County EMA officials, a backup server is required in case the primary server crashes. Calhoun EMA documents indicate that the primary server has crashed or locked up several times and that, on one occasion, the server was down for about a month. In fiscal years 1995 and 1996, the state EMA concurred with the county’s requests for a backup server on the basis of program guidance. FEMA officials rejected the requests, stating that Alabama EMA, Anniston Army Depot, or Talladega County would have servers attached to their areawide network, which could serve as backups. However, according to Calhoun EMA officials, if the county server goes down, they cannot hook up to the other servers at the state EMA, Anniston, or Talladega County. In addition, Calhoun County officials said the other servers cannot perform as Calhoun’s backup because the other automated systems do not have the county’s requirements or databases. Calhoun EMA officials told us that other required automated data processing items are also unfunded or partially funded. For example, the county EMA requested $63,000 for an optical jukebox to provide on-line mass data backup and storage. However, FEMA and state EMA officials rejected the quoted price, stating that the county could use less expensive storage equipment. As a result, FEMA allocated Calhoun County $24,000 for the optical jukebox on December 13, 1995.
However, according to county officials, their initial request was based on a vendor’s quoted price for the item, and federal officials did not seem to understand that the county could not purchase the item with less money. Although Army and FEMA officials originally planned to complete all CSEPP planning guidance and standards by September 1989, planning guidance for emergency medical services, reentry, and restoration procedures remains incomplete. As a result, local communities lack formal guidance to help them prepare their plans and determine their requirements for these emergency response issues. According to FEMA officials, the guidance is scheduled to be issued in mid-1996. On June 27, 1995, the Centers for Disease Control and Prevention published in the Federal Register its recommendations for medical preparedness guidelines for communities near the chemical stockpile storage sites. The Army reported that the recommendations were available to all locations for use. According to FEMA, the guidance has been distributed in draft form pending resolution of outstanding issues. The agency concluded that the outstanding issues should not preclude the states and counties from using the drafts for daily planning. However, Calhoun County EMA and other CSEPP participants do not consider FEMA’s drafts as final planning guidance. The following are GAO’s comments on the letter from the Office of the Assistant to the Secretary of Defense for Atomic Energy. The letter was received on May 28, 1996. 1. It was not our intent in our draft report to leave the impression that the delay in Alabama’s CSEPP was solely the result of management weaknesses at the federal level. We have revised the final report to eliminate the reference to “primarily” and to more clearly attribute the lack of progress to both federal management weaknesses and actions by state and local agencies.
It is important to note that the problems experienced in Alabama’s CSEPP are likely to continue until an effective approach is developed for reaching timely agreements among federal, state, and local officials on specific requirements for projects. 2. In the draft of this report, we stated that the Army had taken some encouraging steps to improve the management and oversight of the Chemical Stockpile Disposal Program. For example, the Army restructured the overall management of CSEPP and established a centralized office to streamline procedures, improve responsiveness to the states and counties, and improve the budget process. However, we found little evidence that these steps had any significant effect on the federal management of CSEPP in Alabama. For example, during this review, we found that records on expenditure data are limited; allocation data differ among FEMA, Alabama EMA, and county EMAs; and FEMA maintains large unexpended balances of funds for Alabama and Calhoun County. The following are GAO’s comments on the letter from the Associate Director for Preparedness, Training and Exercises, FEMA. The letter is dated May 29, 1996. 1. It was not our intent to leave the impression that the delay in Alabama’s CSEPP was solely the result of management weaknesses at the federal level. We have revised the final report to delete references to “primarily” and to more clearly state that federal management weaknesses and state and local actions have contributed to the delay. However, until the Army and FEMA take steps to delineate their roles and responsibilities, complete and clarify CSEPP’s planning guidance, reduce their involvement in state and local management of projects, and implement effective financial controls, federal, state, and local officials will continue to disagree on specific CSEPP requirements, and time-consuming negotiations on projects in Alabama are likely to continue. 2. See comment 1. 3.
We revised the report to show that some of FEMA’s expenditures support the entire CSEPP community, including the development of program guidance, training courses, and computer software. However, almost 45 percent of all CSEPP funds have been for federal management, contracts, and military installations such as the Anniston Army Depot. Specifically, $190.4 million (54.3 percent) was allocated to the state and counties, $157.3 million (44.9 percent) was allocated to the Army and FEMA, $1.1 million (0.3 percent) was allocated to other entities, and $1.8 million (0.5 percent) is unallocated. In our 1995 report on CSEPP’s financial management weaknesses, we said that allocated funds at four of the eight storage sites were generally used for priority items and other critical CSEPP projects. However, because of weaknesses in FEMA’s financial management and reporting, we were unable to provide a complete picture of how program funds were spent at the other four storage sites, and we reported that the program was susceptible to fraud, waste, and abuse. In addition, we did not report that CSEPP funds were effectively allocated. On the contrary, we reported that critical items needed by local communities to adequately respond to a chemical stockpile emergency were not operational or had not been purchased. 4. We revised the final report to more clearly state that some of Alabama EMA’s and Calhoun County EMA’s actions have contributed to the lack of progress in Alabama’s CSEPP. However, we do not agree with FEMA’s position that the unexpended funds are mostly the result of Calhoun County EMA’s refusal to initiate CSEPP projects until the Army and FEMA agree to all of the county’s demands. The delays experienced in Alabama’s CSEPP are likely to continue until an effective approach is developed for reaching timely agreements among federal, state, and local officials on specific requirements for projects. 5.
We revised our report to reflect FEMA’s position that the 800-MHz communications system is not in place because Calhoun County EMA refused to initiate work on the contract until the county’s demand for additional radios was met. However, we disagree with FEMA’s statement that the overall scope of the 800-MHz communications project was resolved in 1993. Since 1993, the Army and FEMA allocated $1 million and $2 million for additional equipment and radios in June 1995 and August 1995, respectively. As recently as April 23, 1996, FEMA authorized additional radios for Alabama and Talladega and Calhoun counties. It appears that all the disagreements about the project may have been resolved on April 23, 1996, when Army and FEMA officials agreed to provide additional 800-MHz radios to Alabama and Talladega and Calhoun counties. Calhoun County EMA officials awarded the 800-MHz contract on May 30, 1996. According to Calhoun EMA officials, the contractor has 16 months from the contract award date to manufacture and install the communications system. The 800-MHz project is an example in which Calhoun County EMA delayed implementation of the project until it received enough radios, in its opinion, to help ensure maximum protection for the citizens of the county. In addition, Alabama and Talladega County benefited from Calhoun EMA’s efforts in that they also received additional radios. In summary, we question FEMA’s conclusion that Calhoun County EMA wrongfully delayed the 800-MHz project because the county insisted on a system that exceeded CSEPP requirements; after 3 years of negotiations, FEMA itself agreed to fund the county’s request. Problems similar to those experienced with the 800-MHz project are likely to continue in Alabama until an effective approach is developed for reaching timely agreements among federal, state, and local officials on specific requirements for projects. 6.
We revised the final report to include that, according to FEMA, the agency has not received a formal rebuttal or request from Calhoun County to change the authorization for the collective protection project. We also added to the report that Army officials believe Calhoun EMA’s shelter time estimate of 3 days is excessive and that a chemical plume would pass over the area in 3 to 12 hours. However, our concern with this project was that FEMA officials did not discuss their selection of facilities to be protected with local officials and selected five facilities that county officials would prefer to protect at a later date. In addition, according to Calhoun EMA, FEMA did not (1) provide enough funding for the supplies requested by the county or (2) discuss FEMA’s methodology for estimating the average cost of $200,000 to protect each facility. Finally, as a result of CSEPP’s fragmented management structure, there was a 6-month lapse between FEMA headquarters’ authorization and Calhoun County’s receipt of it. 7. We revised our report to include FEMA’s position that there is nothing preventing Calhoun County EMA from purchasing the approved personal protective equipment but that the county has refused to initiate work on the project until its demand for additional funding is approved. As discussed previously, Calhoun County EMA is ready to issue a contract for the civilian respirators and protective suits when the requirements for medical examinations are defined and related funds are provided by FEMA. Although FEMA allocated funds for personal protective equipment in fiscal year 1995, federal and local officials are still negotiating specific requirements. The problems experienced in Alabama’s CSEPP are likely to continue until an effective approach is developed for reaching timely agreements on specific requirements among federal, state, and local officials. 8. We revised our report to include the protocol for intergovernmental communications as described by FEMA.
However, FEMA does not recognize the role and responsibilities of the CSEPP Core Team in its protocol. According to the Core Team’s charter, dated January 6, 1995, the team is the focal point for accountability of the program and coordinates and integrates on- and off-post activities. The Core Team was established, in part, to streamline procedures, improve responsiveness to state and local agencies, and enhance the overall budget process. Because of differences similar to these, we continue to believe that the role and responsibilities of the CSEPP Core Team are not clearly understood by state and county officials. In addition, we disagree with FEMA’s statement that CSEPP has had a long-established protocol for communications. Army and FEMA officials routinely communicate with local officials without complying with the protocol described by FEMA. During this review, FEMA officials conducted on-site inspections of the CSEPP siren system in Alabama and routinely contacted county officials outside of FEMA’s stated protocol. 9. According to FEMA, to the extent that CSEPP guidance is unclear, such flexibility is necessary to meet the diverse functional, technical, and geographical needs of CSEPP and the ill-defined maximum protection mandate of the Chemical Stockpile Disposal Program. We believe that without clear and complete program guidance, disagreements and time-consuming negotiations on projects in Alabama are likely to continue. In May 1996, we reported similar concerns about FEMA’s ambiguous criteria for its disaster assistance program. We revised the report to show that CSEPP guidance has been distributed in draft to state and county agencies pending resolution of outstanding issues. FEMA officials believe that the outstanding issues should not preclude the states and counties from using the drafts for daily planning. However, Calhoun County EMA officials do not consider FEMA drafts as final planning guidance. 
In addition, Alabama EMA officials said the program still needs to resolve numerous problems with reentry and restoration issues and that the continuous changes and redirection of the program have diverted resources away from protecting the public and the environment. Clay County EMA officials in Alabama told us that there is a general lack of clear guidance for CSEPP. In addition, Etowah County EMA officials said that CSEPP standards and guidance were changed whenever Army and FEMA officials wanted to change them, without regard to the needs of local governments. CSEPP has had a working definition of maximum protection since 1991. CSEPP Policy Paper Number 1, entitled Definition of Maximum Protection, states that the most important objective of the emergency preparedness and implementation process is the avoidance of fatalities to the maximum extent possible should an accidental release of chemical agent occur. The policy paper states that this objective can be achieved through (1) the establishment of comprehensive emergency planning and preparedness programs and (2) preventive measures designed to render the chemical stockpile less susceptible to both internally and externally generated accidents. The Assistant Associate Director in FEMA’s Office of Technological Hazards signed the policy paper on May 6, 1991. 10. We believe that the inability to reach agreement on specific projects is due, in part, to federal officials’ being too involved in the management of local projects. Once the Army and FEMA approve and allocate funds for a CSEPP project, state and local agencies are in the best position to implement and manage the project and federal involvement in the project should be minimal. According to Alabama EMA officials, they have discussed the problem related to Army’s and FEMA’s micromanagement of CSEPP with FEMA officials. 
These officials said that the current CSEPP process does not allow state directors flexibility in managing their emergency preparedness programs. The issue of FEMA’s involvement in the management of local projects was also raised by the Director of Calhoun County EMA on July 13, 1995, before the Procurement Subcommittee, House Committee on National Security. The Director testified that CSEPP projects were hampered by micromanagement at the federal level.

11. We believe that the inability to reach timely agreements on project and funding requirements indicates that the CSEPP budget process is not working effectively. As discussed in the report, state and county officials told us that the CSEPP process lacks teamwork. For example, Etowah County EMA officials in Alabama told us that the agency had no influence on the CSEPP budget process and that the agency very seldom received a response from the Army or FEMA on substantive issues. Similarly, according to St. Clair County EMA officials, the county has no influence in the CSEPP budget process.

12. We revised the final report to recognize that some progress in CSEPP has occurred in Alabama. However, communities near Anniston Army Depot are not fully prepared to respond to a chemical stockpile emergency, and Alabama and six counties have not been able to spend $30.5 million, 66.4 percent of the $46 million allocated to enhance their emergency preparedness. Alabama and its counties have not been able to spend most of the CSEPP funds allocated to them because (1) FEMA, state, and local officials cannot agree on specific requirements for major capital projects and (2) FEMA has not given Alabama or Calhoun County officials permission to spend some of the funds.

13. As discussed above, we have revised the report to more clearly state that Calhoun County EMA’s actions have contributed to the delay of Alabama’s CSEPP.
However, we do not agree with FEMA’s position that the unexpended funds are mostly the result of Calhoun County EMA’s refusal to initiate CSEPP projects until the Army and FEMA agree to all of the county’s demands. Disagreements and time-consuming negotiations on CSEPP projects in Alabama are likely to continue until an effective approach is developed for reaching timely agreements on specific requirements.

Lee A. Edwards, Assistant Director
Terry D. Wyatt, Evaluator
Fredrick W. Felder, Evaluator
Pursuant to a congressional request, GAO reviewed the Army's Chemical Stockpile Emergency Preparedness Program (CSEPP) for Alabama and Calhoun County, Alabama, focusing on: (1) the status and funding of CSEPP in these areas; (2) the impact of federal, state, and local management on Alabama's program; and (3) Calhoun County's opposition to the chemical stockpile disposal facility that the Army plans to build at the Anniston Army Depot. GAO found that: (1) eight years after CSEPP's inception, Alabama communities near Anniston are not fully prepared to respond to a chemical stockpile emergency because they lack critical items; (2) Alabama and six counties have not spent $30.5 million, 66.4 percent of the $46 million allocated to enhance emergency preparedness; (3) the unexpended funds are associated primarily with four projects for which federal, state, and local officials have not agreed on specific requirements: (a) a CSEPP 800-megahertz emergency communications system; (b) equipment and supplies to protect people in public buildings; (c) indoor alert and notification devices for public buildings and homes; and (d) personal protective equipment for emergency workers; (4) citing these four projects and eight other areas as major emergency preparedness deficiencies, Calhoun County Emergency Management Agency (EMA) opposes a state environmental permit for the construction of the disposal facility until it receives a written commitment from the Army to support the county's emergency preparedness requirements or provide acceptable alternatives; (5) the lack of progress in Alabama's CSEPP is the result of management weaknesses at the federal level and inadequate action by state and local agencies; (6) management weaknesses at the federal level include fragmented and unclear roles and responsibilities, incomplete and imprecise planning guidance, extensive involvement in the implementation of certain local projects, lack of teamwork in the budget process, and ineffective 
financial controls; (7) these weaknesses have resulted in time-consuming negotiations and delays in implementing projects critical to emergency preparedness; (8) Alabama EMA spent more than 2 years trying to contract for a demographics survey, which will serve as the basis for determining the requirements for the tone alert radios and developing critical planning documents; (9) the survey had not started as of May 28, 1996; (10) Calhoun County EMA has been reluctant to initiate CSEPP projects until federal officials agree to the county's requirements; (11) the situation in Alabama may not be unique; since 1994, GAO has reported that CSEPP is not working the way it was intended; (12) GAO's work has shown that although some progress has been made, communities near the eight chemical weapons storage sites in the United States are not fully prepared to respond to a chemical emergency, financial management is weak, and costs are growing; (13) the problems experienced in Alabama's CSEPP are likely to continue until an effective approach is developed for reaching timely agreements among federal, state, and local officials on specific requirements for projects; and (14) responsibility for developing this approach should rest with the Army.
NMFS—operating through its headquarters, five regional offices, and six regional Fisheries Science Centers—and the eight Councils are responsible for managing approximately 470 fish stocks in federal waters across five geographic regions of the country (see fig. 1 for the NMFS region and Council boundaries). Federal waters generally extend from 3 to 200 nautical miles off the coast of the United States. In fiscal year 2016, NMFS’ budget for its fisheries science and management activities, such as conducting stock assessments and developing fisheries management guidance, was approximately $536.7 million, according to NOAA budget documents. NMFS has overall responsibility for collecting data on fish stocks and ocean conditions and for generating the scientific information necessary for the conservation, management, and use of marine resources. The agency’s six regional Fisheries Science Centers are the primary entities responsible for performing this work, and they collaborate with a variety of partners, such as coastal states, academics, other nations, and members of the fishing industry in doing so. For many fish stocks, the regional Fisheries Science Centers analyze the collected data to conduct stock assessments to estimate, among other things, the size of the population of a fish stock (i.e., the stock’s abundance) and other population dynamics. In addition, stock assessments contain information on reference points that can be used to inform management decisions. NMFS provides the results of its stock assessments and other analyses, as appropriate, to the Councils for use in implementing their respective fisheries management responsibilities. The Councils are responsible for a number of steps in the fisheries management process. In particular, the Councils develop and amend fishery management plans for fish stocks, based on guidelines developed by NMFS. 
Fishery management plans identify, among other things, the conservation and management measures that will be used to manage a fishery, such as fishing equipment restrictions, permitting policies, and restrictions on the timing or location of permissible fishing. The Magnuson-Stevens Act requires the conservation and management measures in fishery management plans to be based on the best scientific information available. The Councils submit proposed plans and plan amendments to NMFS, which is responsible for determining if they are consistent with the Magnuson-Stevens Act and other applicable laws, and for issuing and enforcing final regulations to implement approved plans. In implementing fishery management plans, the Councils are responsible for determining the maximum size of each fish stock’s allowable harvest. This is generally done by developing annual catch limits for each fish stock, that is, the amount of fish that can be harvested in the year. Figure 2 presents an overview of the federal fisheries management process. According to the 2014 Third National Climate Assessment, a number of physical and chemical changes to the oceans have been observed or are expected to occur as a result of climate change, largely attributable to increasing concentrations of greenhouse gases in the atmosphere, such as carbon dioxide. For instance, surface temperatures for the ocean surrounding the United States and its territories warmed by more than 0.9 degrees Fahrenheit over the past century, according to the 2014 assessment. Changes in ocean temperature have varied, with the oceans off the coasts of Alaska and parts of the northeastern United States, for example, warming more rapidly than other areas. The 2014 assessment notes that warming has several consequences and can lead to a number of other physical changes in the ocean, such as the thermal expansion of sea water, which may contribute to rising sea levels. 
Increases in ocean surface temperatures may also alter ocean circulation by reducing the vertical mixing of water that brings nutrients to the surface and oxygen to deeper waters, which could affect the availability of nutrients and oxygen for marine life in different locations, according to the 2014 assessment. Increasing levels of carbon dioxide in the atmosphere have also contributed to chemical changes in the oceans. According to the National Research Council, scientists estimate that the world’s oceans have absorbed approximately 30 percent of the carbon dioxide emitted by human activities over the past 200 years. As atmospheric concentrations of carbon dioxide increase, the amount of carbon dioxide in the oceans also increases. The increased uptake of atmospheric carbon dioxide is resulting in chemical changes in the oceans, including a decrease in the average pH of surface ocean waters (making seawater more acidic) and a reduction in the availability of minerals needed by many marine organisms to build shells and skeletons, according to the National Research Council. These chemical changes, known as ocean acidification, may pose risks for some marine species and ecosystems. For the purposes of this report, our discussion of the effects of climate change refers to both physical and chemical changes in the oceans. Broadly defined, risk management is a strategic process for helping decision makers assess risk, allocate finite resources, and take action under conditions of uncertainty, such as when faced with incomplete information or unpredictable outcomes that may have negative impacts. Risk management is an inherent part of fisheries management, as fisheries managers often make decisions based on incomplete data and in the face of uncertainty. Accounting for the potential effects of climate change injects an additional source of uncertainty and risk into the fisheries management process. 
The federal government has recognized the need to account for climate change risks in its planning and programs and has called on agencies to take certain actions. For example, the President issued Executive Order 13653 in 2013, directing federal agencies to develop or update comprehensive climate change adaptation plans that among other things identify climate-related impacts on and risks to an agency’s ability to accomplish its missions, operations, and programs and describe the actions the agency will take to manage climate risks. Subsequently, the Department of Commerce’s 2014 Climate Change Adaptation Strategy set a goal to incorporate climate information in the department’s resource management programs and policies and to take action to reduce vulnerabilities and increase resilience of marine and coastal natural resources. We have previously reported on risk management in the context of climate change and identified risks that climate change may pose in a variety of areas relevant to the federal government, such as infrastructure and federal supply chains. We found that leading risk management guidance recommends a sequence of activities that begins, in part, with identifying risks. Specifically, the International Organization for Standardization’s standards on risk management recommend that organizations such as federal agencies develop, implement, and continuously improve a framework for integrating risk management into their overall planning, management, reporting processes, and policies. These standards also state, among other things, that risk management should be a part of decision making and that it should be based on the best available information. 
We found that NMFS and the Councils have general information about the types of effects climate change is likely to have on federally managed fish stocks, but information about the magnitude and timing of effects for specific fish stocks is limited, based on the responses NMFS and the Councils provided to our questionnaire, our analysis of NMFS and Council documentation, and our interviews with NMFS and Council officials. In addition, NMFS and the Councils identified several challenges they face to better understand the effects of climate change, such as determining the extent to which a change in a fish stock’s abundance or distribution is caused by climate change, natural variation in the oceans, or other human or environmental factors. Through our analysis, we found that NMFS and the Councils have general information about the types of effects climate change is likely to have on fish stocks, but information about the magnitude and timing of effects for specific fish stocks is limited. In general, the types of effects climate change is likely to have include changes in fish stock abundance and distribution and the timing or location of biological events such as spawning (the process by which fish reproduce), according to NMFS and Council officials. These effects will not be uniform across fish stocks but rather will likely vary, with the abundance of some stocks being negatively affected and the abundance of other stocks increasing or not being affected, according to NMFS documentation. Similarly, potential shifts in distribution will vary, as some fish stocks may respond to changing ocean temperatures by moving north or to deeper waters in search of the water temperatures they are accustomed to, but shifts in other directions may occur as well. Some NMFS and Council officials said that changes in the timing and location of biological events are also expected to vary between fish stocks. 
For example, some fish stocks may spawn at different times or in different locations under warmer ocean conditions. In some instances, the abundance or distribution of fish stocks may shift in response to changes in ocean habitats occurring because of a changing climate. For example, reductions in seasonal sea ice cover and warmer ocean surface temperatures may open up new habitats in polar regions that could lead to shifts in distribution for some fish species, according to the 2014 Third National Climate Assessment. Furthermore, warming ocean temperatures and higher acidity levels also affect the health of coral reefs, which provide essential habitat for many fish stocks. The 2014 assessment reported, for example, that scientific research indicated that 75 percent of the world’s coral reefs were under threat from the effects of climate change and local stressors, such as overfishing, nutrient pollution, and disease. When water is too warm, corals will expel the algae living in their tissues, causing the coral to turn completely white—this phenomenon, known as coral bleaching, can cause coral to die or become more susceptible to disease, which can subsequently decrease their capacity to provide shelter and other resources for reef-dependent fish and other ocean life, according to the 2014 assessment. We found that NMFS and the Councils have limited scientific information about the magnitude and timing of potential climate change effects for most fish stocks they manage. For example, officials from one Council reported that they have very little scientific information specific to how climate change is currently affecting the fish stocks in their region, or how those stocks may be affected in the future, with the exception of some anecdotal information. 
NMFS and Council officials explained that scientific information is often lacking to quantify the magnitude and timing of effects for most individual fish stocks and that their ability to project future effects at the stock level is generally limited. For instance, NMFS and Council officials in the Alaska region reported that they have information sufficient to project potential climate change effects on abundance for 3 of the 35 primary fish stocks they manage in their region. And, as reflected in the examples below, for those stocks where they are able to project potential effects, they are uncertain about the full range of effects those stocks might experience.

Northern Rock Sole

The northern rock sole is a flatfish with both eyes on one side of its head. Northern rock sole live on the ocean floor and prefer a sandy or gravel ocean bottom. In the United States, northern rock sole are found from the Puget Sound through the Bering Sea and the Aleutian Islands. Northern rock sole are cooperatively managed by the National Marine Fisheries Service’s (NMFS) Alaska region and the North Pacific Fishery Management Council. In 2014, the commercial harvest for the rock sole fishery (which includes northern rock sole and southern rock sole) was valued at approximately $18.2 million, according to NMFS data.

Northern rock sole. NMFS officials in the Alaska region said that their research on northern rock sole has shown that the juvenile fish may have increased odds of surviving to become adults under warmer ocean conditions, which could have a positive effect on their abundance. NMFS officials said that the research indicates that northern rock sole can adjust their diets to survive in a variety of habitats and therefore may be able to adapt to ecosystem changes such as warming ocean temperatures more easily than other fish species.
In general, NMFS and Council officials from the Alaska region said that they do not anticipate the effects of climate change on the abundance of northern rock sole to be significant, but acknowledged that it is unknown how warming ocean conditions may affect the timing or location of the fish’s life cycle events, such as spawning.

Walleye Pollock

Walleye pollock are a member of the cod family and have a speckled coloring that helps them blend with the seafloor to avoid predation. They are a schooling fish distributed broadly in the North Pacific Ocean and in the Bering Sea, with the largest concentrations in the United States being found in the eastern Bering Sea. Walleye pollock are cooperatively managed by the National Marine Fisheries Service’s (NMFS) Alaska region and the North Pacific Fishery Management Council. The pollock fishery is the largest commercial fishery by volume in the United States, and in 2014, its commercial harvest was valued at approximately $399.9 million, according to NMFS data.

Walleye pollock. NMFS officials in the Alaska region said that they can project some future effects of warming ocean temperatures on the abundance of walleye pollock. For instance, the officials told us that research indicates warming ocean temperatures could lead to a mismatch between the size of the pollock population and the availability of food sources for pollock to consume. Unusually warm winters in the Alaska region in the early 2000s caused seasonal ice to retreat earlier and farther than normal, which led to reductions in the amount of zooplankton available for young pollock to eat, according to NMFS officials. When young fish do not have sufficient food in their first year of life, the likelihood they will survive is reduced, and according to NMFS officials, the reduction in zooplankton led to a reduction in the pollock population.
The officials did not attribute the warmer ocean temperatures during this time to climate change, but they said this event helped them forecast potential future climate effects by providing an opportunity to observe how pollock respond to changes in the environment similar to those anticipated from climate change. However, NMFS officials we interviewed said that making further predictions about the magnitude of potential climate change effects on the abundance and distribution of pollock is difficult because of the limited amount of climate-related information specific to the species. Agency officials told us that additional research is under way to better understand the effects of warmer ocean temperatures on pollock. To better understand which fish species may be most vulnerable to climate change, NMFS initiated an effort to systematically assess the vulnerability of marine species to a changing climate. Specifically, in 2015, NMFS developed a methodology for conducting climate vulnerability assessments for marine species using a combination of quantitative data, qualitative information, and expert opinion. The agency has used this methodology to complete an assessment for marine fish and invertebrate species that commonly occur in the Greater Atlantic region, which it published in February 2016. As of August 2016, NMFS had climate vulnerability assessments under way for species in its other four regions, according to agency officials. The assessment in the Greater Atlantic region found that approximately half of the 82 species assessed in the region were estimated to have a high or very high vulnerability to climate change, with the other half having low or moderate vulnerability. The assessment defined climate vulnerability as the extent to which the abundance or productivity of a species could be affected by climate change and natural long-term variability in ocean conditions. 
Similarly, more than half of the species were estimated to have a high or very high potential to shift their distribution in response to projected changes in the climate. The assessment also provided a summary of the existing scientific knowledge about the expected effects on different species in the region but did not quantify the magnitude of the expected effects for individual species. NMFS officials told us that the assessment was not designed to provide that type of quantitative information. Instead, the officials said the information provided by the assessments could be used to help determine which species should be the subject of additional research to help quantify potential climate effects in the future. The vulnerability assessment completed in the Greater Atlantic region provides illustrations of the range of ways that species may be affected by climate change, as reflected in the following examples.

Atlantic Cod

Atlantic cod have a large head, blunt snout, and a distinct barbel (a whisker-like organ, like on a catfish) under the lower jaw. Cod live near the ocean floor along rocky slopes and ledges and prefer to live in cold water. In the United States, cod range from the Gulf of Maine to Cape Hatteras, North Carolina, and are most commonly found off the coast of Cape Cod, Massachusetts, and in the western Gulf of Maine. Cod are cooperatively managed by the National Marine Fisheries Service’s (NMFS) Greater Atlantic region and the New England Fishery Management Council. In 2014, the Atlantic cod commercial harvest was valued at approximately $9.4 million, according to NMFS data. Atlantic cod have been an important commercial fish stock based on the historical volume and value of the fish caught in the Northeast.

Atlantic cod. According to the NMFS climate vulnerability assessment, the abundance of Atlantic cod is likely to be negatively affected by warming ocean temperatures in the Northeast.
Specifically, the assessment indicated that warmer ocean temperatures may be linked with a lower number of juvenile cod that survive and grow to a size sufficient to enter the fishery each year. The assessment also found that continued ocean warming could produce less-favorable habitat conditions for cod in the southern end of its range. Atlantic cod have experienced a decline in abundance in recent decades, and warming ocean temperatures may have contributed to this decline, according to NMFS and Council officials in the Greater Atlantic region. The officials further indicated, however, that the extent to which changing temperatures played a role in the cod decline is unclear because it is difficult to isolate this factor from other contributing factors, such as overfishing.

Black Sea Bass

Black sea bass are usually black with a slightly paler belly and a dorsal fin that is marked with a series of white spots and bands. Black sea bass commonly inhabit rock bottoms near pilings, wrecks, and jetties. Along the East Coast, black sea bass are divided into two fish stocks for management purposes. The stock found north of Cape Hatteras, North Carolina, is cooperatively managed by the National Marine Fisheries Service’s (NMFS) Greater Atlantic region, the Mid-Atlantic Fishery Management Council, and the Atlantic States Marine Fisheries Commission. The southern stock is cooperatively managed by NMFS, the South Atlantic Fishery Management Council, and the Atlantic States Marine Fisheries Commission. In 2014, the black sea bass commercial harvest was valued at approximately $8.6 million, according to NMFS data.

Black sea bass. The distribution of the northern stock of black sea bass, historically found in the Mid-Atlantic, has shifted northward in recent decades, according to NMFS officials in the Greater Atlantic region.
This trend is likely to continue as ocean temperatures warm and become more favorable for the fish, according to the NMFS climate vulnerability assessment. The assessment also indicated that the abundance of black sea bass in more northern areas will likely increase as temperatures warm and more spawning occurs. According to NMFS and Council officials and other fisheries stakeholders, changes in the abundance and distribution of black sea bass could present a challenge for fisheries managers because commercial fishing rights have been allocated to states based on historical catch data that may not reflect where the fish are found in the future.

American Lobster

The American lobster is a crustacean with a large shrimp-like body with eight legs and two claws. Lobster live on the ocean floor and are most abundant in coastal waters from Maine through New Jersey, and offshore from Maine through North Carolina. The lobster fishery has northern and southern stocks that are cooperatively managed by the National Marine Fisheries Service’s (NMFS) Greater Atlantic region and the member states of the Atlantic States Marine Fisheries Commission. In 2014, lobster’s commercial harvest was valued at approximately $567.3 million, which made it one of the most economically valuable fisheries in the country that year, according to NMFS data.

American lobster. The overall effect of climate change on lobster abundance is estimated to be neutral, as population decreases in the southern portion of its range have been offset by increases in the north, according to the NMFS climate vulnerability assessment. NMFS officials in the Greater Atlantic region said that these population changes are believed to be driven in large part by warming ocean temperatures. Specifically, officials said that as ocean conditions in southern New England have become less favorable to lobsters because of increasing ocean temperatures, the abundance of the southern stock has declined.
In contrast, the abundance of the northern stock has increased as temperature changes in the Gulf of Maine (where waters are generally colder) have produced more favorable conditions for lobster. NMFS has not been able to quantify the magnitude of expected future changes in lobster abundance in the region, however, in part because it is difficult to accurately forecast changes in ocean temperatures more than a few months out, according to agency officials. Additionally, NMFS officials indicated that changing ocean conditions can affect the timing of biological events in the lobster’s life cycle, which may subsequently alter the seasonal abundance of the stocks in a way that may cause disruptions to the fishing industry. For instance, in 2012, warmer-than-usual ocean temperatures in the Gulf of Maine resulted in American lobsters growing to market size earlier than usual, according to NMFS documentation. As a result, lobsters were harvested early, but some lobster processing facilities were not yet ready to receive them, which led to an abundance of supply for the fishermen and a drop in the market price they received for their harvest.

Through the questionnaire responses and our interviews, NMFS and Council officials identified several challenges to better understand the existing and anticipated effects of climate change on fish stocks as well as efforts they are taking to address some of these challenges, including:

Understanding how climate change may affect fish stocks. NMFS and most of the Councils indicated that a significant challenge they face is better understanding the relationship between changes in ocean conditions and the processes that drive how fish will react to those changes. NMFS officials explained that it can be difficult, for instance, to understand how water temperature changes may affect the biology of specific fish stocks or may indirectly affect their habitat and interactions with other species within that habitat.
Officials from one NMFS region said that they do not have a sufficient understanding of the processes that drive fish stock productivity—including their birth, growth, and death rates—in their region, or how those processes may be affected by climate change. Officials from several other NMFS regions and Councils shared similar views and noted that this challenge limits their ability to understand the overall effects that climate change may have on different fish stocks.

Availability of baseline data. NMFS and several of the Councils reported that insufficient baseline data at times limit their ability to predict the effects of various changes in the climate on fish stocks. Baseline data include oceanographic information, such as water temperature in different regions and depths; information about a fish stock, such as temperature preference and spawning history; and ecological information, such as the type of plants or animals available as a food source for a particular stock or its role in a food web. For example, NMFS officials told us that sea surface temperature data are widely available in most regions but that data on subsurface temperatures are limited. The officials attributed the difference in the availability of these two types of data to the ability to use satellites to measure sea surface temperatures, whereas the collection of subsurface temperature data is more difficult and requires measurements to be taken directly from the ocean. NMFS officials said that the lack of subsurface temperature data limits their ability to research and determine potential effects of changing temperatures on bottom-dwelling fish stocks, such as the American lobster. The officials told us that NOAA has increased its efforts to utilize technologies such as ocean gliders—autonomous underwater vehicles used to collect ocean data—to track subsurface ocean conditions and collect additional baseline data.
For example, in 2014 NOAA partnered with other scientists to launch two ocean gliders in the Gulf of Alaska to collect data for 5 months on water temperature, salinity, and dissolved oxygen, among other things.

Summer Flounder

Summer flounder are flatfish, with both eyes located on the left side of the body when viewed from above with the dorsal fin facing up. Adult summer flounder spend most of their lives on or near the seafloor, burrowing into the sand. In the United States, summer flounder are found from the Gulf of Maine through North Carolina. The stock is cooperatively managed by the National Marine Fisheries Service’s (NMFS) Greater Atlantic region, the Mid-Atlantic Fishery Management Council, and the Atlantic States Marine Fisheries Commission. Summer flounder are one of the most sought-after commercial and recreational fish along the Atlantic coast, and in 2014, the commercial harvest was valued at approximately $32.3 million, according to NMFS data.

Distinguishing between climate change and other factors. It can be challenging to determine whether a change in a fish stock’s abundance or distribution is caused by climate change; natural variation in the oceans; or other human or environmental factors, such as overfishing or pollution, according to NMFS and some Council officials. Isolating the effects of climate change from other factors is difficult, and the complexities of trying to understand the ways in which these factors interact also present challenges, according to the officials. For example, NMFS officials in the Greater Atlantic region said that populations of summer flounder have increased in recent years after experiencing significant declines from overfishing in the 1970s and 1980s, and that summer flounder are being found in greater numbers in northern areas, such as the Gulf of Maine.
The officials said that changes in fishing levels are likely driving the increase in summer flounder abundance and their increased presence in northern areas, but also suggested that warming ocean temperatures may have played a role.

Modeling capabilities. NMFS and most of the Councils identified limitations in climate and fisheries modeling capabilities as a challenge to better understand the effects of climate change on fisheries. For example, according to NMFS officials, one important step to improving the ability to project the effects of climate change on specific fish stocks will be to downscale global ocean climate models, such as models of changes in ocean temperatures, to more regional and local levels that can then be used to assess climate effects on the fish stocks that inhabit those locations. Agency officials described this type of information as foundational knowledge that is generally not yet available across the NMFS regions, in part because it is resource intensive to develop. The officials said that NMFS considers the development of downscaled climate models that can be used to support fisheries management to be a critical need, and that efforts are under way to develop such models in some NMFS regions. For example, NMFS is funding a project to help downscale global climate models for some of the major rivers and estuaries in the Northeast, such as the Chesapeake Bay, that are crucial habitats for many commercial, recreational, and protected species.

Resources. NMFS and most of the Councils identified constrained resources as a challenge to expanding climate-related data collection and analysis efforts. For instance, according to NMFS officials at one Fisheries Science Center, developing climate science for fisheries management requires extensive new modeling to assess and project current and future fishery conditions, including how fish stock abundance and distribution may change under changing physical and chemical ocean conditions.
However, the officials said that staff capacity to conduct this work is limited because of existing modeling demands for stock assessments. Similarly, officials from one Council told us that additional staff and resources would be required to incorporate climate-related information into their work. NMFS has developed a strategy to help it and the Councils incorporate climate information—such as information on changes in ocean temperatures and acidity levels and the risks to fish stocks associated with those changes—into the fisheries management process, but is in the early stages of implementing the strategy. Through our analysis, we found that NMFS and the Councils have generally not incorporated climate information into the fisheries management process to date because the information they have on the effects of climate change on most fish stocks has not been sufficient. However, recognizing the importance of further developing climate information and incorporating it into the fisheries management process, NMFS published its NOAA Fisheries Climate Science Strategy in August 2015. The Strategy is intended to support efforts by the agency and its partners to increase the production, delivery, and use of climate information in managing fish and other living marine resources. According to the Strategy, failing to adequately incorporate climate change considerations into fisheries management and conservation efforts could cause those efforts to be ineffective, produce negative results, or miss opportunities. Determining how information on the effects of climate change can be incorporated into the fisheries management process is a key question that NMFS must address, according to the Strategy. The Strategy lays out a national framework that is to be regionally tailored and implemented. 
NMFS officials said that the agency’s regions (including its regional offices and regional Fisheries Science Centers) will have primary responsibility for implementing the Strategy, and those offices are in the process of developing regional action plans to describe how they will do so. NMFS has directed its regions to develop regional action plans that identify specific actions each region will take over the next 5 years to implement the Strategy and include the region’s assessment of its priorities and available resources, among other things. As of July 2016, four of NMFS’ five regions had released draft versions of their regional action plans for public comment, and agency officials said that they expect all of the regions to finalize their plans by October 2016. The officials said that NMFS headquarters has provided input on the draft plans and will work with the NMFS Science Board to review and approve the final plans. The Strategy recognizes the importance of incorporating climate information into the fisheries management process but does not provide specific guidance on how this is to be done. For example, the Strategy notes the importance of factoring climate information into stock assessments but does not include specific guidance on how such information should be incorporated into developing the assessments. Historically, fish stock assessments primarily considered the effects of fishing when estimating the abundance of individual fish stocks and have not accounted for ecosystem factors, including effects from changes in the climate. This traditional approach has been effective for assessing present and historical abundance levels but may not be effective in forecasting future levels because it does not account for effects related to changing environmental conditions, according to NMFS documentation. 
NMFS and Council officials said that factoring climate information more widely into stock assessments in the future will be important, given the changing climate conditions and particularly because stock assessments often provide the scientific basis for management decisions, including setting annual catch limits. The Strategy also states that including climate-related data to inform reference points (the limits or targets used to guide management decisions), where appropriate, is critical to avoid misaligned management targets for fish stocks, but the Strategy does not provide specific guidance on how this is to be done. In addition, the Strategy calls for NMFS to complete climate vulnerability assessments for all regions as an initial priority action but does not specify how NMFS regions and the Councils should use the results of the assessments to inform management decisions or incorporate them into the fisheries management process. Without developing guidance on how climate information is to be incorporated into specific aspects of the fisheries management process, NMFS does not have reasonable assurance that all of its regions and the Councils will consistently factor climate-related risks into fisheries management. Under federal standards for internal control, agencies are to clearly document internal controls, and the documentation is to appear in management directives, administrative policies, or operating manuals. In addition, according to the International Organization for Standardization, for risk management to be effective, it is important for information on risks to be included as a part of decision making. Moreover, developing guidance would be consistent with actions NMFS has taken for other parts of its mission, according to the agency’s Climate Change Coordinator. 
The official noted that in January 2016, NMFS developed guidance on using climate change information in the agency’s Endangered Species Act decisions, which could serve as a model for the fisheries management process. The guidance provides direction on how climate information should be incorporated into Endangered Species Act management decisions and what types of climate information should be used, among other things. NMFS has not yet developed similar guidance for fisheries management because doing so was not previously considered to be an immediate priority given the agency’s limited information on the anticipated effects of climate change on fish stocks and the near-term focus of most fisheries management decisions, according to NMFS’ Climate Change Coordinator. The official said, however, that as knowledge about the effects of climate change on fisheries has progressed over time, NMFS has found an increased and more pressing need to begin preparing for these effects in the near term. By developing guidance on how the NMFS regions and Councils are to incorporate climate information into different parts of the fisheries management process, NMFS may help ensure consistency in how its regions and the Councils factor climate-related risks into fisheries management decision making. In addition, the Strategy lays out overall objectives to help identify the agency’s climate information needs and help better ensure effective management in a changing climate, but does not contain agency-wide performance measures to track progress in achieving the Strategy’s objectives. Specifically, the Strategy lays out seven interrelated, priority objectives (see table 1). According to NMFS officials, the agency recognizes the potential benefit of developing a comprehensive set of agency-wide performance measures for the Strategy, but it has not specified when or how it may do so because it is first focusing on completing the regional action plans. 
NMFS has directed its regions to include performance measures in their regional action plans outlining how they will implement the Strategy, and agency officials told us that once the regional action plans are finalized in October 2016, NMFS plans to revisit whether to develop agency-wide performance measures for the Strategy. We have previously reported on the importance of agencies using performance measures to track progress in achieving their goals and to inform management decisions. Moreover, the Government Performance and Results Act of 1993, as amended, requires federal agencies to establish performance goals and related performance measures to track progress in annual agency performance plans, among other things. While these requirements apply at the departmental level (e.g., Department of Commerce), we have previously found that they also serve as leading practices in component agencies, such as NMFS, to assist with planning for individual programs or initiatives that are particularly challenging. This is also in line with federal standards for internal control, which identify the establishment and review of performance measures as a control activity that can help ensure that management’s directives are carried out and that actions are taken to address risks, among other things. According to NMFS guidance, the regional action plans are to identify performance measures to document progress in implementing the Strategy at the regional level. In reviewing the four draft regional action plans that had been released as of July 2016, we found that three of the draft plans contained proposed performance measures. Specifically, we found that the measures each of the three regions proposed included some key attributes of successful performance measures that we have previously identified, such as being aligned with the Strategy’s objectives and having limited overlap (see app. II for the full list of key attributes).
However, most of the measures did not contain other key attributes. For example, nearly all of the proposed measures contained in the three draft plans did not include measurable targets, another key attribute of successful performance measures. In some cases, the proposed measures identified quantitative data to be tracked—such as the number of stock assessments or annual catch limits that include climate information—but there were no numerical targets identified that could serve as a means for assessing progress. We have previously found that including measurable targets in performance measures helps in assessing whether performance is meeting expectations. NMFS officials said that they recognized the importance of developing meaningful performance measures. By incorporating key attributes associated with successful performance measures in the final performance measures developed for the plans and assessing whether agency-wide performance measures may also be needed, NMFS may be in a better position to determine the extent to which the objectives of the Strategy overall are being achieved. Changing environmental conditions in the oceans, such as warming temperatures, could affect the abundance and distribution of federally managed fisheries in ways that may pose risks to the communities and industries that depend on harvesting the affected fish. Such risks also have potential implications for the federal government’s fiscal exposure because commercial fishery failures may result in the federal government providing fishery disaster assistance. NMFS has taken steps to help it and the Councils better understand how climate change may affect the fish stocks they manage, even as they face challenges related to the availability of data and resources, among others. 
Recognizing the importance of incorporating climate considerations into the fisheries management process, NMFS has also developed its NOAA Fisheries Climate Science Strategy to help increase the production, delivery, and use of climate information. However, NMFS is in the early stages of implementing the Strategy and has not developed guidance to specify how climate information is to be incorporated into different parts of the fisheries management process, such as in developing stock assessments or reference points. Developing such guidance would align with federal standards for internal control and may help NMFS ensure consistency in how its regions and the Councils factor climate-related risks into fisheries management decision making. Additionally, NMFS has directed its regions to include performance measures in their regional action plans outlining how they will implement the Strategy, but the proposed measures in the draft plans we reviewed did not include some key attributes of successful performance measures, such as containing measurable targets. Moreover, NMFS has not developed agency-wide performance measures to assess progress in meeting the Strategy’s overall objectives, instead choosing to wait to complete the regional action plans before determining whether such measures may be necessary. By incorporating key attributes associated with successful performance measures in the final performance measures developed for the plans and assessing whether agency-wide performance measures may also be needed, NMFS may be in a better position to determine the extent to which the objectives of the Strategy overall are being achieved. 
To help NMFS and the Councils incorporate climate information into the fisheries management process and better manage climate-related risks, we recommend that the Secretary of Commerce direct NOAA’s Assistant Administrator for Fisheries to take the following two actions:

Develop guidance to direct the NMFS regions and Councils on how climate information should be incorporated into different parts of the fisheries management process.

In finalizing the regional action plans for implementing the NOAA Fisheries Climate Science Strategy, (1) incorporate the key attributes associated with successful performance measures in the final performance measures developed for the plans and (2) assess whether agency-wide performance measures may be needed to determine the extent to which the objectives of the Strategy overall are being achieved, and develop such measures, as appropriate, that incorporate the key attributes of successful performance measures.

We provided a draft of this report for review and comment to the Department of Commerce. In written comments (reproduced in appendix III), the Department of Commerce and NOAA agreed with our recommendations. NOAA stated that the report provides a good summary of climate-related effects on the nation’s fishery resources and the challenges to understanding and responding to climate effects in fisheries management. NOAA agreed with our recommendation to develop guidance on how climate information should be incorporated into different parts of the fisheries management process, and outlined several ongoing and planned efforts to help address the recommendation. For example, NOAA stated that NMFS recently released a policy and related agency directives intended to help clarify if and when fishing allocations may need to be reviewed or adjusted, such as when fish stocks shift distribution in response to a changing climate.
In addition, NOAA agreed with our recommendation to incorporate key attributes associated with successful performance measures in the final performance measures included in the regional action plans for the Strategy and to assess whether agency-wide performance measures for the Strategy may be needed. NOAA stated that NMFS will review the draft regional action plans and take action to ensure that the performance measures in the final plans include key attributes of successful measures. Following completion of the regional action plans, NOAA indicated that NMFS will also assess the need for agency-wide measures to track and evaluate achievement of the Strategy’s objectives and will develop such measures as appropriate. NOAA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. This report examines (1) information the National Marine Fisheries Service (NMFS) and the Regional Fishery Management Councils (Council) have about the existing and anticipated effects of climate change on federally managed fish stocks and challenges they face to better understand these effects and (2) efforts NMFS has taken to help it and the Councils incorporate climate information into the fisheries management process. 
For both objectives, to understand the legal framework for fisheries management in the United States, we reviewed the Magnuson-Stevens Fishery Conservation and Management Act of 1976, as amended, which governs marine fisheries management in federal waters, including both commercial and recreational fishing. We also reviewed our past work on fisheries management, climate change adaptation, and risk management. To examine information NMFS and the Councils have about the effects of climate change on federally managed fish stocks and the challenges they face to better understand these effects, we reviewed scientific studies and documentation prepared by NMFS, the Councils, and others such as academics. These documents included studies about the existing and anticipated effects of climate change on the ocean and particular fish stocks and key fisheries management documents, such as stock assessments and fisheries management plans. We also developed a questionnaire of open-ended questions about the information NMFS and the Councils have on the effects of climate change on the fish stocks they manage and any challenges they face to better understand these effects, among other things. We disseminated the questionnaire to the regional administrator of each NMFS region and the executive director of each Council. All five regions and eight Councils responded to the questionnaire. To characterize officials’ views presented in the questionnaire responses, two analysts analyzed the narrative responses to these questions and identified key themes. In describing these responses in the report, we attribute the information to NMFS and Council officials. 
We conducted interviews, either in person or by telephone, with officials from two NMFS regions (Greater Atlantic and Alaska) and representatives of four Councils (New England, Mid-Atlantic, South Atlantic, and North Pacific) and the Atlantic States Marine Fisheries Commission about their understanding of the effects of climate change on fish stocks and any challenges they face to better understand these effects. We selected these entities to interview because they are involved in managing fisheries in regions of the country that are expected to experience significant effects from climate change, according to NMFS documentation and our discussions with agency officials. We also interviewed representatives from 11 stakeholder organizations representing fishing, conservation, and Alaska Native interests to obtain a broader range of views on the existing and anticipated effects of climate change on fish stocks and the efforts of NMFS and the Councils to better understand these effects. We selected these entities to reflect geographic diversity and different types of involvement in fisheries management issues. We analyzed a nongeneralizable sample of seven fish species as case studies from NMFS’ Greater Atlantic and Alaska regions. These species were the American lobster, Atlantic cod, black sea bass, northern rock sole, spiny dogfish, summer flounder, and walleye pollock. We selected these species to reflect variation in the value and volume of commercial harvests, based on NMFS’ commercial fisheries data from 2008 to 2013, and a range of potential effects of climate change. We assessed the reliability of NMFS’ commercial fisheries data and found the data to be sufficiently reliable for the purpose of selecting our case study species. The case study species are intended to provide illustrative examples and cannot be generalized to the entire universe of fish species. 
For each of our case study species, we reviewed applicable stock assessments and fisheries management plans. We also obtained written information from the applicable NMFS Fisheries Science Centers and Councils in the regions on the extent to which each species is expected to change its distribution, abundance, or the timing or location of biological events as a result of the effects of climate change. In addition, we interviewed NMFS scientists involved in studying our case study species at NMFS’ Alaska Fisheries Science Center and the Northeast Fisheries Science Center. We used the information collected through these case studies to provide examples in the report. To examine the efforts NMFS has taken to help it and the Councils incorporate climate information into the fisheries management process, we analyzed the questionnaire responses provided by the NMFS regions and Councils and reviewed agency and Council documentation, including planning documents, analyses, and reports with information on the fisheries management process. As part of our interviews with officials from NMFS’ headquarters, regional offices, and regional Fisheries Science Centers, as well as representatives from the Councils and nonfederal stakeholders, we discussed the steps taken to assess climate-related risks to fisheries and the extent to which climate information has been incorporated into different parts of the fisheries management process. We focused our analysis on the process for setting annual catch limits and related management actions, because developing these limits is required for all fisheries managed by the Councils and NMFS considers it a major part of the fisheries management process. In performing this work, we examined NMFS’ NOAA Fisheries Climate Science Strategy (Strategy) to assess the agency’s plans for increasing the production, delivery, and use of climate information for managing fisheries and other marine resources.
In doing so, we first reviewed the draft Strategy (released in January 2014) and stakeholder comments on the Strategy submitted during the public comment period. We used this analysis to help identify potential issues stakeholders had about the Strategy’s design and proposed implementation, and to develop follow-up questions for NMFS and Council officials. We reviewed the final version of the Strategy after it was released in August 2015 and assessed the Strategy—as well as the four draft regional action plans for implementing the Strategy that had been released as of July 2016—against leading practices in agencies’ strategic planning and performance measurement identified in our previous work based on the requirements of the Government Performance and Results Act of 1993, as amended. We also compared NMFS’ efforts to incorporate information on climate-related risks in the fisheries management process to leading risk management guidance developed by the International Organization for Standardization and GAO’s Standards for Internal Control in the Federal Government.

Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their programs. Table 2 presents a summary of nine key attributes of successful performance measures identified in our prior work, including the potentially adverse consequences if they are missing. All attributes are not equal, and failure to have a particular attribute does not necessarily indicate that there is a weakness in that area or that the measure is not useful; rather, it may indicate an opportunity for further refinement.

In addition to the contact named above, Alyssa M. Hundrup (Assistant Director), Stephen D. Secrist (Assistant Director), Mark Braza, Alicia Puente Cackley, John Delicath, Heather Dowey, Karen Howard, Benjamin T. Licht, Dan C. Royer, Jeanette M.
Soares, Levine Thomas, Joseph Dean Thompson, Jason Trentacoste, Sarah Veale, and Joshua Wiener made key contributions to this report.
NMFS and the Councils manage commercial and recreational marine fisheries that are critical to the nation's economy. The effects of climate change may pose risks to these fisheries that could have economic consequences for the fishing industry and coastal communities, according to the 2014 Third National Climate Assessment. GAO was asked to review federal efforts to address the effects of climate change on federal fisheries. This report examines (1) information NMFS and the Councils have about the existing and anticipated effects of climate change on federally managed fish stocks and challenges to better understand these effects and (2) efforts NMFS has taken to help it and the Councils incorporate climate information into fisheries management. GAO analyzed responses to its questionnaire from all NMFS regions and the Councils, analyzed seven nongeneralizable fish species selected to reflect variation in the potential effects of climate change, reviewed relevant documentation, and interviewed NMFS and Council officials. The Department of Commerce's National Marine Fisheries Service (NMFS) and eight Regional Fishery Management Councils (Council) have general information on the types of effects climate change is likely to have on federally managed fish stocks but limited information on the magnitude and timing of effects for specific stocks. They also face several challenges to better understand these effects, based on GAO's analysis of NMFS and Council questionnaire responses, NMFS and Council documentation, and interviews with NMFS and Council officials. For example, NMFS officials said that northern rock sole may adapt to warming ocean temperatures more easily than other fish species, but it is unknown how such temperatures may affect the timing of the fish's life cycle events, such as spawning. 
NMFS and Council officials identified several challenges to better understand potential climate change effects on fish stocks, including determining whether a change in a stock's abundance or distribution is the result of climate change or other factors, such as overfishing in the case of Atlantic cod. NMFS developed a climate science strategy in 2015 to help increase the use of climate information in fisheries management. The strategy lays out a national framework to be implemented by NMFS' regions but does not provide specific guidance on how climate information should be incorporated into the fisheries management process. An NMFS official said that developing such guidance has not been an agency priority, but as knowledge on climate change progresses there is a more pressing need to incorporate climate information into fisheries management decision making. Developing such guidance would align with federal standards for internal control and may help NMFS ensure consistency in how its regions and the Councils factor climate-related risks into fisheries management. In addition, NMFS has not developed agency-wide performance measures to track progress toward the strategy's overall objectives, a leading practice. NMFS officials said they are waiting to finalize regional action plans for implementing the strategy before determining whether such measures may be necessary. GAO reviewed the proposed measures in NMFS' draft regional action plans and found that they aligned with some key attributes of successful performance measures. But most of the measures did not contain other key attributes, such as measurable targets. By incorporating key attributes when developing performance measures and assessing whether agency-wide measures may also be needed, NMFS may be in a better position to determine the extent to which the objectives of its strategy overall are being achieved.
GAO recommends that NMFS (1) develop guidance on incorporating climate information into the fisheries management process and (2) incorporate key attributes of successful performance measures in the regional action plans and assess whether agency-wide measures for the climate science strategy may be needed. The agency agreed with GAO's recommendations.
DOE oversees and implements its major cleanup projects through agreements with contractors who operate the nuclear weapons research and production sites and the cleanup projects at those sites. Some of EM’s cleanup projects are located at DOE sites administered by the National Nuclear Security Administration, a separately organized agency within DOE. EM’s major cleanup projects involve efforts to clean up sites where nuclear weapons were produced and production waste was stored. EM’s cleanup projects handle a wide array of waste types and levels of radioactivity and hazardous constituents, and can involve multiple activities to, among other things, retrieve, characterize, treat, package, store, transport, and dispose of the waste, as well as disassemble, treat, package, store, transport, and dispose of the contaminated containers or processing lines/equipment used for weapons production or for storing or treating the waste. Multiple EM cleanup projects can occur at a single DOE site responsible for a multitude of other noncleanup-related activities. The cleanup projects are generally organized around similar waste types and activities. For example, the soil and water remediation activities at each site are organized under one umbrella, as are the nuclear facility decontamination and decommissioning projects, and the radioactive liquid tank waste projects, among others. EM generally manages these similar work activities, grouped into a category known as a Project Baseline Summary, through numerical designations; for example, all activities for soil and water remediation are grouped under Project Baseline Summary 30. (See app. II for additional information on the 10 DOE major cleanup projects reviewed.) Unlike construction projects, which are funded on a line item basis, cleanup projects receive funding through operating funds designated for each DOE site.
In 2003, EM began applying project management principles contained in DOE Order 413 to these cleanup projects in order to bring more discipline and rigor to planning and expending these project funds, among other things. A cleanup project can cost several billion dollars and its life cycle can span several decades. EM divides the life cycle baselines for its major cleanup projects into three distinct parts––prior year costs, near term (usually a 5-year period), and out year (through project completion). Life cycle costs for each project range from a low of almost $1.7 billion to over $44 billion, and some projects might not be completed until after 2050. (See app. III for detailed information on the life cycle baseline costs for the 10 projects we reviewed.) EM applies different approaches to managing these wastes, depending on the type and extent of contamination and the state or federal regulatory guidelines and milestones it needs to comply with. DOE has agreements with state and federal regulators to clean up sites, and the agreements lay out a framework for determining the cleanup standards to be met. Furthermore, because all projects have a certain degree of uncertainty, such as not fully knowing the condition of buried waste containers, EM needs to plan for this uncertainty and identify ways to prevent serious disruption to projects should problems arise. To address this uncertainty, DOE Order 413 requires project managers to identify contingency funds that may be needed to cover potential cost increases stemming from a variety of project risks, including technical complexities, regulatory issues, and funding shortfalls.
Although EM project managers build contingency funding into their near-term and out-year estimates, EM management does not generally include funding in its budget requests to cover contingency for cleanup projects until after it is actually needed to address a problem; therefore, EM contingency for cleanup projects has been referred to as “unfunded contingency.” To be effective, program managers need information on program deliverables and on the progress made in meeting them. One method that can help program managers track this progress is EVM data. These data include, for example, detailed information on budgeted costs and actual costs for work scheduled and work performed, as well as forecasted costs at project completion. Among other things, EVM data can be used to compare (1) budgeted costs to actual costs and (2) the value of work accomplished during a given period with the value of work scheduled for that period. By using the value of work completed as a basis for estimating the cost and time to complete a project, EVM data should alert program managers to potential problems sooner than expenditures alone can. As a key management tool, EVM has evolved from an industrial engineering concept to a government and industry best practice to better oversee programs. Both OMB and DOE Order 413 require the use of EVM. OMB Circular A-11, part 7, requires the use of an integrated EVM system across an entire program to measure how well the government and its contractors are meeting a program’s approved cost, schedule, and performance goals. The American National Standards Institute and the Electronic Industries Alliance have jointly established a national standard for EVM systems. Recognizing the benefits of having these national standards, OMB states in its 2006 Capital Programming Guide that major acquisitions that require product development are to require that contractors use an EVM system that meets the American National Standards Institute guidelines. 
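The EVM comparisons described above reduce to a few simple formulas. The following sketch, with entirely hypothetical dollar figures, shows how planned value (budgeted cost of work scheduled), earned value (budgeted cost of work performed), and actual cost combine into variance and forecast measures; it illustrates the concept only and does not depict any DOE project or system.

```python
# Illustrative earned value management (EVM) calculation. All figures
# are hypothetical, not drawn from any EM cleanup project.

def evm_metrics(pv, ev, ac, bac):
    """Basic EVM indicators from planned value (PV), earned value (EV),
    actual cost (AC), and budget at completion (BAC)."""
    cv = ev - ac    # cost variance: negative means over budget
    sv = ev - pv    # schedule variance: negative means behind schedule
    cpi = ev / ac   # cost performance index (value earned per dollar spent)
    spi = ev / pv   # schedule performance index
    eac = bac / cpi # simple forecast of cost at completion
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

# A project budgeted at $400M that planned to earn $120M of work by now,
# has earned only $100M, and has spent $125M doing it:
m = evm_metrics(pv=120.0, ev=100.0, ac=125.0, bac=400.0)
print(m)  # CV = -25.0, SV = -20.0, CPI = 0.8, EAC = 500.0
```

As the EAC forecast shows, using the value of work completed (rather than expenditures alone) signals early that this hypothetical project is trending toward $500 million against a $400 million budget.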
In addition, DOE Order 413 requires that projects with total cleanup costs of $50 million or more use an EVM system that complies with industry standards and is certified by DOE’s Office of Engineering and Construction Management (OECM) to comply with these standards. GAO also has developed EVM best practices that, when followed, can help project managers consistently develop and analyze EVM data to gain a complete and accurate understanding of project status. Among other things, our guidance on EVM states that (1) EVM data should not have data errors and anomalies that may skew and distort the EVM analysis, and (2) information such as staffing levels and the root causes of and corrective actions for cost and schedule variances should be reported through the EVM system. Nearly all the cleanup projects we reviewed have had life cycle baseline cost increases of as much as $9 billion for one project and schedule delays of as much as 15 years for two projects. These cost increases and schedule delays occurred primarily because the previous baselines for these projects had schedule assumptions that were not linked to technical or budget realities, and other assumptions also proved to be overly optimistic. The estimated costs of 9 of the 10 DOE major cleanup projects we reviewed have significantly exceeded original estimates, as table 1 shows. As the table shows, estimated costs increased from a minimum of $139 million for one project to more than $9 billion for another project. The smallest dollar and percentage increase—$139 million, or 9 percent—occurred at Los Alamos’ soil and water remediation project, which is focused on cleaning up known or suspected chemical and radiological contamination in addition to treating soil and groundwater that was contaminated by this waste. This project, however, is expected to further increase its life cycle cost estimate.
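As a quick consistency check, the two Los Alamos figures cited above imply the project's underlying estimates. The arithmetic below derives them from the report's own numbers; the derived values are implied, not independently sourced.

```python
# Back-of-the-envelope check: a $139M cost increase described as a
# 9 percent growth implies the original and revised life cycle
# estimates below (values derived from the cited figures).
increase_m = 139          # cost increase, $ millions (cited)
growth = 0.09             # cited percentage increase
original_m = increase_m / growth
revised_m = original_m + increase_m
print(f"implied original: ${original_m:,.0f}M; revised: ${revised_m:,.0f}M")
# roughly $1,544M -> $1,683M
```

The implied revised estimate of roughly $1.7 billion is consistent with the nearly $1.7 billion life cycle cost figure reported for this project elsewhere in this report.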
The largest dollar increase among the 10 major projects—more than $9 billion—was for Hanford’s radioactive liquid tank waste project, which is expected to remove, treat, and dispose of more than 56 million gallons of high-level radioactive waste in 177 underground storage tanks. In fact, the other radioactive liquid tank waste project, at Savannah River, registered the second largest dollar increase—almost $7 billion. However, the largest percentage increase—about 64 percent—occurred at Oak Ridge’s nuclear facilities decontamination and decommissioning project. Table 2 shows that 8 of the 10 projects we reviewed experienced delays in scheduled project completion, ranging from 2 years to 15 years. As table 2 shows, the shortest delay is at Hanford’s nuclear material stabilization and disposition project, while the longest delays—15 years—also are at Hanford: the soil and water remediation and the solid waste stabilization and disposition projects. The changes in schedule and costs occurred primarily for two reasons. First, initial project baselines were built on accelerated schedules that were not always linked to technical capabilities or available budgets, although EM has begun to tie its new baselines to anticipated funding. Second, the initial baselines included other assumptions that did not hold true, including conditions on the ground at the sites, expected completion dates for related construction projects, and activities that would be included in projects’ scopes of work. The initial baselines for 8 of the 10 major projects we reviewed contained schedules that were influenced by an EM-wide effort to accelerate the office’s cleanup work. In 2002, EM management worked with its sites and regulators to create new, earlier milestones for completing key cleanup projects and for closing entire sites to reduce the public health and environmental risks posed by the waste at these sites.
Before this effort, some of the major cleanup projects were not estimated to complete work until the 2030s and 2040s. Under the accelerated schedules, four projects’ completion dates were moved up by 15 years or more, as was the case for the radioactive liquid tank waste stabilization and disposition project at the Hanford site; its completion date was moved from 2048 to 2028. The baselines containing the accelerated schedules—those generally created between 2003 and 2006—tied their work scope and funding assumptions to the completion dates and not necessarily to available cleanup technologies. For example: Solid waste stabilization and disposition project at Idaho. To meet its accelerated completion date of 2012—down from 2018—DOE’s Idaho National Laboratory assumed its Advanced Mixed Waste Treatment Plant could process nuclear waste at a rate of about 8,500 cubic meters per year—more than 50 percent faster than the rate of about 5,400 cubic meters per year demonstrated when DOE established the baseline. At the time, because the plant had only recently begun operating, project staff lacked confidence that they could meet the processing rate. Moreover, the independent team reviewing the baseline reported that the rate was optimistically high. Nevertheless, DOE proceeded with the initial baseline, increasing the amount of unfunded contingency in its baseline and attempting to meet the optimistic rate by providing the contractor with performance incentives. Still, the processing rate has fallen short of baseline assumptions—it is currently roughly 6,000 cubic meters per year. To reflect this more realistic rate, DOE subsequently revised its baseline, adding 4 years to the project schedule and increasing costs by about $450 million. Radioactive liquid tank waste stabilization and disposition project at Savannah River. 
This project, in part, combines high-level radioactive waste stored in tanks at the Savannah River Site with melted glass and places it in canisters ultimately to be sent to a federal repository for disposal. DOE directed that the project’s completion date be accelerated, from 2035 in its early planning documents to 2019 in the initial baseline. In order to make that date, according to project officials, they included some assumptions in the initial baseline they knew at the time would be difficult to realize. Specifically, they assumed that the project’s waste processing facility could produce canisters consisting of up to 49 percent high-level waste—with the remaining space filled with melted glass—when at the time it had not been able to produce a canister containing more than 42 percent high-level waste with an existing technology while remaining within the acceptance criteria for the federal repository. Those criteria dictate specific characteristics, including durability and leachability for the glass-waste mixtures in the canisters. DOE has since adjusted these assumptions—the current waste processing plan assumes the canisters will contain 34 percent to 38 percent high-level waste using the existing technology—contributing to the overall cost increase and schedule delay for this project. These early baselines also were not tied to expected funding. According to several senior EM officials, before April 2007, project directors were instructed to create cost baselines to meet the accelerated schedules and their regulatory milestones without regard for the likely funding the projects could expect to receive. Consequently, the funding assumptions in the projects’ baselines were higher than the amount of funding DOE requested each year. According to a senior EM budget official, these shortfalls required project managers to continually adjust cost and schedule baselines as projects moved work activities into the out years to accommodate the lower funding levels. 
For example, according to site officials at Oak Ridge, when DOE did not request the full amount of funding in the nuclear facility decontamination and decommissioning project’s initial baseline, the project could not complete all the work as planned. Project managers responded by pushing work activities into the out years, which contributed, in part, to the project’s overall cost increase and schedule delay. Similarly, as noted in a recent DOE internal audit, according to Los Alamos officials, funding has not been sufficient to meet the site’s regulatory commitments and has been a concern since 2003, when the site manager stated that appropriate resources had not been identified to conduct the necessary environmental restoration activities. According to EM managers, they have implemented changes to the way baselines are created that address these problems. In April 2007, EM changed its policy for creating project baselines. Instead of tying baselines to the accelerated schedules and regulatory commitments with unconstrained funding, EM limited funding for its sites, directing that all future baselines be based on expected budget numbers generated for each site. For three of the projects we reviewed, this change in direction resulted in deferral of work and schedule delays because the new funding levels represented significant reductions in what projects were planning on receiving, and these projects were low on EM’s priority list. For example, Hanford’s solid waste stabilization and disposition project’s funding was reduced to the point where it will receive minimal funding for the next 4 years in order to allow full funding of Hanford’s decontamination and decommissioning project at River Corridor, a higher priority. During this period, to comply with the funding levels provided, the project will maintain minimum activities to safeguard materials and will not advance its waste processing goals.
As a result, according to project officials, life cycle costs for this project increased in part to reflect a longer schedule and the additional costs of having to hire and train new workers in the future to complete a job that already was underway. Not all sites have implemented these changes, however. EM’s direction that all sites tie their baselines to the funding profile outlined in the June 2007 policy memo has not been applied to two of the major cleanup projects. The Hanford radioactive liquid tank waste stabilization and disposition project—the most expensive cleanup project—and the Los Alamos soil and water remediation project have not aligned their baselines with the funding targets. The Hanford project’s baseline was validated just before the policy change took place and, for the period between 2009 and 2030, the baseline contains about $2.6 billion more than the funding targets. Similarly, EM approved the baseline for the Los Alamos project even though it was not aligned with the funding targets. The baseline identifies a projected funding shortfall each year through 2012 that peaks at a cumulative $236 million in 2010. This shortfall does not include an additional $947 million in unfunded contingency. At the same time EM approved the baseline, it directed project managers at the site to change the baseline to bring its costs in line with the targets. Another likely contributing factor to the cleanup projects’ cost increases and extended schedules is DOE’s practice of not including contingency funding in its annual budget requests for EM’s cleanup projects. Specifically, EM has requested enough funding for its cleanup projects to ensure a 50 percent likelihood of completing the projects within the total estimated project costs. However, the requested amount generally has not included contingency funding, which project managers may have to use in order to complete a project on time by addressing risks that materialize during cleanup.
For example, in 2007, the radioactive liquid tank waste project at Hanford had an unexpected spill of 85 gallons of radioactive material from one of its storage tanks; this spill required shutting down waste retrieval operations for 11 months in order to clean up the spill. Even though the retrieval operations represent a small percentage of the overall work scope ongoing at the project, the accident added at least $8 million to the retrieval cost for that one tank. Furthermore, in accordance with EM policy, projects are expected to account for the costs of such potential risks by increasing the amount of unfunded contingency in their near-term and life cycle baselines. Because funding for that contingency is not included in the budget request, however, increasing the amount of contingency funding in the near-term baseline is largely a paperwork exercise that has no active impact on preventing or solving problems or anticipating actions that could offset demonstrated slow progress. According to a December 2007 report by the National Academy of Public Administration, EM’s practice of not funding contingency for its cleanup projects has meant that EM has not had additional funding available to address emergency problems when they arise and therefore has either taken money from another project or extended the schedule of the work into future fiscal years to manage them. Furthermore, according to EM officials, by providing enough funding for its projects to ensure that they have a 50 percent chance of meeting their project cost and schedule baselines, EM recognizes that 5 of the 10 major projects are likely to miss their cost and schedule goals. In contrast, DOE funds its construction projects at a level that reflects a greater probability of success—80 percent—an amount that reflects the industry standard for such projects. 
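The difference between funding a project at a 50 percent versus an 80 percent confidence level can be illustrated with a toy cost-risk simulation. Everything below is an assumption for illustration only (the lognormal uncertainty model, its parameters, and the $2 billion base estimate) and does not represent EM's or OECM's actual risk analyses.

```python
# Toy Monte Carlo illustration of confidence-level funding (P50 vs. P80).
# The lognormal growth-factor model and its parameters are invented for
# illustration; they describe no actual EM project.
import random

random.seed(1)
base_cost = 2.0  # hypothetical point estimate, $ billions

# Simulate total cost as the base estimate times an uncertain growth
# factor, then read off percentiles of the sorted outcomes.
samples = sorted(base_cost * random.lognormvariate(0.0, 0.25)
                 for _ in range(100_000))
p50 = samples[len(samples) // 2]        # median: 50% chance funds suffice
p80 = samples[int(len(samples) * 0.8)]  # 80th percentile
print(f"P50 funding: ${p50:.2f}B, P80 funding: ${p80:.2f}B, "
      f"contingency above P50: ${p80 - p50:.2f}B")
```

In this illustration, budgeting at P50 leaves the gap between the two percentiles uncovered; a project funded at that level absorbs any realized risk through schedule extensions or transfers, which is the dynamic the report describes.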
According to senior EM officials, EM does not fund contingency for its cleanup projects because allotting enough funds to cover the costs of risks that may not materialize would constrain the amount of work EM could perform for the money it receives each year. However, in accordance with a recommendation from the National Academy of Public Administration, EM is evaluating its practice of not including contingency funding in its budget requests for cleanup projects. For most of the projects we reviewed, EM included assumptions in its baselines that (1) did not represent the actual conditions at some of the major projects, (2) did not sufficiently anticipate delays in the completion of related construction projects, and (3) did not anticipate increases in the scope of work activities to be accomplished. Correcting these assumptions often led to changes in the scope of work, higher costs, and extended schedules. First, for four of the projects we reviewed—Oak Ridge’s nuclear facility decontamination and decommissioning project, Idaho’s solid waste stabilization and disposition project, and Savannah River’s and Hanford’s radioactive liquid tank waste stabilization and disposition projects—site conditions were worse than project staff originally estimated, leading to significant changes to the life cycle baseline. For example, at the Oak Ridge project, because a 1940s-era building was far more contaminated and deteriorated than first estimated, DOE changed its cleanup plan and implemented a more extensive—and therefore more expensive—approach to tearing down the building. After a worker fell through a weakened floor, the contractor had to first reinforce the building’s structure so that contaminated equipment could be removed safely.
Primarily because project officials did not accurately anticipate the site conditions or the types of work activities necessary to safely conduct the work—despite multiple estimates generated by the contractor, DOE, and the Army Corps—this project’s costs increased by $1.2 billion and significant amounts of work were delayed, extending the completion date by 9 years, to 2017. Similarly, the initial baseline for the radioactive liquid tank waste stabilization and disposition project at Hanford assumed that 99 percent of the waste contained in the 177 storage tanks could be removed by using only one type of technology to retrieve the tank waste. However, DOE subsequently determined that almost half of the tanks contained a hardened layer of waste that could not be removed with the chosen technology and therefore a second technology was needed to remove this waste. Correcting the optimistic assumptions—adding the second technology and re-estimating the costs of retrieving waste from the tanks based on field experience gained––increased the baseline by more than $2 billion. Second, delays in completing related construction projects directly contributed to schedule delays––and corresponding cost increases—for four of the cleanup projects we reviewed. Three of these projects are at the Hanford site in Richland, Washington. The initial baselines for these projects included assumptions that the major construction project there— the Waste Treatment Plant (WTP)—would be ready to begin operations in 2011. In 2006, DOE extended the WTP construction completion date by 5 years, resulting in schedule extensions for three cleanup projects. 
The major cleanup project that will run the WTP—the radioactive liquid tank waste stabilization and disposition project—had to increase its life cycle cost estimate by about $4.8 billion and extend its schedule by 10 years in order to safely maintain the waste storage tanks while the treatment plant is being built and to operate the plant for additional years, among other things. Similarly, in response to the WTP delay, the schedules for the solid waste stabilization and disposition project and the soil and water remediation project were extended by 15 years—increasing costs by more than $4 billion combined. These projects cannot complete their missions until the WTP has finished processing all of the liquid waste in the storage tanks. According to the currently approved baselines, the liquid tank waste project will complete its operations in 2042, and activities under the latter two projects are not expected to be completed until 2050. However, as we recently reported, DOE has acknowledged that the start of waste treatment operations will be delayed by at least 8 years (from 2011 to 2019), not 5 years, which will likely affect further these projects’ costs and schedules. Third, for three of the projects we reviewed, increases in work scope—the activities required to complete the project—contributed to cost increases and schedule delays. For example, a major contributor to the more than $3 billion cost increase and at least 9-year schedule delay at the nuclear materials stabilization and disposition project at Savannah River was DOE’s approval of a new initiative in 2006 that added additional amounts of nuclear materials for the project’s facilities to disposition, including materials from other DOE sites. Those facilities were originally scheduled to complete their mission in 2007—the new scope extended the mission until 2019. 
Similarly, Savannah River’s other major cleanup project—radioactive liquid tank waste stabilization and disposition—also had significant scope added. Under a law passed in 2004, DOE determined that the salt waste in its tanks is not high-level waste and therefore can be disposed of at the site instead of in a geologic repository. The law required DOE to consult with the Nuclear Regulatory Commission when making this determination. According to DOE, this consultation and the resulting changes to the cleanup process added significant scope to the project, causing DOE to lengthen the estimated time to close the 49 tanks at the site. According to EM, most of the cost increases and schedule delays experienced by the major cleanup projects were the direct result of aggressive planning assumptions that were not realized. EM has since recognized that project baselines must be based on realistic technical and regulatory assumptions and be planned on the basis of realistic out year budget profiles. However, it appears that the practice of incorporating optimistic assumptions into project baselines has not yet been eliminated. As we recently reported, some of the underlying assumptions in the baseline for the Hanford radioactive liquid tank waste project may be overly optimistic. For example, DOE assumes that the tanks will remain viable throughout what has become a protracted waste treatment process, with some tanks expected to remain in service more than 60 years longer than originally anticipated. This extended operation raises the risk of tank failure and leaks to the environment. The baseline also assumes that emptying single-shell tanks will proceed significantly faster than it has to date. Hanford project management officials have since acknowledged that the ambitious retrieval schedule might not be achievable and are adjusting their planning estimates.
While DOE has several mechanisms in place to help manage cleanup projects, including independent reviews, performance information systems, guidance, and performance goals, it has not always used them to effectively manage major cleanup projects’ scopes, costs, and schedules. OECM’s independent reviews of the baselines, meant, among other things, to provide reasonable assurance that the project’s work activities can be accomplished within the stated cost and schedule, have not done so for four of the projects we reviewed. Instead, these baselines were significantly modified shortly after approval. As a result, the usefulness of the independent baseline reviews is questionable when significant baseline changes occur very shortly after the reviews are completed, as the following discussion illustrates. The advanced mixed waste treatment project under Idaho’s solid waste stabilization and disposition project. OECM’s 2006 independent review accurately noted that the project baseline submitted for validation for the treatment plant included an unrealistic rate for processing waste—more than 50 percent faster than the rate demonstrated at the time the baseline was established. In response, project officials proposed correcting the problem primarily by increasing the amount of unfunded contingency in the baseline, a move that reflected common practice within EM, and OECM officials approved this action and validated the baseline. As the reviewers predicted, the project’s actual processing rate after its baseline was validated was slower than expected. Within 7 months of OECM’s validation of the near-term baseline, project officials proposed modifying it. DOE had to defer the activities that the contractor was not able to accomplish in the near term, extending the project life cycle by about 4 years and increasing costs by about $450 million.
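The schedule consequences of an optimistic processing-rate assumption like the one flagged in the Idaho review can be sketched with simple arithmetic. The rates below are those cited in this report for the treatment plant; the remaining waste inventory is a hypothetical figure chosen purely for illustration.

```python
# How much a processing-rate assumption moves a completion date.
# Rates (cubic meters per year) are those cited for the Advanced Mixed
# Waste Treatment Plant; the remaining inventory is a made-up figure.
def years_to_finish(remaining_m3, rate_m3_per_year):
    """Years of operation needed at a constant processing rate."""
    return remaining_m3 / rate_m3_per_year

remaining = 40_000  # hypothetical cubic meters still to be processed
for label, rate in [("assumed in baseline", 8_500),
                    ("demonstrated when baselined", 5_400),
                    ("current", 6_000)]:
    print(f"{label}: {years_to_finish(remaining, rate):.1f} years")
```

With these illustrative numbers, operating at the demonstrated rate rather than the assumed one adds several years to the schedule, which is the direction of the roughly 4-year extension the project ultimately took.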
We believe that DOE’s approval of increasing unfunded contingency as a corrective action for an unrealistic processing rate was ineffective. Although DOE also attempted to increase the processing rates through contractor performance incentives, we believe DOE should have revised the baseline using a more realistic processing rate to calculate baseline cost and schedule before validating it. Oak Ridge’s nuclear facility decontamination and decommissioning project. Significant cost increases began 2 years before OECM’s independent validation of the project in 2006, and costs have continued to grow since. Specifically, life cycle costs for the project were estimated at $1.8 billion in 2004—the beginning of the project’s previous near-term baseline—with expected project completion by fiscal year 2008. By August 2006, when OECM completed its review of the baseline and issued its validation recommendation, life cycle costs for the project had grown to about $2.2 billion and project completion was extended by about 1 year. However, roughly 1 year after OECM validated the baseline, EM revised it again, adding about $800 million in costs and delaying project completion by an additional 8 years. EM justified the change because, among other things, it wanted to adjust the baseline to conform to new funding targets as directed by DOE in June 2007 and to account for other changes it needed to make in its approach to decontaminating the building. Los Alamos soil and water remediation project. In March 2008, EM approved an independent review of this project and the associated baseline, although it expected that the baseline would change. According to the EM memorandum approving the baseline, changes in EM’s priorities and funding plans were likely to necessitate changes to the Los Alamos project’s baseline, and the project was directed to submit a baseline change that would align the baseline with funding targets.
OECM officials also acknowledged that their independent review of the baseline was based on assumptions that would likely not prove to be true. Specifically, OECM’s review assumed that the project would receive the full funding needed even though DOE’s funding targets at the time were below the funding levels needed to comply with the state cleanup agreement. As a result, project officials expect that the estimated life cycle costs of nearly $1.7 billion will increase substantially during 2008 but could not tell us the extent of the cost and schedule change until they receive DOE’s new funding commitments for the project. Hanford’s radioactive liquid tank waste stabilization and disposition project. The most significant cost increase—more than $9 billion—occurred about 2 years after DOE’s initial independent review and approval of this project. The project’s baseline was first approved in 2004, with life cycle costs expected to be about $22 billion and completion scheduled for 2032. However, in 2006, life cycle costs increased to about $31 billion—not including an additional $8.6 billion in unfunded contingency—and the completion date was extended by 10 years, to 2042. Project officials expect the baseline will require another update and independent review in 2009 to reflect anticipated changes as a result of the project’s new contractor and because of changes resulting from ongoing negotiations with state regulators over regulatory agreement milestones. In addition to changes to the baselines soon after the independent reviews, DOE has recently relaxed standards used for conducting these reviews. In 2003, DOE issued standard operating procedures for conducting independent reviews—primarily of construction projects. These procedures stated that baselines should be considered, once approved, as set in concrete.
The EM-OECM 2005 protocol—and its 2007 update—for cleanup projects replaced the standard operating procedures and directed OECM to validate only the near-term baseline for cleanup projects while reviewing the life cycle estimate “for reasonableness.” In this way, EM and OECM sought to acknowledge what they believe are the greater uncertainties present in the out-years of a cleanup project compared with a typical construction project. However, within a year of the 2007 protocol, OECM had changed its approach for EM cleanup projects from validating baselines to “certifying” them, which is a more limited statement of assurance than validation. Specifically, according to OECM officials, certification means that the near-term baselines are reasonable if near-term baseline costs are funded as outlined in the baseline and contingency funds are provided as needed. The change is intended to reflect OECM’s belief that, because funding for cleanup projects is more uncertain than for construction projects, the same confidence level cannot, and should not, be applied to reviews of EM cleanup project baselines as it is applied to construction projects. Since EM headquarters does not consistently provide contingency funds for its cleanup projects, and half of the major projects have significant contingencies in their near-term baselines, the most likely result for projects experiencing problems is to extend schedules and increase life cycle costs. In commenting on a draft of this report, OECM stated it intends to go back to validating near-term baselines for cleanup projects, assuming, in part, that funding becomes more stable and EM gains greater experience managing near-term baselines.
DOE site and headquarters staff generate a number of regular reports to update senior managers on the status of these projects, both to justify making significant changes to project baselines and to request funding from Congress. Although these reports provide valuable information to managers on the progress of work at cleanup sites around the country, they do not consistently provide the key information needed to make fully informed management decisions about EM’s major cleanup projects. Specifically, (1) proposals for baseline changes do not consistently identify reasons for proposed changes or possible root causes that contributed to problems, (2) use of EVM data does not consistently conform to industry standards or GAO’s best practices, (3) quarterly reports do not always describe the impact of contractor performance on near-term or life cycle costs and schedules, and (4) reports to Congress on the status of and changes to major cleanup projects are limited to a small snapshot in time and do not provide information necessary for effective oversight. When a project reaches a point at which it is likely to miss the goals in its baseline, project managers are required to propose changes to the project’s cost, schedule, or scope baseline, a process that is akin to hitting the reset button. EM project managers request such a change by, among other things, documenting certain information in a Baseline Change Proposal report, including current approved costs and new proposed costs, proposed project start and end dates, and a justification for the changes. For the key change proposals we reviewed for the major cleanup projects, the information provided describing the changes and their impacts varied widely, with some projects providing little to no explanatory information about what led to the change and others explaining the causes of the changes in detail. 
For example, a change proposal for Hanford’s nuclear material stabilization and disposition project simply described the project’s scope of work and did not provide any explanation for why the project’s schedule was being delayed by 3 years, while a proposal from Savannah River’s radioactive liquid tank waste stabilization and disposition project included information on the causes for its cost and schedule changes, as well as the specific cost and schedule impacts of each cause. However, the change proposals we reviewed generally did not address the root causes that resulted in the changes to the baseline. For example, the Savannah River change proposal explained that almost $500 million of the total proposed cost increase was due to revising the strategy for finishing the project. However, the proposal did not explain why this strategy needed revision. In investigating the reason for this proposed revision, we determined that a robust strategy for finishing the project was not included in the original baseline because the project was directed to meet a completion date of 2025 and could not do so if it included the thorough closure strategy. Without including this kind of information in the proposals, it would be difficult for EM managers to effectively identify the true causes of the baseline changes, take steps to address them, and transfer any lessons learned to other projects. In addition, EM does not centrally gather and systematically analyze the narrative information in the baseline change proposals. We recognize that such information is not easily analyzed to identify common causes across projects. However, without such analysis, EM senior managers are potentially hindered in addressing problems collectively. 
One EM project management official agreed that having the ability to analyze the information in the change proposals across projects would be beneficial, but said that his office had not yet made it a priority to collect this information because it was still addressing reliability issues with the data in the change proposals. EM has made some effort to identify root causes of its project management problems. It recently participated in a DOE-wide effort to identify root causes of project and contract management problems in response to GAO’s inclusion of DOE’s contract management on its high-risk list. However, DOE’s analysis was focused more on construction projects than EM cleanup projects. The report notes that the emphasis of the effort was on the capital line item—construction—projects, but that several of the issues identified also are applicable to other projects, including EM cleanup projects. According to one project participant from OECM, the participants discussed how some of the issues raised related to cleanup projects but they did not examine those projects as extensively as the construction projects. In commenting on a draft of this report, DOE explained that its analysis was based more on data from construction projects than EM cleanup projects because more data exist documenting DOE’s past project management deficiencies for construction projects since those projects have a longer history of a structured, disciplined management process. At three of the major cleanup projects—nuclear facilities cleanup at the Hanford Site’s river corridor closure project, solid waste stabilization and disposition at Idaho National Laboratory, and soil and water remediation at Los Alamos National Laboratory—we found several instances in which the use of EVM data did not conform to industry standards or our best practices. As a result, EM and site project managers using the data may be less able to make informed decisions to effectively manage these projects. 
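As background for the findings that follow, the comparisons an earned value management system performs can be sketched in a few lines. The figures below are purely illustrative and are not drawn from any EM project; the metric names follow common EVM usage rather than any DOE-specific reporting format.

```python
# Illustrative earned value calculations (standard EVM metrics).
# All dollar figures are hypothetical, not drawn from any EM project.

def evm_metrics(planned_value, earned_value, actual_cost):
    """Return the standard EVM variances and performance indices."""
    return {
        "cost_variance": earned_value - actual_cost,        # CV > 0: under cost
        "schedule_variance": earned_value - planned_value,  # SV > 0: ahead of schedule
        "cpi": earned_value / actual_cost,                  # cost performance index
        "spi": earned_value / planned_value,                # schedule performance index
    }

# A project that planned $100M of work, earned $90M, and spent $110M:
m = evm_metrics(planned_value=100.0, earned_value=90.0, actual_cost=110.0)
print(m["cost_variance"])   # -20.0: $20M over cost
print(m["spi"])             # 0.9: behind schedule
```

Anomalies of the kind described below, such as negative actual costs or costs booked against unscheduled work, distort exactly these variances and indices.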
Data anomalies. For all three projects, the EVM systems we assessed contained data errors or anomalies that could potentially distort the analysis of EVM data. Anomalies included, for example, reporting negative actual costs or reporting costs that are not tied to work scheduled or performed. The Los Alamos EVM data contained both types of these anomalies, which may have distorted the results of data analyses by as much as $34 million, preventing managers from understanding the true status of project performance. According to project officials, the anomalies occurred primarily because Los Alamos had initially assigned costs to a general account, and waited up to several months before assigning these costs to the correct specific work activities. In another case, in a significant number of instances the contractor at Hanford’s river corridor closure project reported costs incurred for work activities performed that had not been scheduled to start until future years, skewing the reported performance results. The contractor explained that these data anomalies occurred because it had performed work sooner than originally expected—and therefore the work was not incorporated into the project’s EVM planned schedule in the periods for which it was actually performed. Project officials at the site stated that they believe the EVM information, as reported, correctly represents the project’s status. As such, the summary-level EVM data seem to depict a favorable schedule performance in April 2008; however, our independent analysis of this data shows that when we removed the value of the work that was started and completed ahead of schedule, the remainder of the originally scheduled work was actually behind schedule in April 2008, and trends indicated that the variance was worsening. Data on the availability of staff to perform future work was not always developed. 
For one of the projects we reviewed, the EVM system lacked important information on staffing, contrary to GAO-identified best practices. DOE officials at Los Alamos’ soil and water remediation project told us they plan to begin asking for staffing information from the contractor, and contractor officials stated they are setting up a staffing report within their EVM system. Without these data, project managers lack the information necessary for ensuring that they have, or will have, an adequate number and type of staff to perform the upcoming scheduled work. Reliability of earned value systems is questionable. Of the three EVM systems we assessed, OECM has certified only one as meeting the required industry standards. The EVM system used by the contractor operating the advanced mixed waste treatment project—a significant portion of the solid waste stabilization and disposition project at the Idaho National Laboratory—has not been reviewed by OECM to determine whether it is compliant with industry standards, and contractor officials stated they believed their system does not meet the standards. In addition, OECM was in the process of reviewing the system used by the contractor responsible for the soil and water remediation project at Los Alamos National Laboratory at the time of our review. As a result, these projects lack the necessary assurances that the EVM data were free of errors and anomalies that could skew and distort the EVM analyses. Once a system is certified as meeting the standards, regular surveillance is needed in order to ensure its continued compliance. Surveillance allows managers to focus on how well a contractor is using its EVM system to manage cost, schedule, and technical performance, and is important because it monitors problems with performance and the EVM data. If these kinds of problems go undetected, EVM data may be distorted and not meaningful for decision making. 
OECM’s surveillance program is under development: it recently hired one staff person to lead its surveillance efforts, and is developing a guide to better define its surveillance protocol. DOE also requires its sites to perform surveillance of EVM monthly contractor performance data, which includes developing EVM surveillance plans and conducting random EVM surveillance. Furthermore, EM managers do not appear to consistently gather or analyze EVM data to maximize the data’s benefits for project management. GAO best practices recommend that EVM system reports include thorough narrative explanations of any root causes of, and proposed corrective actions for, reported cost and schedule variances shown in the data. For the soil and water remediation project at Los Alamos, for example, EM did not require that this information be reported by its contractor. As a result, EM project managers at Los Alamos have not always received the information necessary for ensuring that effective corrective actions are implemented to prevent additional changes to the cost and schedule baselines. According to contractor officials, they reported information on root causes and corrective actions to EM routinely before fiscal year 2008, but DOE asked them to stop providing it. According to the project director for the soil and water remediation project at Los Alamos, the Los Alamos Site Office Assistant Manager had directed the contractor not to provide the variance reports as part of its project status reviews because the contractor’s explanation of these variance reports during scheduled meetings was taking several hours to review and because the Assistant Manager wanted instead to use the available time to focus more on risk management and other project issues. However, according to this site official, the site office’s direction was not intended to discontinue all variance analysis reporting. 
Although the contractor discontinued including the variance analyses reports in its project status reviews, the project director stated that DOE continues to obtain information from the contractor by other means, such as cost performance reports and weekly contractor meetings at which DOE and the contractor discuss the root causes of variances that resulted in risks to meeting milestone compliance agreements. However, contractor cost performance reports we reviewed did not provide any narrative information on causes or corrective actions. Furthermore, the weekly contractor meetings discuss only certain root causes of the variances that resulted in risks to milestone compliance agreements and therefore are neither comprehensive nor documented. Because verbal information can easily be forgotten, lost, or misinterpreted, among other things, we believe that a written report would be a best practice. In addition, EM projects report their EVM data to headquarters managers at the project summary level, which can mask problems occurring in the project that more detailed reporting could reveal. At Idaho, in early 2008, EVM data showed the solid waste stabilization and disposition project was performing ahead of schedule and under cost, although major problems had occurred at the advanced mixed waste treatment project––the primary subproject. Without EVM reports that contain more specific detail, project managers at headquarters may not recognize that a problem is occurring until it becomes large enough to recognize at the summary project level of reporting. In addition, greater detailed information provided to managers earlier in the project potentially could allow for early intervention. 
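The masking effect of summary-level reporting can be illustrated with hypothetical numbers: when earned value and actual cost are rolled up across subprojects, a strong performer can offset a troubled one. The figures and contractor split below are invented for illustration and are not the actual Idaho data.

```python
# Hypothetical illustration of how summary-level EVM reporting can mask a
# poorly performing subproject. Figures are invented, not actual project data.

subprojects = {
    # name: (earned_value, actual_cost) in $ millions
    "site_contractor_work": (300.0, 260.0),   # performing well
    "adv_mixed_waste_trtmt": (100.0, 135.0),  # in trouble
}

total_ev = sum(ev for ev, _ in subprojects.values())
total_ac = sum(ac for _, ac in subprojects.values())
print(f"project-level CPI: {total_ev / total_ac:.2f}")  # ~1.01: looks fine

for name, (ev, ac) in subprojects.items():
    print(f"{name}: CPI {ev / ac:.2f}")  # 1.15 vs. 0.74: the problem is visible
```

Only the detailed, subproject-level indices reveal the kind of problem that, as described above, went unrecognized at the summary level.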
Beyond more detailed reports, some project managers in the field and at headquarters have not always systematically reviewed or independently analyzed the EVM data they received, which also would help improve their understanding, as well as mitigate potential problems occurring within a project. At one site we visited, the DOE official receiving the data said he did not analyze the information before entering it into the EM headquarters database. In turn, headquarters EM project managers told us they also do not analyze the EVM data the projects report. One oversight official indicated he would prefer to analyze the information he receives from the projects but he did not have the time required to do so. A senior EM project management official told us that he recognizes this deficiency and is working to address it: EM intends to pilot a new software package that will allow managers to analyze EVM data. According to EM, the software will enable EM managers to drill down into the EVM data received from the contractors, thus improving their oversight capabilities. In addition, according to EM project management officials, EM has insufficient federal staff to conduct oversight, which is being addressed as part of an ongoing effort to improve project management. In commenting on a draft of this report, EM stated it also intends to provide additional EVM training for its analysts. In accordance with Order 413, EM senior managers, including the Assistant Secretary, receive quarterly updates on the status of the major cleanup projects. Two key reports are the quarterly project reviews (QPR), generated by EM project managers, and a quarterly project status report created by OECM. These reports contain contractor performance data and information about new or ongoing issues that need addressing at the sites, but do not always describe how contractor performance affects performance against the near-term or life cycle baselines. 
Without this information, managers cannot develop a comprehensive assessment of progress against agreed-upon goals. The QPRs and OECM quarterly reports we reviewed largely use EVM data to assess project performance, but these data only reflect performance against the current contract period. Current contract period start and end dates do not line up with the start and end dates of the near-term baselines for any of the major cleanup projects we reviewed, and contract goals have not always been tied to what would be necessary to meet near-term baseline goals. For example, we found the EVM data for Idaho’s solid waste stabilization and disposition project—including the advanced mixed waste treatment subproject—that was reported in the QPRs and OECM quarterly reports from early 2008 did not line up with the near-term baseline because the advanced mixed waste treatment project’s contract period was not the same as the near-term baseline period, which ends in 2012. EVM data for this project are reported as a combination of work done by two contractors: disposal of low-level and mixed-low-level waste, among other things, by the major site contractor, whose contract runs through 2012, and the advanced mixed waste treatment project operations contractor, who, in early 2008, was operating under a contract extension that expired in April 2008, 4 years shy of the end of the near-term baseline. In addition, according to project officials, the goal of processing 15,500 cubic meters of waste contained in that contract extension was not based on what was necessary to meet the near-term baseline goal of processing 65,000 cubic meters of waste by 2012, which was DOE’s commitment at the time of the extension. 
Since the advanced mixed waste treatment project’s activities make up about 75 percent of the cost baseline for the overall project, EVM data for this project as reported in the QPRs and OECM quarterly reports were not an accurate indicator of how the project was performing against the approved near-term baseline. DOE has further extended the advanced mixed waste treatment project contract through September 2009, and project officials explained the current extension is better linked to the current baseline, meaning EVM data reported should represent a better indication of performance against that baseline. In addition, although the QPRs we reviewed include data on current life cycle cost and schedule estimates, they do not always include information about changes to the schedule or scope, nor do they explicitly mention when a change to the baseline has been proposed. Instead, the QPRs generally present information on life cycle cost increases and provide comparisons to original baselines. QPRs also contain a schedule for each project detailing key milestones and expected end dates. However, when a change to a project completion date is made, the schedule shown in the QPR in most cases does not preserve the original completion date as a point of comparison. Similarly, there does not appear to be any mechanism in the QPR to present a change in a project’s scope of work, for example, a move of some work activities from the near term into the out years. As a result, the reports tell only that life cycle costs have increased, but corresponding changes to schedule and scope are not apparent. Furthermore, there is no clear place in a QPR for a project manager to mention that a baseline change proposal has been submitted to headquarters if the results of that proposal are not yet presented in the life cycle cost or schedule information in the report. 
Including mentions of pending change proposals may help ensure senior managers clearly understand the true state of a project’s performance. A key performance indicator used in OECM’s quarterly reports also may create the impression that a project is performing well overall when it is in fact encountering problems. As directed in the 2007 protocol for cleanup projects, OECM uses a traffic light indicator—red-yellow-green—as an at-a-glance way to highlight developing problems for DOE managers. This indicator is intended to represent expected performance against the approved near-term baseline and is based largely on EVM data. However, since projects encountering problems tend to manage those problems by moving work scope into the out years, the effects of problems occurring today show up as increases to out-year cost and schedule estimates and not as increases or delays in a near-term baseline. Therefore, a project rated “green” by OECM may simultaneously be experiencing increases in overall life cycle costs and delays in project completion. OECM officials agreed that it would be beneficial to present projected impacts of current performance on life cycle estimates wherever practical in its reports. DOE’s reports to Congress do not include key information that would aid oversight efforts, including the extent of and reasons for significant changes to near-term and life cycle baseline estimates, and the status of estimated life cycle costs. DOE’s annual budget request to Congress for fiscal year 2009 for EM included funding requests for each site and each project, as well as the funding appropriated in fiscal years 2007 and 2008. The budget request also contains, among other things, descriptive information about the sites and projects, including EM’s major cleanup projects, and about cleanup goals, regulatory frameworks, and key uncertainties. However, the request did not provide any project-specific life cycle costs or completion dates. 
In the previous three budget requests, EM had provided life cycle costs and planned completion dates for each project. Without this information, Congress cannot know what progress each project has made and the extent of work still needed, cannot understand how the project may be changing and has changed over time, and cannot know whether the project experienced problems since the previous budget request and the reasons for these problems. The absence of this information makes it more challenging to effectively oversee the department and its major cleanup projects. DOE has not been directed to provide such information about its major cleanup projects to Congress. In contrast, Congress has required the Department of Defense to report annually on its major defense acquisition programs—those costing $2 billion or more and typically consisting of a weapons system, such as Navy ships or fighter planes—or report quarterly when programs are experiencing significant cost increases or schedule delays. Congress established the reporting requirement to improve oversight of these defense programs by providing visibility and accountability for any growth in cost that may occur. Known as Selected Acquisition Reports, each annual report includes information on full life cycle program costs, unit costs—the cost per plane or ship—and the history of those costs. A quarterly report also includes reasons for any change in unit cost or program schedule since the previous report, information about major contracts under the program and reasons for any cost or schedule variances, and program highlights. In addition, the Department of Defense includes development and procurement schedules, with estimated costs through program completion, in its annual budget justification submissions to Congress. 
EM’s key policies for managing its cleanup projects—including developing project baselines, managing risk, and planning for contingency funding—are not consolidated but spread across various guidance documents and memos and provide contradictory and confusing information. Although Order 413 serves as the overarching policy document for project management, according to EM, the order contains requirements that are unnecessary or expensive and awkward to implement for cleanup projects. EM thus has issued numerous memos outlining the way in which its project managers should implement the order. See table 3 for a list of key memos we identified that contribute to project management guidance and policy for EM cleanup projects. As the table shows, rather than having a consolidated source for guidance, EM project managers must consult multiple sources to determine how to correctly create a baseline or calculate contingency funding for a project. Furthermore, some of EM’s guidance includes vague language and various exceptions to rules, which are likely to contribute to a project manager’s difficulty in determining how to implement EM policy. For example, according to the April 2007 protocol for cleanup projects, once a contract is awarded and a detailed near-term baseline is developed, a follow-up independent review will be required if the baseline (1) exceeds the previously validated near-term baseline costs by 15 percent or more, (2) increases the schedule by a year, or (3) modifies scope significantly. The first two conditions for requiring a follow-up review are tied to fairly precise numbers—15 percent and 1 year—although there could be some question as to whether these numbers are to be applied to the original or reset baseline calculations, especially for projects that have been extended multiple times. However, the protocol provides no parameters for determining when the third condition, a “significant” scope modification, has occurred. 
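The protocol’s trigger conditions, as we read them, amount to a simple decision rule. Writing the rule down makes the ambiguity concrete: the third condition has to be represented as an undefined judgment call. This is an interpretive sketch of the conditions described above, not DOE’s own implementation.

```python
# Sketch of the April 2007 protocol's follow-up review triggers, as described
# above. The scope test is a placeholder: the protocol provides no parameters
# for what counts as a "significant" scope modification.

def follow_up_review_required(validated_cost, detailed_cost,
                              schedule_slip_years, scope_modified_significantly):
    cost_growth = (detailed_cost - validated_cost) / validated_cost
    return (cost_growth >= 0.15                # condition 1: costs up 15% or more
            or schedule_slip_years >= 1        # condition 2: schedule up a year
            or scope_modified_significantly)   # condition 3: undefined judgment call

# A baseline that grew from $1.0B to $1.2B (20 percent) triggers a review:
print(follow_up_review_required(1000.0, 1200.0, 0, False))  # True
```

The sketch also leaves open the question the report raises about whether `validated_cost` should be the original or the most recently reset baseline.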
In addition, agency officials were not able to provide us with formal documentation of a significant shift in policy. As explained earlier, OECM recently shifted from validation to certification of the cleanup projects’ near-term baselines. In response to our request for documentation of the switch to certification, OECM provided us with an e-mail from an OECM official to a DOE Inspector General auditor that defined certification and explained the reasons for the change. According to this e-mail, the change was made to acknowledge OECM’s belief that EM cleanup projects should not be reviewed under the same standard as construction projects. The OECM official also directed us to DOE’s fiscal year 2009 budget request for an explanation of the new approach. While the budget request includes a description of baseline certification, it neither mentions that the certification is a departure from the previous policy nor serves as an adequate means of communicating a significant policy change. Furthermore, different guidance documents appear to be in conflict with one another. Specifically, EM’s 2006 memo outlining its policy on contingency funding explained that DOE’s risks associated with implementing a project are covered through contingency that is part of the “unfunded” portion of the baseline; that is, its funding is not requested or budgeted in advance of when it may be needed. However, a 2008 EM memo primarily concerned with explaining a new process for entering baseline changes into a database contains a description of the elements of a near-term baseline that includes a line for “other funded contingency,” which has been interpreted by some EM officials as including DOE contingency. If, according to the 2008 memo, some DOE contingency should be funded—requested in advance—that memo directly contradicts the guidance provided in the 2006 memo. 
However, although the 2008 memo states it is updating the baseline change process, it does not specifically state that it replaces any part of the 2006 memo. In part because of this confusion, project managers at cleanup sites have been implementing EM’s contingency policy differently. According to EM officials, recent independent reviews have alerted senior EM officials to this inconsistent implementation of the policy guidance. The review teams found that the project managers were using a variety of methodologies to calculate the contingency for their projects. As a result, according to one EM official with expertise in contingency, managers were likely underestimating the amount of contingency needed for their projects. To address this problem, EM senior managers directed the creation of a contingency implementation guide to provide a definitive interpretation of existing EM policy on contingency, and this guide is expected to be issued in September 2008. Furthermore, at least one of DOE’s policies—on independent reviews of cost estimates—is not being implemented at all. According to Order 413 and the April 2007 protocol, an independent cost estimate—a top-to-bottom, independent estimate that serves to cross-check a cost estimate developed by project officials—should be developed as part of the OECM review process for major projects when “complexity, risk, cost, or other factors create a significant cost exposure for the Department.” We believe that a review of a major cleanup project, given its level of expected spending over the near term, would meet the criteria for requiring an independent cost estimate. According to an OECM official, OECM has not performed an independent cost estimate for any of EM’s major cleanup projects, primarily because OECM lacks the resources required to perform this type of rigorous estimate for the projects. 
Instead, OECM has taken a less rigorous and less expensive approach in its reviews—examining cost estimates generated by the projects but not producing a separate estimate for comparison. According to DOE officials, the department is addressing some of these guidance issues. By the end of September 2008, officials told us, DOE plans to replace its manual directing implementation of Order 413 with a series of 16 guides. The guides are expected to cover a range of project management issues, including risk management and contingency funding, with one guide providing direction on the management of EM cleanup projects. In addition to the guides, as part of an EM-wide effort to improve project performance, EM has issued 18 recommended priority actions that contain additional EM-specific requirements for cleanup projects. It is unclear whether the guides and priority actions are expected to supplant all other guidance, or whether they will adequately address the challenge project managers face in determining the most up-to-date, comprehensive guidance to be followed. According to EM senior managers, EM cleanup projects are significantly different from DOE’s construction projects in a number of ways. That is, it is harder in many instances to clearly define up-front requirements for cleanup projects, and there are more unknowns, especially since some of these projects are the first of their kind, with undefined scopes of work and significant risks scheduled many years into the future. Because of these differences and because it has said changing budget priorities may affect funding over time, DOE recently changed its performance goal—the amount of work to be accomplished and the cost margin for accomplishing that work—for EM cleanup projects to reflect a much larger margin of error than the performance goal set for construction projects. 
Before 2008, a major cleanup project was measured against the same goal as a construction project: achieve at least 100 percent of the scope of work in its baseline with less than a 10 percent cost increase over the life of the project. However, EM’s current cleanup project performance goal applies only to the near-term baseline, and the projects now are considered to be successful if they achieve at least 80 percent of the scope of work in their near-term baselines with less than a 25 percent cost increase. The new performance goal permits up to 20 percent of the scope of work to be deferred from the near term to out years, which creates a substantially greater risk that life cycle costs will continue to increase and that completion dates will be delayed. As a result, for example, under this goal the four major projects each expected to cost more than $2 billion in the near term could increase their costs by $500 million each over that period and be considered successful. Furthermore, because a directed change— defined as a change caused by DOE policy, or regulatory or statutory actions—already exempts projects from meeting the performance goals, creating a less stringent goal for EM cleanup projects further waters down the impact of having a performance goal in the first place. By lowering expectations for adhering to near-term baselines, DOE inadvertently may be creating an environment in which large increases to project costs become not only more common, but accepted and tolerated. EM is undertaking a number of efforts to improve its project performance and address long-standing problems. One such effort is EM’s “Best-in-Class” Project Management Initiative through which EM leadership has committed to improving project performance. Under the initiative, EM contracted with the Army Corps to assess the current status of project management at EM headquarters and its offices. 
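The arithmetic behind the relaxed performance goal described above can be sketched directly from the thresholds stated in the report: the pre-2008 goal (at least 100 percent of scope with under 10 percent cost growth, measured over the project life) versus the current goal (at least 80 percent of near-term scope with under 25 percent cost growth). The example project figures are illustrative.

```python
# Sketch of EM's pre- and post-2008 cleanup performance goals, using the
# thresholds described above. scope_achieved and cost_growth are fractions.

def meets_old_goal(scope_achieved, cost_growth):   # applied over the project life
    return scope_achieved >= 1.00 and cost_growth < 0.10

def meets_new_goal(scope_achieved, cost_growth):   # applied to near-term baseline only
    return scope_achieved >= 0.80 and cost_growth < 0.25

# An illustrative $2B near-term project that defers 20% of its scope to out
# years and grows about $480M (24%) in cost:
scope, growth = 0.80, 0.24
print(meets_old_goal(scope, growth))  # False
print(meets_new_goal(scope, growth))  # True: counted as successful
```

The sketch shows how a project can fail the old standard by a wide margin yet satisfy the current one, which is the risk the report identifies.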
Using the Army Corps’ analysis, EM identified a set of challenges it faced in executing its mission, which resulted in the creation of the 18 priority actions for it to undertake to address the challenges and implement its initiative. Those priority actions include, among others, completing DOE’s project management guide, which is expected to bring all project management guidance documents under one umbrella document; establishing standard reporting formats for project updates produced by project managers, including QPRs; implementing new project management software packages, including those for EVM analysis; and better integrating its project and contract management activities. EM has developed a set of implementing steps and a summary of expected benefits for each priority action. According to EM, 10 of the priority actions are being implemented in fiscal year 2008, and 5 of those are scheduled to be completed by the end of that fiscal year. It appears that execution of the priority actions would create new tools and potentially enhance existing ones in EM’s effort to improve its cleanup projects’ performance. According to EM, full implementation of the priority actions will address many EM project management problems and deficiencies. However, since the actions are still being implemented, it is too soon to determine their effectiveness. In addition, EM officials acknowledged that the actions they are implementing to improve the management of EM’s overall cleanup efforts, including their Best-in-Class initiative and actions being taken in response to the 2007 National Academy of Public Administration report, have not been formally documented in a specific corrective action plan that includes performance metrics and completion milestones. These officials agreed that such a comprehensive plan would demonstrate a more integrated and transparent commitment to improving the management of EM’s cleanup projects. 
Cleaning up the nuclear weapons complex is a technically challenging and risky business. Even as DOE works to gain control of and better manage its major nuclear waste cleanup projects, cost increases and project delays continue to mount. Specifically, life cycle costs for EM’s major cleanup projects have increased by a cumulative $25 billion over the past few years, and schedules have been extended by a combined total of more than 75 years, primarily because DOE had to adjust the optimistic baselines it created to accommodate the realities it has encountered at its cleanup projects. Given the cost and complexity of the major nuclear cleanup projects, it is critically important that DOE fully use the tools it has developed—independent reviews, performance information systems, guidance, and performance goals—to better ensure that projects stay within established parameters for scope of work, costs, and schedule. Independent baseline reviews to ensure that the work promised can be completed on time and for the estimated cost appear to be a useful planning tool, but the significant changes that have occurred within years or even months of the baseline reviews and validations indicate that implementation of these reviews has fallen short. Furthermore, EM’s site proposals for changes to cost and schedule baselines, quarterly performance reports, earned value data analysis and reports, and reports to Congress do not consistently provide accurate and comprehensive information on the status of projects, which undermines managers’ and Congress’s ability to effectively oversee projects and make timely decisions, such as targeting resources to particular projects or renegotiating cleanup milestones and other contract conditions.
These problems are compounded by the lack of comprehensive and clear guidance to help DOE project managers consistently implement DOE management policies across the projects, and by EM’s recently relaxed performance goals establishing the acceptable baseline change parameters for major cleanup projects. Although DOE has identified a number of improvements it intends to make to its project management approach, it is still in the early stages of implementing these improvements, making it too soon to assess the effort’s full effect, and it has not yet formally documented all the improvements in a comprehensive corrective action plan. So that DOE can better manage its major cleanup projects and more fully inform Congress on the status of these projects, we recommend that the Secretary of Energy direct the Assistant Secretary for Environmental Management to take the following five actions:
- Include in its budget request to Congress life cycle baseline cost estimate information for each cleanup project, including prior year costs, estimated near-term costs, and estimated out-year costs.
- Develop an approach to regularly inform Congress of progress and significant changes in order to improve EM’s accountability for managing the near-term baseline and tracking life cycle costs. Similar to the Department of Defense’s Selected Acquisition Reports, which include annual information on full life cycle program costs, among other things, EM’s report, at a minimum, should compare estimated near-term and life cycle scope, cost, and schedules with the original and subsequently updated baselines, and provide a summary analysis of root causes for any significant baseline changes.
- Expand the content of EM performance reports to describe the implications of current performance for the project’s overall life cycle baseline, including the near-term baseline cost and out-year cost estimate, using, when appropriate, valid earned value data that conform to industry standards and GAO-identified best practices.
- Consolidate, clarify, and update its guidance for managing cleanup projects to reflect (1) current policy regarding independent baseline reviews and (2) the results of DOE’s determination of the appropriate means for calculating and budgeting for contingency so that project managers can consistently apply it across nuclear waste cleanup sites.
- Consolidate all planned and ongoing program improvements, including those stemming from the Secretary’s contract and project management root cause analysis corrective action plan, the Best-in-Class initiative, and the 2007 National Academy of Public Administration report, into a comprehensive corrective action plan that includes performance metrics and completion milestones.
Because independent baseline reviews have not always provided reasonable assurance of the stability of projects’ near-term baselines or the reasonableness of the life cycle baselines, we recommend that the Secretary of Energy direct the Director of the Office of Management to take the following action:
- Assess the Office of Engineering and Construction Management’s current approach and process for conducting baseline reviews of EM cleanup projects to identify and implement improvements that will better provide reasonable assurance that project work scope can be completed within the baselines’ stated cost and schedule. Consider including in the assessment process an analysis of past lessons learned and reasons for baseline changes, and an assessment of project affordability when conducting baseline reviews.
We provided a draft of this report to DOE for its review and comment.
DOE agreed with our recommendations but provided some suggested changes to them, which we incorporated as appropriate. In addition, DOE provided some specific comments on our draft report. First, DOE stated that the report should provide a more balanced and accurate portrayal of EM’s cleanup projects by including descriptions of ongoing initiatives, a number of which EM launched in recognition of the need for improvement, as well as providing better context of the challenges and constraints the department’s cleanup program faces. The draft report included a brief description of EM’s ongoing initiatives, including its Best-in-Class effort, and acknowledged many of the key challenges DOE faces while illustrating the factors contributing to changes in scope, cost, and schedule for its cleanup projects. We also acknowledged DOE’s ongoing initiatives and progress in a 2007 report on project management. In addition, DOE cited its successes in the cleanup of Rocky Flats and Fernald as evidence of its project management accomplishments. We commend DOE on its past performance in successfully cleaning up these sites, which has resulted in some lessons learned that DOE can apply to other cleanup efforts, as we reported in 2006. Nevertheless, we found in this review that DOE has not always effectively used its management tools to help oversee the scopes of work, costs, and schedules for its present major cleanup projects. Second, DOE stated that our draft report appears to confuse the term “baseline.” It noted that there is only one project baseline—the near-term baseline approved by EM senior management—for which DOE should be held accountable. Our use of the term “baseline” in this report conforms to EM’s guidance documents indicating a project’s “lifecycle baseline” is composed of its prior year, near-term, and out-year costs. In addition, we disagree with DOE’s assertion that it should be held accountable only for a project’s near-term baseline. 
As we state in this report, since projects encountering problems have tended to manage those problems by moving work scope into the out years, the effects of problems occurring today show up as increases to out-year cost and schedule estimates and not as increases or delays in a near-term baseline. Therefore, if DOE’s performance is measured solely on the basis of the near-term baseline, potentially significant cost and schedule increases would not be accounted for or transparent. Third, DOE stated that one of our recommendations—to consolidate, clarify, and update its guidance for managing cleanup projects to reflect the results of DOE’s determination of the appropriate means for calculating and budgeting for project contingency—could be more specific, and it outlined three contingency options. These options include (1) increasing the amount of contingency funding for cleanup projects to an 80 percent confidence level, the level budgeted for construction projects; (2) creating a general contingency fund available for project managers at DOE headquarters to dispense as needed to manage project risks; and (3) continuing with the current approach of not including contingency funding for cleanup projects in its budget requests—funding cleanup projects at the 50 percent confidence level—and changing its recently established performance goal. We recognize that managing project contingency is an important issue, and in fact note in our report that DOE’s current approach is a likely contributing factor to cost increases and schedule delays for EM’s major cleanup projects. While we did not specifically assess these three options in our report, DOE should continue to study the lessons learned from managing and budgeting contingency and select the option that would provide contingency funds in an expedient manner to better mitigate the impacts of cleanup project changes while minimizing the amount of unused contingency funding left over at the end of the fiscal year. 
Finally, as part of the explanation of its third option for funding project contingency, DOE stated that GAO has agreed to its recently established performance goal—to accomplish at least 80 percent of the scope of work in the near-term baselines with less than a 25 percent cost increase. GAO has not agreed to this goal. As we state in this report, we are concerned with DOE’s new goal given that it is lower than the previous goal for cleanup projects and that DOE may inadvertently be creating an environment in which large increases to project costs become not only more common, but accepted and tolerated. DOE also provided detailed technical comments, which we have incorporated into our report as appropriate. DOE’s comments are reproduced in appendix IV. We are sending copies of the report to interested congressional committees, the Secretary of Energy, and the Director of the Office of Management and Budget. We will make copies available to others on request. In addition, the report will also be available at no charge on the GAO web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff contributing to the report are listed in appendix V. To determine the extent to which the cost, schedule, and scope baseline estimates for the Department of Energy (DOE) Office of Environmental Management’s (EM) cleanup projects have changed and the key reasons for these changes, we identified 10 major cleanup projects at 5 DOE sites. We first identified 9 major cleanup projects with current near-term cost estimates (usually a 5-year period) above $1 billion, the DOE threshold for major cleanup projects. 
In addition, to include those projects that could potentially become major projects because of cost growth, we reduced the threshold to $900 million and identified another project, the Richland nuclear material stabilization and disposition project, which is estimated to cost between $900 million and $1 billion over the near term. We focused on these 10 major cleanup projects because of their significant cost—combined estimated near-term costs of about $19 billion and combined life cycle costs estimated at more than $100 billion—and because they account for almost half of EM’s $5.5 billion fiscal year 2009 budget request. (See app. II for information on these projects.) To identify the factors that may hinder DOE’s ability to effectively manage these cleanup projects, we spoke with DOE project directors and contractor officials and reviewed project management documents for the 10 major cleanup projects we had identified. We conducted site visits to Idaho National Laboratory, Los Alamos National Laboratory, Oak Ridge Reservation, Savannah River, and Hanford, and analyzed project documentation—contracts, policy directives and memoranda, project management plans, DOE’s Office of Inspector General reports, independent reviews, project execution plans, risk management plans, quarterly project reviews, monthly project status reports, earned value management (EVM) surveillance plans, and project control documents prepared to guide and control formal changes to the baselines. For our analysis of projects’ scope, cost, and schedule data, we examined the initial baselines reported as of the most recent contract award or major contract modification (which occurred between 2004 and 2007) and compared these baselines with the updated baselines at the time of our review.
Initial cost baselines are the estimated life cycle costs at the beginning of the new contract period for operation of the DOE site or associated projects or the major contract modification or extension, which typically coincided with the beginning of the projects’ current or previous near-term baseline. We also calculated the percentages of cost increases on the basis of constant 2008 dollars to make them comparable across projects and to show real increases in cost while excluding increases due to inflation. In addition, because EM now is reporting its life cycle cost and schedule estimates as ranges, we included these ranges in the report. However, because the upper ends of these ranges include unfunded contingency and EM does not include funding in its budget requests for this contingency, we report cost increases and schedule delays based on the lower ends of the ranges. We also analyzed contractor performance data to determine whether DOE major cleanup projects are consistently developing and analyzing accurate earned value data according to industry standards and best practices. We gathered and analyzed data produced by the EVM system used for one project at each of the following sites: Idaho National Laboratory, Los Alamos National Laboratory, and Hanford. Often, EVM systems differ depending on how the contractor chooses to implement the EVM approach. Because of these differences, we gathered and analyzed information on each EVM system on a case-by-case basis, according to the structure, reporting format, content, and level of detail, among other things, unique to each EVM system. We also considered the best practices developed by GAO for estimating and managing project costs to analyze the contractor EVM data. 
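The constant-dollar comparison described above can be sketched as follows. The deflator values here are hypothetical placeholders for illustration, not the actual price indexes GAO applied.

```python
# Sketch of comparing baseline costs in constant 2008 dollars, so that
# reported increases exclude inflation. The deflator values below are
# hypothetical placeholders, not the actual indexes used in the report.
DEFLATOR = {2005: 0.93, 2008: 1.00}   # price level relative to 2008

def to_2008_dollars(nominal_cost, year):
    """Convert a nominal cost in a given year to constant 2008 dollars."""
    return nominal_cost / DEFLATOR[year]

def real_increase_pct(initial_nominal, initial_year, updated_nominal, updated_year):
    """Percentage cost increase with inflation removed."""
    base = to_2008_dollars(initial_nominal, initial_year)
    updated = to_2008_dollars(updated_nominal, updated_year)
    return (updated - base) / base * 100

# A hypothetical baseline that grew from $10.0B (2005 dollars) to $12.0B
# (2008 dollars) shows an 11.6 percent real increase, not 20 percent:
print(round(real_increase_pct(10.0e9, 2005, 12.0e9, 2008), 1))  # 11.6
```

The point of the adjustment is simply that nominal comparisons across projects with different baseline years would overstate real cost growth.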
In addition, we spoke with DOE officials from EM and the Office of Engineering and Construction Management in Washington, D.C., and with representatives from LMI Government Consulting, which conducts external independent reviews of the projects for DOE, to obtain their perspective on how these projects are managed. Because we and others previously have expressed concern about the data reliability of a key DOE project management tracking database—the Project Assessment and Reporting System—we did not develop conclusions or findings based on information generated through that system. Instead, we collected information directly from project site offices and the contractors. We provided an interim briefing to the Subcommittee on Energy and Water Development, House Committee on Appropriations, on the status of our work on April 3, 2008. We conducted this performance audit from March 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This project will characterize, treat, and ship approximately 64,000 cubic meters of transuranic waste that will ultimately be stored in the Waste Isolation Pilot Plant in New Mexico. Transuranic waste is radioactive waste containing more than 100 nanocuries of alpha-emitting transuranic isotopes per gram of waste with half-lives greater than 20 years, except for high-level radioactive waste. The transuranic waste that must be handled remotely through protective shielding, because it emits penetrating radiation, will be treated at the Radioactive Waste Management Complex. 
The project also will treat and dispose of a mixed low-level waste backlog and handle on-site low-level waste for disposal at the complex. The project will decontaminate and decommission approximately 500 facilities and remediate 160 sites in the East Tennessee Technology Park. This project includes the park’s two major buildings—the K-25 and K-27 gaseous process buildings—and requires the contractor to remove processing equipment and excess materials stored in the buildings, demolish building structures, and dispose of all associated wastes. The project will stabilize and dispose of enriched uranium materials and current and projected inventories of aluminum-clad spent nuclear fuel in H-Area facilities. It also will stabilize and dispose of highly enriched uranium solutions, miscellaneous fuels, plutonium residues, enriched uranium residues, and other materials DOE identifies that remain from the production of nuclear weapons. The project also will deactivate F-Area and H-Area facilities and dispose of special nuclear materials in the K-Area Complex. The project will remove, treat, and dispose of 49 underground storage tanks holding a total of 37 million gallons of highly contaminated legacy waste. This effort includes pretreating radioactive waste such as sludge and salt waste, vitrifying sludge and high-level waste at the Savannah River Site’s Defense Waste Processing Facility, and treating and disposing of low-level saltstone waste. The project will identify, investigate, and remediate, when necessary, areas with known or suspected chemical and radiological contamination attributable to past Laboratory operations. It will investigate and clean up (as needed) approximately 860 solid waste management units and areas of concern remaining from the original 2,129 sites spread over approximately 39 square miles.
The protection of surface water and groundwater resources that may be affected by these management units and past Laboratory operations also is within the scope of this project. The project will stabilize, package, and ship (to the Savannah River Site) nuclear materials and fuels used for the production of plutonium nitrates, oxides, and metal from 1950 through 1989 and now stored primarily in vaults in several facilities. The project will then clean and demolish the facilities. Solid Waste Stabilization and Disposition, Hanford, Washington (PBS 13C): The project will treat and store spent nuclear fuel, transuranic waste, mixed low-level waste, and low-level waste generated at the Hanford site and other DOE and Department of Defense facilities. It eventually will transfer and ship spent nuclear fuel elements and 1,936 cesium and strontium capsules to the proposed geologic repository in Nevada. The project also will operate, among other things, the (1) Waste Receiving and Processing Facility to process transuranic waste and low-level waste and (2) Central Waste Complex to store low-level, mixed low-level, and transuranic waste pending final disposition. The project will remediate contaminated groundwater. This effort involves characterizing the movement of radionuclides and chemicals (carbon tetrachloride, chromium, technetium-99, strontium, and uranium plumes); assessing the soil and groundwater characterization results; conducting groundwater and risk assessment modeling; and operating groundwater remediation systems, among other related actions. Also known as the River Corridor Closure Project, this project will remediate 761 contaminated waste sites at the Hanford site near Richland, Washington, and decontaminate, decommission, and demolish 379 surplus facilities that are adjacent to the Columbia River. This project also will dispose of material in the Environmental Restoration Disposal Facility.
The project will retrieve, stabilize, treat, and dispose of 53 million gallons of radioactive mixed waste stored in 177 underground tanks at the Hanford site. The project also involves testing and implementing supplemental waste treatment methods; operating the Waste Treatment Plant; providing interim storage of immobilized waste planned for disposal in an offsite repository; receiving and disposing of immobilized low-activity waste on-site in near-surface disposal facilities; and closing tanks and tank farm facilities. In addition to the individual named above, Rudy Chatlos, Jennifer Echard, James Espinoza, Daniel Feehan (Assistant Director), Mike Gallo, Diane Lund, Mehrzad Nadji, Omari Norman, Brian Octeau, Christopher Pacheco, Leslie Pollock, Karen Richey, and Carol Herrnstadt Shulman made key contributions to this report.
The Department of Energy (DOE) spends billions of dollars annually to clean up nuclear wastes at sites that produced nuclear weapons. Cleanup projects decontaminate and demolish buildings, remove and dispose of contaminated soil, treat contaminated groundwater, and stabilize and dispose of solid and liquid radioactive wastes. Ten of these projects meet or nearly meet DOE's definition of major: costs exceeding $1 billion in the near term--usually a 5-year window of the project's total estimated life cycle. GAO was asked to determine the (1) extent to which the cost and schedule for DOE's major cleanup projects have changed and key reasons for changes, and (2) factors that may hinder DOE's ability to effectively manage these projects. GAO met with project directors and reviewed project documents for 10 major cleanup projects: 9 above the near-term $1 billion threshold, and 1 estimated to cost between $900 million and $1 billion over the near term. Nine of the 10 cleanup projects GAO reviewed had life cycle baseline cost increases, from a low of $139 million for one project to a high of nearly $9 billion for another, and life cycle baseline schedule delays from 2 to 15 years. These changes occurred primarily because the baselines we reviewed included schedule assumptions that were not linked to technical or budget realities, and the scope of work included other assumptions that did not prove true. Specifically, the schedules for 8 of the 10 projects were established in response to DOE's 2002 effort to complete cleanup work, which in some cases moved up project completion dates by 15 years or more. For example, to meet the 2012 accelerated completion date for its solid waste disposition project, DOE's Idaho National Laboratory assumed it would process waste at a rate that was more than 50 percent higher than the rate demonstrated at the time it established the baseline. 
When the laboratory could not meet that processing rate, DOE revised its baseline, adding 4 years and about $450 million to the project. Also, most of the 10 projects had cost increases and schedule delays because the previous baselines (1) had not fully foreseen the type and extent of cleanup needed, (2) assumed that construction projects needed to carry out the cleanup work would be completed on time, or (3) had not expected substantial additional work scope. DOE has not effectively used management tools--including independent project baseline reviews, performance information systems, guidance, and performance goals--to help oversee major cleanup projects' scope of work, costs, and schedule. For example, DOE's independent reviews, meant to provide reasonable assurance that a project's work can be completed within the baseline's stated cost and schedule, have not done so for 4 of 10 projects. For one project, the baseline was significantly modified as little as 7 months after it had been revised and validated by the independent review, while other projects have experienced life cycle cost increases of as much as $9 billion and delays of up to 10 years, within 1 to 2 years after these reviews. In addition, although DOE uses several types of reporting methods for overseeing cleanup projects, these methods do not always provide managers with the information needed to effectively oversee the projects or keep Congress informed on the projects' status. For example, sites' proposals for changes to projects' cost and schedule baselines do not always identify possible root causes, and DOE does not systematically analyze the proposals for common problems across its projects. Therefore, DOE may be missing opportunities to improve management across projects. In addition, guidance for key management and oversight functions is spread across many different types of documents and is unclear and contradictory.
As a result, project managers do not consistently implement this guidance, which may lead, for example, to problems in effectively managing risks across projects. Finally, DOE recently changed its goals for "successful" cleanup projects, reducing the amount of work and raising the allowable cost increases against the near-term baseline. DOE has initiated several actions to improve project management, but it is too early to determine whether these efforts will be effective.
Fiscal sustainability presents a national challenge shared by all levels of government. The federal government and state and local governments share in the responsibility of fulfilling important national goals, and these subnational governments rely on the federal government for a significant portion of their revenues. To provide Congress and the public with a broader perspective on our nation’s fiscal outlook, we developed a fiscal model of the state and local sector. This model enables us to simulate fiscal outcomes for the entire state and local government sector in the aggregate for several decades into the future. Our state and local fiscal model projects the level of receipts and expenditures for the sector in future years based on current and historical spending and revenue patterns. This model complements GAO’s long-term fiscal simulations of federal deficits and debt levels under varying policy assumptions. We have published long-term federal fiscal simulations since 1992. We first published the findings from our state and local fiscal model in 2007. Our model shows that the state and local government sector faces growing fiscal challenges. The model includes a measure of fiscal balance for the state and local government sector for each year until 2050. The operating balance net of funds for capital expenditures is a measure of the ability of the sector to cover its current expenditures out of current receipts. The operating balance measure has historically been positive most of the time, ranging from about zero to about 1 percent of gross domestic product (GDP). Thus, the sector usually has been able to cover its current expenses with incoming receipts. Our January 2008 report showed that this measure of fiscal balance was likely to remain within the historical range in the next few years, but would begin to decline thereafter and fall below the historical range within a decade. 
That is, the model suggested the state and local government sector would face increasing fiscal stress in just a few years. We recently updated the model to incorporate current data available as of August 2008. As shown in figure 1, these more recent results show that the sector has begun to head out of balance. These results suggest that the sector is currently in an operating deficit. Our simulations show an operating balance measure well below the historical range and continuing to fall throughout the remainder of the simulation timeframe. Since most state and local governments are required to balance their operating budgets, the declining fiscal conditions shown in our simulations suggest the fiscal pressures the sector faces and are a foreshadowing of the extent to which these governments will need to make substantial policy changes to avoid growing fiscal imbalances. That is, absent policy changes, state and local governments would face an increasing gap between receipts and expenditures in the coming years. One way of measuring the long-term challenges faced by the state and local sector is through a measure known as the “fiscal gap.” The fiscal gap is an estimate of the action needed today and maintained for each and every year to achieve fiscal balance over a certain period. We measured the gap as the amount of spending reduction or tax increase needed to maintain debt as a share of GDP at or below today’s ratio. As shown in figure 2, we calculated that closing the fiscal gap would require action today equal to a 7.6 percent reduction in state and local government current expenditures. Closing the fiscal gap through revenue increases would require action of the same magnitude to increase state and local tax receipts. Growth in health-related costs serves as the primary driver of the fiscal challenges facing the state and local sector over the long term. Medicaid is a key component of their health-related costs.
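The fiscal gap concept described above can be illustrated with a toy simulation: find the constant percentage cut in expenditures that, applied every year, keeps debt as a share of GDP at or below today's ratio. All paths and parameters below are made-up illustrative numbers, not inputs or results from GAO's state and local model.

```python
# Toy illustration of a "fiscal gap" calculation. All parameters are
# invented for illustration; GAO's actual model is far more detailed.

def max_debt_ratio(cut, years=40, debt=0.10, gdp=1.0,
                   receipts=0.12, expenditures=0.125,
                   gdp_growth=0.04, exp_growth=0.05, rec_growth=0.04):
    """Simulate the worst debt/GDP ratio given a constant cut to expenditures."""
    worst = debt / gdp
    for _ in range(years):
        debt += expenditures * (1 - cut) - receipts  # annual deficit adds to debt
        gdp *= 1 + gdp_growth
        expenditures *= 1 + exp_growth               # spending outpaces GDP
        receipts *= 1 + rec_growth                   # receipts roughly track GDP
        worst = max(worst, debt / gdp)
    return worst

def fiscal_gap(target_ratio=0.10, lo=0.0, hi=1.0):
    """Bisect for the smallest constant cut holding debt/GDP at or below target."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if max_debt_ratio(mid) <= target_ratio:
            hi = mid
        else:
            lo = mid
    return hi

print(f"required cut: {fiscal_gap():.1%} of expenditures each year")
```

The structure mirrors the measure in the text: a single action taken today and maintained every year, expressed as a share of current expenditures (or, symmetrically, of receipts).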
CBO’s projections show federal Medicaid grants to states per recipient rising substantially more than GDP per capita in the coming years. Since Medicaid is a federal and state program with federal Medicaid grants based on a matching formula, these estimates indicate that expenditures for Medicaid by state governments will rise quickly as well. We also estimated future expenditures for health insurance for state and local employees and retirees. Specifically, we assumed that the excess cost factor—the growth in these health care costs per capita above GDP per capita—will average 2.0 percentage points per year through 2035 and then begin to decline, reaching 1.0 percent by 2050. The result is a rapidly growing burden from health-related activities in state and local budgets. Our simulations show that other types of state and local government expenditures—such as wages and salaries of state and local workers, pension contributions, and investments in infrastructure—are expected to grow slightly less than GDP. At the same time, most revenue growth is expected to be approximately flat as a percentage of GDP. The projected rise in health- related costs is the root of the long-term fiscal difficulties these simulations suggest will occur. Figure 3 shows our simulations for expenditure growth for state and local government health-related and other expenditures. On the receipt side, our model suggests that most of these tax receipts will show modest growth in the future—and some are projected to experience a modest decline—relative to GDP. We found that state personal income taxes show a small rise relative to GDP in coming years. This likely reflects that some state governments have a small degree of progressivity in their income tax structures. Sales taxes of the sector are expected to experience a slight decline as a percentage of GDP in the coming years, reflecting trends in the sector’s tax base. 
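The assumed excess cost factor path stated above can be written down directly. The 2.0 percentage point level through 2035 and the 1.0 point endpoint in 2050 come from the report's stated assumption; the linear shape of the decline between those years is our simplification for illustration.

```python
# Sketch of the assumed "excess cost factor": growth in health care costs
# per capita above GDP per capita growth, in percentage points. Levels and
# endpoints come from the report; the linear decline is our assumption.

def excess_cost_factor(year):
    if year <= 2035:
        return 2.0                     # 2.0 points per year through 2035
    if year >= 2050:
        return 1.0                     # reaching 1.0 point by 2050
    # assumed linear decline between 2035 and 2050
    return 2.0 - (year - 2035) * (2.0 - 1.0) / (2050 - 2035)

print(excess_cost_factor(2020))  # 2.0
print(excess_cost_factor(2050))  # 1.0
```

Health cost growth per capita in any year would then be GDP per capita growth plus this factor, which is why health-related spending steadily outgrows the rest of the budget in the simulations.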
While historical data indicate that property taxes—which are mostly levied by local governments—could rise slightly as a share of GDP in the future, recent events in the housing market suggest that the long-term outlook for property tax revenue could also shift downward. These differential tax growth projections indicate that any given jurisdiction’s tax revenue prospects are uniquely tied to the composition of taxes it imposes. The only source of revenue expected to grow rapidly under current policy is federal grants to state governments for Medicaid. That is, we assume that current policy remains in place and the shares of Medicaid expenditures borne by the federal government and the states remain unchanged. Since Medicaid is a matching formula grant program, the projected escalation in federal Medicaid grants simply reflects expected increased Medicaid expenditures that will be shared by state governments. These long-term simulations do not attempt to anticipate how recent actions to stabilize the financial system and economy will be incorporated into the federal budget estimates in January 2009. The outlook presented by our state and local model is exacerbated by current economic conditions. During economic downturns, states can experience difficulties financing programs such as Medicaid. Economic downturns result in rising unemployment, which can lead to increases in the number of individuals who are eligible for Medicaid coverage, and in declining tax revenues, which can lead to less available revenue with which to fund coverage of additional enrollees. For example, during the most recent period of economic downturn prior to 2008, Medicaid enrollment rose 8.6 percent between 2001 and 2002, which was largely attributed to states’ increases in unemployment. During this same time period, state tax revenues fell 7.5 percent.
According to the Kaiser Commission on Medicaid and the Uninsured, in 2008, most states have made policy changes aimed at controlling Medicaid costs. Recognizing the complex combination of factors affecting states during economic downturns—increased unemployment, declining state revenues, and increased downturn-related Medicaid costs—this Committee and several others asked us to assist them as they considered a legislative response that would help states cope with Medicaid cost increases. In response to this request, our 2006 report on Medicaid and economic downturns explored the design considerations and possible effects of targeting supplemental assistance to states when they are most affected by a downturn. We constructed a simulation model that adjusts the amount of funding a state could receive on the basis of each state’s percentage increase in unemployment and per person spending on Medicaid services. Such a supplemental assistance strategy would leave the existing Medicaid formula unchanged and add a new, separate assistance formula that would operate only during times of economic downturn and use variables and a distribution mechanism that differ from those used for calculating matching rates. This concept is embodied in the health reform plan released by Chairman Baucus last week. Using data from the past three recessions, we simulated the provision of such targeted supplemental assistance to states. To determine the amount of supplemental federal assistance needed to help states address increased Medicaid expenditures during a downturn, we relied on research that estimated a relationship between changes in unemployment and changes in Medicaid spending. Our model incorporated a retrospective assessment, measuring the increase in each state’s unemployment rate for a particular quarter relative to the same quarter of the previous year. 
Our simulation included an economic trigger that turned on when 23 or more states had an increase in the unemployment rate of 10 percent or more compared to the unemployment rate that existed for the same quarter 1 year earlier (such as a given state’s unemployment rate increasing from 5 percent to 5.5 percent). We chose these two threshold values—23 or more states and increased unemployment of 10 percent or more—to work in tandem to ensure that the national economy had entered a downturn and that the majority of states were not yet in recovery from the downturn. These parameters were based on our quantitative analysis of prior recessions. As shown in figure 4, for the 1990-1991 downturn, 6 quarters of assistance would have been provided beginning with the third quarter of 1991 and ending after the fourth quarter of 1992. Analysis of recent unemployment data indicates that such a strategy would already be triggered based on changes in unemployment for 2007 and 2008. In other words, current data confirm the economic pressures currently facing the states. Considerations involved in such a strategy include: (1) timing assistance so that it is delivered as soon as it is needed; (2) targeting assistance according to the extent of each state’s downturn; (3) temporarily increasing federal funding so that it turns off when states’ economic circumstances sufficiently improve; and (4) triggering so the starting and ending points of assistance respond to indicators of states’ economic distress. Any potential legislative response would need to be considered within the context of broader health care and fiscal challenges—including continually rising health care costs, a growing elderly population, and Medicare and Medicaid’s increasing share of the federal budget. Additional criteria could be established to accomplish other policy objectives, such as controlling federal spending by limiting the number of quarters of payments or stopping payments after predetermined spending caps are reached. 
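The trigger described above amounts to a simple counting rule, sketched below. The state data are hypothetical; the actual model applies the rule to historical quarterly unemployment data.

```python
def assistance_triggered(current_rates, year_ago_rates,
                         min_states=23, min_relative_increase=0.10):
    # Count states whose unemployment rate rose by at least 10 percent relative
    # to the same quarter one year earlier (e.g., 5.0 percent -> 5.5 percent).
    qualifying = sum(
        1
        for state, rate in current_rates.items()
        if year_ago_rates[state] > 0
        and (rate - year_ago_rates[state]) / year_ago_rates[state]
        >= min_relative_increase
    )
    # Supplemental assistance turns on only when at least 23 states qualify.
    return qualifying >= min_states

# Hypothetical data: 23 states rise from 5.0 to 5.5 percent; the rest are flat.
year_ago = {f"state_{i}": 5.0 for i in range(50)}
current = {f"state_{i}": 5.5 if i < 23 else 5.0 for i in range(50)}
print(assistance_triggered(current, year_ago))  # → True
```

The two thresholds work in tandem: either condition alone (a few states in severe distress, or many states with trivial increases) would not turn assistance on.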
The federal government depends on states and localities to provide critical services including health care for low-income populations. States and localities depend on the federal government to help fund these services. Medicaid, which accounts for the largest share of federal grant funding and a large and growing share of state budgets, is a critical component of this intergovernmental partnership. The long-term structural fiscal challenges facing the state and local sector further complicate the provision of Medicaid services. These challenges are exacerbated during periods of economic downturn when increased unemployment leads to increased eligibility for the Medicaid program. The current economic downturn presents additional challenges as states struggle to meet the needs of eligible residents in the midst of a credit crisis. Our work on the long-term fiscal outlook for state and local governments and strategies for providing Medicaid-related fiscal assistance is intended to offer the Committee a useful starting point for considering strategic evidence-based approaches to addressing these daunting intergovernmental fiscal issues. For information about this statement for the record, please contact Stanley J. Czerwinski, Director, Strategic Issues, at (202) 512-6806 or czerwinskis@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony and related products include: Kathryn G. Allen, Director, Quality and Continuous Improvement; Thomas J. McCool, Director, Center for Economics; Amy Abramowitz, Meghana Acharya, Romonda McKinney Bumpus, Robert Dinkelmeyer, Greg Dybalski, Nancy Fasciano, Jerry Fastrup, Carol Henn, Richard Krashevski, Summer Lingard, James McTigue, Donna Miller, Elizabeth T. Morrison, Michelle Sager, Michael Springer, Jeremy Schwartz, Melissa Wolf, and Carolyn L. Yocom. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO was asked to provide its views on projected trends in health care costs and their effect on the long-term outlook for state and local governments in the context of the current economic environment. This statement addresses three key points: (1) the state and local government sector's long-term fiscal challenges; (2) rapidly rising health care costs, which drive the sector's long-term fiscal difficulties; and (3) the considerations involved in targeting supplemental funds to states through the Medicaid program during economic downturns. To provide Congress and the public with a broader perspective on our nation's fiscal outlook, GAO previously developed a fiscal model of the state and local sector. This model enables GAO to simulate fiscal outcomes for the sector in the aggregate for several decades into the future. GAO first published the findings from the state and local fiscal model in 2007. This statement includes August 2008 data to update the simulations. This Committee and others also asked GAO to analyze strategies to help states address increased Medicaid expenditures during economic downturns. GAO simulated the provision of such supplemental assistance to states. As we previously reported, the simulation model adjusts the amount of funding states would receive based on changes in unemployment and spending on Medicaid services. Rapidly rising health care costs are not simply a federal budget problem. Growth in health-related spending also drives the fiscal challenges facing state and local governments. The magnitude of these challenges presents long-term sustainability challenges for all levels of government. The current financial sector turmoil and broader economic conditions add to fiscal and budgetary challenges for these governments as they attempt to remain in balance. States and localities are facing increased demand for services during a period of declining revenues and reduced access to capital. 
In the midst of these challenges, the federal government continues to rely on this sector for delivery of services such as Medicaid, the joint federal-state health care financing program for certain categories of low-income individuals. Our model shows that in the aggregate the state and local government sector faces growing fiscal challenges. Incorporation of August 2008 data shows that the position of the sector has worsened since our January 2008 report. The long-term outlook presented by our state and local model is exacerbated by current economic conditions. During economic downturns, states can experience difficulties financing programs such as Medicaid. Downturns result in rising unemployment, which can increase the number of individuals eligible for Medicaid, and declining tax revenues, which can decrease revenue available to fund coverage of additional enrollees. GAO's simulation model to help states respond to these circumstances assumes that the existing Medicaid formula would remain unchanged and that a new, separate assistance formula would be added that would operate only during times of economic downturn. Considerations involved in such a strategy could include: (1) timing assistance so that it is delivered as soon as it is needed, (2) targeting assistance according to the extent of each state's downturn, (3) temporarily increasing federal funding so that it turns off when states' economic circumstances sufficiently improve, and (4) triggering so the starting and ending points of assistance respond to indicators of economic distress.
The federal real property environment has many stakeholders and involves a vast and diverse portfolio of assets that are used for a wide variety of missions. Real property is generally defined as facilities, land, and anything constructed on or attached to land. According to FRPP data, the federal government owned and leased 1.2 million assets with a replacement value of $1.5 trillion in fiscal year 2006. The Department of Defense, USPS, GSA, and the Department of Veterans Affairs hold the majority of the owned and leased facility space. The makeup of the federal government’s facilities reflects the diversity of agencies’ missions and includes office buildings, prisons, post offices, courthouses, laboratories, and border stations. GSA is authorized by law to acquire, manage, utilize, and dispose of real property for most federal agencies. These authorities are contained in title 40 of the U.S. Code, and GSA is responsible for its implementation. Agencies are subject to title 40 authorities unless they have their own specific real estate authority and are exempted from title 40. Under title 40, GSA is authorized to enter into lease agreements for up to 20 years that the Administrator of GSA considers to be in the interest of the federal government and necessary to accommodate a federal agency. GSA uses this authority to lease space on behalf of many federal government agencies. In 1996, GSA began a program called “Can’t Beat GSA Leasing” that offered federal agencies the choice of using GSA as their leasing agent or assuming responsibility for their own leasing. Under this program, GSA delegated leasing authority for general purpose space to the heads of all federal agencies. GSA’s original delegation consisted of six conditions, which included the requirements that federal agencies acquire and utilize leased space in accordance with all applicable laws and regulations. 
In December of 2002, GSA revised its regulations to specifically state that all agencies must follow the budget scorekeeping guidelines and OMB’s requirements for leases, capital leases, and lease purchases identified in appendixes A and B of OMB Circular A-11. Federal agencies also may have their own independent statutory authority related to real property. In November 2007, GSA amended its delegations of leasing authority to acquire general purpose office space and special purpose office space. GSA said its basis for amending these delegations of authority was to increase oversight and to facilitate compliance with all applicable laws and regulations governing the acquisition of real property, since several recent audits of its delegation program found instances in which agencies had failed to meet the requirements of their leasing delegation. USPS, which is an independent establishment in the executive branch, is authorized to sell, lease, or dispose of property under its general powers and is exempt from most federal laws dealing with real property and contracting. Since 2003, we have reported that federal real property is a high-risk area due to excess and deteriorating property, reliance on costly leasing, unreliable data, and security challenges. Specifically, problems are exacerbated by underlying obstacles that include competing stakeholder interests, legal and budgetary limitations, and the need for improved capital planning. For example, agencies cited local interests as barriers to disposing of excess property, and agencies’ limited ability to pursue ownership leads them to lease property that may be more cost-effective to own over time. In February of 2004, the President signed Executive Order 13327 and added real property management to the President’s Management Agenda, which scores agencies on their progress in meeting performance targets. The order applies to 24 executive branch departments and agencies, but not to USPS. 
Agencies under that executive order have, among other things, designated senior real property officers, established asset management plans, standardized real property data reporting, and adopted various performance measures to track progress. The administration’s establishment of FRPC also supports reform efforts. Furthermore, OMB staff said that the administration intends to work with Congress to provide agencies with tools to more effectively manage their real property assets. To meet the order’s requirement for standardized real property data reporting, FRPC worked with GSA to develop FRPP. The first governmentwide reporting of inventory data for FRPP took place in December of 2005, and selected data were included in the fiscal year 2005 Federal Real Property Report, published by GSA on behalf of FRPC in June of 2006. In our April 2007 update on real property high-risk issues, we reported that the administration and major real-property-holding agencies had made progress toward strategically managing federal real property and addressing some long-standing problems. Federal agencies rely extensively on leasing, occupying about 398 million square feet of leased building space domestically in fiscal year 2006, according to data from FRPP. According to fiscal year 2006 FRPP information, GSA and USPS hold the majority of the federal government’s leased building space, totaling about 270 million square feet, or about 67 percent of the leased inventory of space within the United States and U.S. territories. Agriculture holds 4 percent of leased space. In fiscal year 2006, GSA, which acts as a leasing agent for other agencies, had 6,750 leases and provided slightly less than 169 million square feet of leased building space to nearly every department within the federal government. Table 1 shows the amount of domestically leased space by agency, according to FRPP. 
According to FRPP data, over half of the space, in terms of square footage, is office space, with a mix of other uses including warehouses, family housing, and schools. The single largest category of federally leased space—office-related purposes—accounts for approximately 201 million square feet, or 51 percent of all domestic leased building space. This category includes office space and military headquarters space. The second largest category of leased space, “all other,” includes space for post offices and laboratories, as well as other buildings that cannot be classified in other categories. About 104 million square feet of leased space were reported in this category. In addition, agencies reported leasing about 29 million square feet of building space for warehouses and about 22 million square feet for family housing—a category that includes, among other things, public housing and military personnel housing. Figure 1 shows domestic leased square footage by predominant usage. As the federal agency that leases the most space, GSA leases space for a variety of purposes, but about 152 million square feet, or 90 percent of its leased portfolio, is leased exclusively for offices. GSA also leases about 15 million square feet, or about 9 percent of its overall leased portfolio, for warehouses for agencies. Additionally, GSA leases building space for such purposes as laboratories, family housing, and other miscellaneous uses. These uses account for about 1 percent of its leased space. Agriculture had more than 3,700 leases totaling nearly 17 million square feet of building space in fiscal year 2006. About 92 percent of Agriculture’s leased space, or slightly more than 15 million square feet, is for offices. Agriculture also has a little over a million square feet leased under the all other, warehouse, and service categories. According to FRPP data, USPS has roughly 28,100 leases, which are categorized as all other. 
According to OMB staff, because of a USPS data-coding error, the square footage for USPS assets was included in the “all other” category and accounts for about 95 percent of the square footage reported for this category. USPS told us that the majority of its leased buildings are used primarily for customer service post offices, and a portion of its building space is used for retail facilities and carrier annexes. We did not develop data on the overall yearly cost of leased space to the federal government. OMB staff said that variation in costs included in lease payments among agencies would create data consistency problems. As an indicator of asset value, FRPP tracks replacement value—or the cost to replace leased space with owned space—and the estimated replacement value of the federal government’s existing domestic leases totals $48 billion. In April of 2007, we reported that although agencies have made progress in collecting and reporting standardized real property data for FRPP, data reliability is still a challenge at some of the agencies, and agencies lack a standard framework for data validation. For this review, we assessed the fiscal year 2006 FRPP leasing data and found them to be generally reliable for the purpose of profiling the leased inventory. However, we noted some data quality issues that would be cause for concern if FRPP were used for more than describing the inventory, such as strategic decision making. For example, USPS categorized its 28,108 leased assets as “mission dependent not critical” and did not include annual operating costs for each leased asset. The categorization of USPS’s leased facilities as “mission dependent not critical” and “not mission critical” could be questioned, since USPS’s facilities serve as the main channel for providing mail delivery service to all people residing in the United States. 
In our April 2007 report, we recommended that OMB develop a framework that agencies can use to better ensure the validity and usefulness of key real property data in FRPP. OMB concurred with this recommendation and has required agency-specific validation and verification plans and, according to OMB officials, has developed an FRPP validation protocol to certify agency data. According to OMB staff, each score card agency has developed and implemented an agency-specific data validation and verification plan. FRPP is a relatively new inventory, a result of implementing Executive Order 13327, and therefore governmentwide data on leasing trends were not available. However, GSA, whose tenants represent a cross-section of federal agencies, maintains historical data that are useful in examining trends. The most striking trend, GSA predicts, is that in 2008 GSA will for the first time lease more space than it owns. From fiscal year 2003 through fiscal year 2006, GSA increased its leased space from about 160 million square feet to about 172 million square feet while its owned space decreased from about 180 million square feet to about 174 million square feet. Besides tracking total leased square footage, as all federal agencies are required to do for FRPP, GSA captures, analyzes, and evaluates a number of other leasing trends, including trends in lease extensions, vacancy rates in leased facilities, negative net operating income leases, and GSA lease rates compared with market lease rates. GSA conducts trend analysis using the data from prior years that it retains, or annually archives. GSA officials stated that annually archiving data is the key to establishing baselines and conducting trend analysis because it aggregates data for similar fields over time, which allows for analyses of comparable data each year. 
According to GSA officials, these analyses can then be used to better anticipate and react to market changes, helping to ensure the most efficient management of the lease portfolio. GSA officials said they can use annually archived data to isolate key trends, examine the causes of these trends, and identify potential solutions. According to GSA officials, they are broadening the use of outside published data to forecast market conditions and rent for leasing activity. GSA officials told us that trend analysis with annually archived data can identify “low-value” leases—those for which the government is currently paying above-market rates. Such analysis also can identify leases in markets where rental rates are forecasted to grow quickly and the government risks paying higher lease rates in the future. In addition, trend analysis of market data comparisons can indicate whether a lease extended without full competition is more expensive than a lease fully competed in the free market. GSA estimated that approximately 65 percent of its expiring leases were extended at the request of tenant agencies. GSA officials said that leases that are extended could potentially place GSA at risk, especially in areas where the agency may be overpaying because of changing market rates. According to GSA officials, information on vacancy rates is crucial for asset managers to effectively manage and minimize vacancies. In the 10 leases we examined, decisions to lease space that would be more cost-effective to own were driven by the limited availability of capital for ownership and other considerations, such as operational efficiency and security. To examine the cost-effectiveness of leasing decisions, we analyzed economic analyses—30-year net present value calculations—that GSA provided for seven building leases and that USPS provided for three leases. 
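The 30-year net present value comparisons referenced here follow a standard lease-versus-own calculation, sketched below with purely hypothetical figures. The actual GSA and USPS analyses include additional cost elements (site acquisition, alterations, residual value) and agency-specific discount rates, so this is a minimal illustration of the method, not a reproduction of it.

```python
def present_value(annual_cost, years, rate):
    # Present value of a constant annual cost paid at the end of each year.
    return sum(annual_cost / (1.0 + rate) ** t for t in range(1, years + 1))

def lease_premium(annual_rent, construction_cost, annual_ownership_cost,
                  rate=0.05, years=30):
    # Positive result: leasing is estimated to be costlier than construction
    # over the horizon. Discount rate and cost structure are hypothetical.
    lease_npv = present_value(annual_rent, years, rate)
    own_npv = construction_cost + present_value(annual_ownership_cost, years, rate)
    return lease_npv - own_npv

# Illustrative figures in millions of dollars.
premium = lease_premium(annual_rent=12.0, construction_cost=100.0,
                        annual_ownership_cost=4.0)
print(f"leasing costs about ${premium:.0f} million more over 30 years")
```

The comparison captures the core tension the report describes: ownership concentrates cost up front, while leasing spreads a higher total cost across the holding period.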
We found that leasing was more costly over the long term than construction for four of the seven GSA leases, and these four GSA leases were estimated to be $83.3 million more costly over 30 years than construction. Table 2 shows the results of our analyses for the seven selected GSA lease acquisitions, the comparative cost advantages and disadvantages of construction versus leasing for these acquisitions, and the reasons cited by agency officials for leasing. USPS provided similar data for its leases but requested that we not provide them in this report because of a Postal Regulatory Commission ruling that such data should not be disclosed to the public. For FBI’s field office in Chicago, Illinois, the 30-year net present value cost of meeting FBI’s long-term space need with an operating lease was estimated at $40 million more than construction. In this instance, GSA officials stated, limited availability of up-front capital and security considerations prevented them from pursuing ownership at that time. According to GSA officials, before deciding to enter into the lease in 2006 for the new field office, which has a 14-year term, GSA pursued other options for its tenant agency, including repair and alteration and building construction. Ultimately, these options proved unfeasible because, according to GSA officials, massive repairs were needed at one of the proposed facilities and existing facilities could not meet new building security requirements established after the September 11, 2001, terrorist attacks. For the FBI field office in Tampa, Florida, the 30-year net present value cost of meeting FBI’s long-term space need with an operating lease was estimated in 2005 at about $7 million more than construction. According to a GSA official, building construction was never considered as a viable option because of the perceived lack of necessary up-front capital. 
A GSA official stated that FBI had outgrown existing space in several leased facilities throughout the city and cited enhanced security requirements, anticipated expansion, and the immediacy of FBI’s space need as major reasons for consolidating existing leases into one central location. The term of the lease is 15 years. Primarily because of the amount of square footage required, virtually all downtown Tampa locations were eliminated. GSA is using operating leases to help FBI meet long-term needs for field offices. For instance, GSA is using operating leases in 40 of 41 new FBI field office locations across the country. Figure 2 shows the FBI field offices in Chicago and Tampa. GSA entered into 10-year operating leases in Chicago for the Bureau of Alcohol, Tobacco, and Firearms and the Secret Service in the same building; these leases were estimated, on a 30-year net present value basis, to cost $33 million more than construction. GSA officials said that the prior leases for both agencies, which also were housed previously in the same facility, expired and the original lessor opted not to retain the agencies as tenants for various reasons, including the need for enhanced security requirements. GSA did not have any owned federal space available in its existing inventory to meet the specialized security needs of both agencies, so finding another location that met their security needs was a major factor in choosing the new space. For GSA, limited funding for construction is exacerbated by federal budget scorekeeping rules, which require, for ownership and capital leases, that the full cost of the government’s commitment be recorded up front in the budget. In contrast, for operating leases, only the amount needed to cover yearly lease payments plus cancellation costs is required to be recorded in the budget, thereby making operating leases “look cheaper” in any given year. 
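The scorekeeping asymmetry described above can be illustrated with a minimal sketch. The payment amounts and cancellation cost are hypothetical, and actual scorekeeping under OMB Circular A-11 involves more detailed tests; the point is only the contrast in what each acquisition method records in a given budget year.

```python
def first_year_budget_authority(acquisition_type, annual_payment=10.0,
                                term_years=10, cancellation_cost=2.0):
    # Hypothetical figures (millions of dollars). Per the rules described in
    # the text: ownership and capital leases record the full cost of the
    # government's commitment up front, while operating leases record only a
    # year's payment plus cancellation costs.
    if acquisition_type in ("ownership", "capital_lease"):
        return annual_payment * term_years
    if acquisition_type == "operating_lease":
        return annual_payment + cancellation_cost
    raise ValueError(f"unknown acquisition type: {acquisition_type}")

print(first_year_budget_authority("capital_lease"))    # → 100.0
print(first_year_budget_authority("operating_lease"))  # → 12.0
```

Even when leasing costs more in total over the lease term, the operating lease's far smaller first-year score makes it the easier option to fund, which is the incentive problem the report identifies.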
This is a long-standing problem, and overreliance on leasing is one of the major reasons we designated federal real property management as a high-risk area. The budget scorekeeping issue and efforts to resolve it will be discussed at length later in this report. GSA real property officials in the regions we visited told us that for most space requests they fulfill, constraints on capital funding limited their ability to pursue ownership as a realistic option. While several of the GSA officials we spoke with noted that construction funds are available for capital projects in their region, these dollars tend to be designated for the construction of agency headquarters, courthouses, and border stations and typically are not used for federal office space, such as that needed to fulfill FBI’s field office needs. Specifically, for fiscal years 2006 through 2008, the President requested funding for GSA primarily for courthouses or border stations, although for all 3 years GSA requested funds to cover other construction and repair and alteration projects. According to GSA real property officials, these types of constraints on construction funding for federal office space often limit their ability to pursue ownership for general purpose office space. For this review, GSA developed and provided a 30-year net present value analysis of leasing versus ownership for the building leases we selected. GSA no longer provides this type of analysis to Congress when proposing a lease. According to GSA officials, until the submission of the fiscal year 1995 capital investment and leasing program, GSA included the results of a 30-year net present value analysis of housing alternatives in lease prospectuses. Now, according to GSA officials, GSA instead performs a scoring analysis for all lease prospectuses. The scoring analysis, GSA officials said, includes an estimate of construction costs. 
Other estimated costs include those for the proposed asset’s design, management, and inspection and site acquisition. According to GSA, these combined estimates provide a total asset value based on cost that forms the basis for determining whether the proposal scores as a capital or an operating lease. GSA officials said GSA’s authorizing committees requested that they provide this information with lease prospectuses in lieu of the 30-year net present value analysis. Although USPS is self-financed and not dependent on appropriations or subject to the scorekeeping rules that apply to other federal agencies, USPS officials said that limited up-front capital to fund construction projects is also a hindrance to ownership. USPS’s leasing guidance states that a lease-versus-purchase analysis is to be conducted when new construction leasing or the purchase of a building is the recommended building acquisition alternative. These lease-versus-purchase analyses consider the net present value and the return on investment of acquisition alternatives. The lease-versus-purchase analysis at USPS includes the purchase of a newly constructed building, but also can include the exercise of options available in certain lease contracts to purchase a currently leased building. USPS officials at the three facilities service offices we visited said major reductions in capital funding dictated many of their decisions about leasing and ownership. In particular, USPS headquarters officials stated that they would prefer to own all of the larger facilities, such as mail-processing facilities, but that capital constraints can prevent ownership of all such facilities. 
While GSA and USPS officials emphasized that the limited availability of capital was a major impediment to building ownership, they also cited operational requirements—such as changes to an agency’s mission, immediate space needs, security requirements, or a desire for flexibility—as drivers of the decision to lease rather than own space. Other factors, such as shorter-term or smaller space needs, also may influence agencies’ decisions to lease space. GSA and USPS officials cited agency mission as a reason they chose to pursue leasing rather than building ownership for certain projects. For instance, USPS officials said they strive to locate postal service buildings in areas that will optimize the efficiency of mail delivery. Thus, when deciding between leasing and constructing a building, they may consider operational factors such as the size of a facility, traffic routes, access to parking, and convenience for the customer. USPS officials noted, particularly for customer service post offices, that leasing in existing retail space, rather than constructing a new facility, is usually the optimal way to reach customers. However, as we have previously reported, significant challenges remain related to USPS’s planning and implementation of its infrastructure realignment, which is designed to reduce excess capacity as well as reflect changes in operations. Further challenges persist related to USPS’s identification and disposal of excess property. We previously have recommended that USPS develop a facilities plan that includes a strategy for how USPS intends to rationalize the postal facilities network and remove excess processing capacity and space from the network. In some instances, officials told us that operational requirements take precedence over economic considerations. For instance, automation at USPS has affected operational planning requirements for future facilities by changing the expected need for square footage. 
GSA officials also cited changes in a tenant agency’s mission as dictating an immediate need for space. For instance, expanding mission requirements for the Department of Homeland Security created additional space requirements. Faced with a changing mission and relatively immediate space needs, agencies may opt for lease construction rather than federal construction, GSA officials told us, because lease construction is perceived to take less time than federal construction. Under lease construction, an agency works with the private sector to design and build a building that the government leases to meet the agency’s mission needs. The private developer finances the construction of the building and agrees to lease the finished building to the agency. This arrangement allows an agency to obtain a new building suited to its needs without having to pay the up-front costs associated with federal construction. GSA officials said that it is highly challenging to determine when a tenant agency’s mission may change and what space needs may subsequently emerge. Both GSA and USPS cited the need for flexibility in their space assignments as a reason for leasing rather than owning space. Certain agencies that GSA obtains space for, such as the Social Security Administration, try to locate their facilities close to their customers. As demographic shifts occur in certain areas of the country, customers can potentially move to new areas. GSA real property officials stated that leasing rather than ownership is frequently used to give these agencies the flexibility to relocate closer to their customers, if necessary. Postal officials also cited flexibility as a reason for leasing retail post office space. According to Postal Service officials in the Southwest Facilities Service Office, recent population increases in Texas and Northwestern Arkansas may expand the need for retail postal facilities in these areas. 
Because the majority of the space USPS obtains in the region is small and subject to demographic shifts, leasing provides flexibility to meet changing operational needs. Due to the expansion of security requirements in recent years, such as those for blast-resistant building exteriors and the need for greater setbacks, GSA officials said that agency requirements have become more stringent and complex. In some circumstances, GSA officials said, these security needs cannot be met in existing federal buildings, causing agencies to pursue lease construction. When acquiring space for the FBI field office in Chicago, GSA first pursued the option of repair and alteration and then building purchase. GSA officials said that after reviewing the federal inventory and investigating the site with the most potential, GSA determined that the repairs needed to make existing federal building space comply with post-September 11 security requirements would be cost prohibitive. Given the costliness of the repair and alteration method and the limited availability of capital for construction, GSA officials selected lease construction as the method to acquire a building for FBI. Similarly, when looking for new space to consolidate FBI operations in Tampa, GSA real estate specialists told us they eliminated downtown Tampa—where existing federal buildings were located—as a site because of the difficulty of locating space with the required 100-foot building setbacks. GSA did find, however, a private developer for a lease construction project away from the downtown area on the Western Shore of Tampa. An additional factor that may cause agencies to lease space is a customer’s temporary or short-term space needs. For instance, GSA officials said that over 200 GSA-owned and leased buildings were damaged by Hurricane Katrina, necessitating the relocation of 2,600 federal employees from 28 federal agencies, many of which were GSA tenant agencies. 
To meet this emergency need, GSA expanded its use of leases to house agencies in temporary space to fulfill a short-term need. GSA regional officials told us they still have a number of Hurricane Katrina-related leases in their portfolio. Agencies also choose leasing over ownership because it is a practical way to address issues such as a limited amount of available federal space in a geographic area or a need for a small amount of square footage. GSA officials stated that in certain rural locations, construction would not be economically advantageous. The amount of square footage needed also may dictate whether an agency chooses to lease rather than own space. For instance, more than 80 percent of GSA’s leases are for 20,000 square feet or less. When agencies require less than 20,000 square feet, GSA officials stated, leasing is usually cost-competitive with ownership, and federal construction is an unlikely option. Additionally, USPS officials told us that many of their assignments are for 3,000 square feet or less and that for assignments of this size, construction is not often practical.

We have reported on the leasing and budget scorekeeping problem for nearly 20 years:

“Efforts to increase ownership are … hampered by a budgetary bias against capital investment. GSA must record in 1 year’s budget the total cost of a newly constructed or purchased building, but is required to record only 1 year’s lease payments for a leased building. As a result, leasing appears to be less costly for the current year despite its greater long-term costs” (December 1989, GAO/GGD-90-11).

“...consideration be given to revisiting the scoring of operating leases. In principle, those leases that are perceived by all sides as long-term federal commitments ought to be scored in a way that is comparable to direct federal ownership. Applying the principle of full recognition of the long-term costs to all instruments is more likely to promote the emergence of the most cost-effective alternative” (October 1993, GAO/T-AIMD-GGD-94-43). 
“…GSA’s economic analysis for 55 … leases it proposed showed that federal construction would have been a less costly alternative and would have saved approximately $700 million (present value) over a 30-year period” (July 1995, GAO/T-GGD-95-149).

“…a GSA present value cost analysis estimated that the recently leased U.S. Patent and Trademark Office complex currently being constructed in Alexandria, Virginia, by a private company, cost taxpayers about $48 million more to lease over the 20-year lease period than it would have cost to purchase it” (April 2003, GAO-03-609T).

“Federal real property is a high-risk area due to excess and deteriorating property, reliance on costly leasing, unreliable data, and security challenges … Energy, Interior, GSA, State, and VA reported an increased reliance on leasing to meet space needs” (April 2007, GAO-07-349).

A complete listing of these reports appears at the end of this report in the Related GAO Products section.

It is important to recognize in any discussion about the budget scorekeeping rules that their intended purpose is rooted in sound budget principles and transparency. For Congress to efficiently allocate resources, it needs to know and vote on the full cost of any program it approves at the time the funding decision is made. Hence, the scorekeeping rules require that budget authority for the cost of purchasing an asset—whether through construction or purchase—be recorded in the budget when it can be controlled, that is, up front, so that decision makers have the information needed and an incentive to take the full cost of their decisions into account. Under current budget scorekeeping rules, the budget records the full cost of the government’s commitment in the year the commitment is made. As a result, for operating leases, only the amount needed to cover the first year’s lease payments plus cancellation costs needs to be recorded, thereby disguising the fact that over time, leasing will cost more than ownership. 
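The budgetary asymmetry described above can be made concrete with a small sketch. The figures below are hypothetical, chosen only to illustrate how the rules record an operating lease against an equivalent purchase; they are not drawn from any actual GSA or OMB data.

```python
# Illustrative sketch with hypothetical figures, not actual budget data.
# Shows how the scorekeeping rules record budget authority for the same
# long-term space need under ownership versus an operating lease.

def scored_authority_ownership(full_cost):
    # Construction or purchase: the full cost is recorded up front,
    # in the year the commitment is made.
    return full_cost

def scored_authority_operating_lease(annual_payment, cancellation_costs):
    # Operating lease: only the first year's payment plus cancellation
    # costs need to be recorded.
    return annual_payment + cancellation_costs

OWNERSHIP_COST = 100_000_000  # hypothetical construction cost
ANNUAL_RENT = 8_000_000       # hypothetical annual lease payment
CANCELLATION = 2_000_000      # hypothetical cancellation costs
TERM_YEARS = 30

print(scored_authority_ownership(OWNERSHIP_COST))                   # 100000000
print(scored_authority_operating_lease(ANNUAL_RENT, CANCELLATION))  # 10000000
print(ANNUAL_RENT * TERM_YEARS)  # 240000000 nominal total over the term
```

In this hypothetical, the lease scores at a tenth of the ownership cost in year one even though its cumulative payments far exceed the purchase price, which is what makes leasing appear less costly in the current year.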
In addition to encouraging the use of operating leases, the budget scorekeeping rules have a clear impact on public-private partnerships. One type of partnership that agencies such as the Department of Veterans Affairs and the Department of Defense have specific statutory authority to enter into is called an enhanced use lease agreement. Such an agreement is a lease agreement for property under an agency’s control or custody that the agency can (1) enter into with a public or private entity and (2) receive as payment under the lease either cash or other consideration, such as repairs of the property. According to the Congressional Budget Office (CBO), in a public-private partnership, a business entity is created by public and private parties for a single, specified purpose with activities that are predetermined by the contracts and other arrangements between the parties involved. The scoring of H.R. 3947, the Federal Property Asset Management Reform Act of 2002, illustrates how scoring has, and will continue to have, an impact on the prospects for greater use of public-private partnerships. The bill—which was not enacted—would have authorized most federal real-property-holding agencies, including GSA, to enter into partnerships and other business arrangements with private firms to improve the government’s real property. Agencies could have sold, leased, or conveyed government property as part of the business arrangements and retained or spent the proceeds without further appropriations. CBO expected that many of the ventures that agencies would enter into would be used to finance investment on behalf of the government. Because of the extent of the government’s control and use of the projects likely to be undertaken, CBO concluded that spending by the ventures associated with that financing should be treated as governmental and recorded as budget authority and outlays. CBO reported that spending would increase by $1 billion between 2004 and 2012. 
CBO’s report contains a full explanation of CBO’s conclusions. Over the nearly 20 years that we have been raising the scorekeeping issue as a problem that needs to be addressed, several alternatives have been discussed, for example, by the President’s Commission to Study Capital Budgeting and by us. In addition to improving capital planning overall so that the most cost-effective capital investments can be identified, other alternatives have included scoring operating leases up front and establishing capital acquisition funds at agencies to fund ownership. Although these alternatives would pose various implementation challenges, they serve to illustrate proposals that have been considered. An alternative that could result in long-term savings for the government, proposed in the past by us, would be to recognize that many operating leases are used for long-term needs and should be treated on the same basis as ownership. This would make such instruments comparable in the budget to direct federal ownership and would foster more cost-effective decision making by OMB and Congress. Applying the principle of full, up-front recognition of the long-term costs to all options for satisfying long-term space needs—whether through construction, purchase, lease-purchase, or operating lease—is more likely to result in the selection of the most cost-effective alternative than using the current scoring rules. It is important to note that there would be implementation challenges if this option were pursued. Additionally, it would be challenging to reach consensus on what constitutes long-term space needs that would warrant this up-front budgetary treatment. In commenting on various options for correcting the scoring issue, CBO reported that ending the distinction between various types of leases has been considered in the private sector. 
According to CBO, the Financial Accounting Standards Board—noting that private firms often devise leases that barely fall within the limits for operating leases—has considered requiring firms to capitalize all leases in their books, rather than maintain the current distinction between capital leases and operating leases. In the context of federal budgeting, CBO reported that capitalizing all leases could mean scoring all leases up front on the basis of the present value of lease payments over the lease term, without attempting to distinguish between leases that are equivalent to purchases—capital leases—and those that are not—operating leases. The President’s Commission to Study Capital Budgeting recommended in 1999 that Congress and the executive branch have one or more agencies with capital-intensive operations establish a separate capital acquisition fund (CAF) within their budget that would receive appropriations for the construction and acquisition of large capital projects. CAFs would use authority to borrow from Treasury’s general fund and then charge operating units within the agency rents equal to the debt service (interest and amortization costs) on those projects. Although the Commission’s discussion of CAFs was within the context of identifying measures short of capital budgeting that the government could adopt, CAFs have implications for the scoring issue because they represent a vehicle for providing up-front funding for ownership. The main advantage of CAFs, according to the Commission, is that they should improve agencies’ planning and budgeting. If units or divisions within agencies are charged the true costs of their space and of other large capital items, they are likely to make more efficient uses of those assets, according to the Commission report. The report said that CAFs would not replace the Federal Buildings Fund, GSA’s governmentwide revolving fund. 
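The Commission’s rent mechanism, under which operating units are charged rents equal to the debt service (interest and amortization) on capital borrowed from Treasury, amounts to a standard level-payment amortization. The sketch below uses hypothetical figures and a textbook annuity formula; it is not the Commission’s actual model.

```python
def annual_debt_service(principal, rate, years):
    """Level annual payment covering interest and amortization on an
    amount borrowed at a fixed rate (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

# Hypothetical example: a capital acquisition fund borrows $100 million
# from Treasury at 5 percent for 30 years, then charges the occupying
# unit annual rent equal to the debt service on that borrowing.
rent = annual_debt_service(100_000_000, 0.05, 30)
print(round(rent))  # the annual charge to the operating unit
```

Charging units this full carrying cost, rather than a below-market figure, is what the Commission argued would lead them to make more efficient use of their space and other large capital items.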
Authority to acquire buildings through CAFs could be delegated by GSA as agencies demonstrate their effectiveness in using this instrument. In 2005, we reported that implementation issues could overwhelm the potential benefits of CAFs, which could be achieved through simpler means. In our April 2007 report updating our designation of federal real property as a high-risk area, we recommended that OMB, in conjunction with FRPC, develop an action plan for how FRPC will address key problems, including the continued reliance on costly leasing when ownership would be more cost-effective over the long term. OMB agreed with our recommendation and developed an action plan that included, as a priority action, “analyz[e] real property budget scoring issues.” OMB staff added that in their view, basic improvements, such as developing a reliable, governmentwide inventory of space and establishing performance measures, had to occur before the administration could take on broader, complex policy issues such as how to address the leasing challenge. As a result, the current real property reform initiative has lacked a comprehensive analysis of alternatives and potential solutions to the leasing challenge. Without such an analysis, agencies’ reliance on costly leasing is likely to continue and opportunities for using other instruments, such as public-private partnerships, may be limited. OMB staff said that efforts to resolve the leasing challenge could benefit from the input not only of FRPC, but also of other outside stakeholders, including Congress. At a minimum, consensus on resolving this difficult issue would involve analyzing current and past legislative and administration proposals that address the budget scorekeeping issue in relation to real property, gauging stakeholders’ perspectives on what proposals are most viable, and determining the conditions under which leasing is an acceptable alternative even if it is not the most cost-effective option. 
The leasing challenge—under which agencies have become overreliant on costly operating leases to meet long-term space needs—has persisted for many years. We have reported since the late 1980s that this problem has cost taxpayers hundreds of millions of dollars and needs to be addressed. Overreliance on costly leasing was, and continues to be, a major reason why real property management is on GAO’s high-risk list. Trends show continued reliance on significant amounts of leased space. In particular, GSA predicts that in 2008 it will lease more space than it owns—an unprecedented situation. The predilection for leasing has been driven, in part, by federal budget scorekeeping rules, which make operating leases an attractive alternative to ownership. The scorekeeping rules, though rooted in sound budget policy and intended to improve transparency, have had this undesirable effect. Various alternatives have been proposed to remedy the problem, but it persists, and reaching a resolution will not be easy. Whatever change is under consideration—whether it involves scoring leases up front like ownership or using other methods to spur ownership—will involve making choices among competing priorities for limited federal resources. Whether to increase funding for federal real property at the expense of other programs and initiatives is properly a policy decision that only Congress and the President can make. Nonetheless, with the improvements in federal real property management made thus far and increased commitment by OMB and Congress to address long-standing real property problems, there is reason to be optimistic that stakeholders can reach a consensus on how to address the leasing challenge. 
The Director of the Office of Management and Budget should direct the Deputy Director for Management, Office of Management and Budget, in conjunction with the Federal Real Property Council and in consultation with key stakeholders including Congress, to develop a strategy to reduce agencies’ reliance on leased space for long-term needs when ownership would be less costly. The strategy should, at a minimum, analyze current and past legislative and administration proposals that addressed the budget scorekeeping issue in relation to real property, gauge stakeholders’ perspectives on what proposals are most viable, and identify the conditions, if any, under which leasing is an acceptable alternative even if it is not the most cost-effective option. We provided a draft of this report to OMB, GSA, Agriculture, and USPS for review and comment. OMB generally agreed with the report and concurred with our recommendation. OMB also provided technical clarifications, which we incorporated, where appropriate. OMB’s comments are discussed in more detail below. OMB’s letter is contained in appendix II. GSA also agreed with the report and provided technical clarifications, which we incorporated where appropriate. GSA’s letter appears in appendix III without the enclosure that contained the technical clarifications. USPS and Agriculture did not provide comments on the draft report. While generally agreeing with our report and recommendation, OMB asked us to narrow the scope of the recommendation and identified other issues inherent to the acquisition of leased space, including (1) leasing as a more practical option in certain situations, (2) the validity of the 30-year net economic analysis comparing the acquisition costs of owned and leased space, and (3) challenges associated with pursuing building ownership. OMB asked us to narrow the scope of the recommendation to identify instances in which agencies are relying on costly, long-term leasing. 
A means of identifying such leases could logically be part of the strategy we are recommending and seems worth pursuing. However, our report objectives did not include how best to identify costly leases, and therefore we chose not to change our recommendation. Over the last decade, our work has shown that GSA relies heavily on operating leases to meet new long-term space needs and that building ownership often costs less than operating leases, especially for long-term space needs. In these reports, we cite examples of leases that were estimated to cost millions of dollars more than construction over the long term, including operating leases for the Patent and Trademark Office in Northern Virginia and the Department of Transportation’s headquarters building and the Securities and Exchange Commission building in Washington, D.C. In this report, we identify examples in which the comparative cost advantage of building ownership would result in significant financial savings over the long term, including the FBI field office buildings in Chicago, Illinois, and Tampa, Florida (see table 2). Also, OMB said that we carefully selected long-term lease arrangements in our report. Our report methodology clearly indicates that we chose GSA’s regional offices in Atlanta, Georgia; Chicago, Illinois; and Fort Worth, Texas; and USPS facility service offices in Lawrenceville, Georgia; Bloomingdale, Illinois; and Dallas, Texas, because these locations had a high number of larger-dollar-value leases and were geographically diverse. OMB said that our report recognizes that 80 percent of GSA’s leases are for 20,000 square feet or less. OMB also said that when there is general purpose office space available in a competitive marketplace, leasing may be a more viable option. Our report acknowledges that leasing is a practical way to address issues such as a limited amount of available federal space in a geographic area or a need for a small amount of square footage. 
In addition, we cite operational requirements—such as changes to an agency’s mission, immediate space needs, security requirements, or a desire for flexibility—as drivers of the decision to lease rather than own space. OMB said that the 30-year economic comparison of leasing to ownership will almost always show that ownership is more cost-effective than leasing, especially when including land values, and that the federal government may not recoup the value the 30-year comparison suggests. We believe that the 30-year net present value analysis, which GSA has used for decades and which measures multiyear cash flows in present-dollar terms so that the value of a dollar received today can be compared with the value of a dollar received in the future, remains a valid measure for evaluating the long-term costs of leasing versus building ownership. Finally, OMB states that where there is a long-term need, or a need for a special purpose facility not readily available in the leasing market, acquisition is an appropriate strategy and agencies should budget accordingly. Our report discusses at length GSA’s and USPS’s concerns that they lack the up-front capital to pursue building ownership for facilities. For example, GSA officials we spoke with in the field said that construction funds are available for capital projects in their region but these dollars tend to be designated for the construction of agency headquarters, courthouses, and border stations and typically are not used for federal office space, such as that needed to fulfill FBI’s field office needs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Director and Deputy Director of OMB, the Administrator of GSA, and the Postmaster General of the United States. 
Additional copies will be sent to interested congressional committees. We also will make copies available to others upon request, and the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or at goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to identify (1) the profile of domestically held, federally leased space, including the overall amount and type of space agencies lease, and discuss any related trends; (2) the factors that drive agencies to lease space that may be more cost-effective to own; and (3) the actions, if any, the administration has taken, and what alternative approaches have been proposed, to address agencies’ reliance on costly leased space. To identify the profile of domestically held, federally leased space, we examined the Federal Real Property Profile (FRPP), the government’s real property database, and federal fiscal year 2006 leasing data and obtained breakouts of leased building space by federal agency and predominant usage. FRPP is maintained by the General Services Administration (GSA) on behalf of the Federal Real Property Council, which controls access to the data. We eliminated from consideration federal agencies that lease their building space overseas. We identified three civilian real-property-holding agencies that are the largest in terms of leased building space held within the United States and U.S. territories for a more detailed analysis and assessment of FRPP data. These agencies are GSA, the U.S. Postal Service (USPS), and the U.S. Department of Agriculture (Agriculture), which collectively hold 71 percent of domestic, federally leased space. 
We analyzed GSA’s, USPS’s, and Agriculture’s fiscal year 2006 FRPP leasing data, including the data for elements such as real property type, use, legal interest, status, reporting agency, using organization, size, rate of utilization, value, condition index, mission dependency, annual operating and maintenance costs, and general location. To analyze the major trends in leased space, we were not able to use FRPP, since fiscal year 2005 was the first year federal agencies were required to submit information on their leased real property assets to FRPP. Therefore, we reviewed annually archived GSA leasing data for fiscal years 2003 through 2006, including analyses of certain aspects of the leased portfolio such as vacancy rates in leased buildings, lease extensions, and leases that are operating at a negative net operating income. Because GSA leases a variety of different space types for many government agencies, its leased portfolio provides a governmentwide perspective on federally leased building space. Issues related to collection techniques and availability precluded USPS and Agriculture from providing us with an electronic copy of annually archived leasing data, and therefore we could not include information from these agencies in our analysis of trends under this objective. We also reviewed GSA’s System for Tracking and Administering Real Property and GSA-generated reports on real property, including the State of the Portfolio and the Lease Portfolio Reports. We assessed the reliability of the leasing data provided by the Office of Management and Budget (OMB) and GSA and interviewed OMB officials because OMB oversees the implementation of Executive Order 13327, which addresses real property management and FRPP. We determined that these data were sufficiently reliable and valid for the purposes of this review. 
To determine the factors that drive agencies to lease building space that may be more cost-effective to own, we focused on GSA and USPS, the two agencies that lease the most building space. To determine the major factors that guide these decisions, we interviewed GSA and USPS headquarters and regional officials with authority over leasing building space. In addition, we visited GSA regional offices and USPS facility service offices that are in or near the same metropolitan areas. We visited GSA regional offices in Atlanta, Georgia; Chicago, Illinois; and Fort Worth, Texas; and USPS facility service offices in Lawrenceville, Georgia; Bloomingdale, Illinois; and Dallas, Texas. We selected these locations because they had space leased to multiple agencies and a high number of larger-dollar-value leases and were geographically diverse. The leases we selected were among the larger-dollar-value leases within these locations. To determine the cost of leasing versus the cost of new construction ownership, we selected seven GSA and three USPS building leases. For GSA buildings, we used GSA economic analyses we requested that compared the estimated costs of leasing with the estimated costs of ownership, while for USPS buildings, we reviewed analyses previously prepared by USPS officials comparing these estimated costs. For GSA properties, we worked with budget allocation specialists from GSA’s real property division to estimate the costs of leasing versus ownership over a 30-year period. We developed this estimate through a net present value analysis of both leasing and purchasing using a GSA tool, The Automated Prospectus System (TAPS). To estimate leasing costs, we used data from the selected leases for each of the sample buildings, such as usable square footage, tenant alteration costs, services and utilities, and lease terms. If this information could not be located on a particular GSA lease, TAPS program defaults were used. 
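A 30-year net present value comparison of this kind can be sketched as follows. This is a simplified stand-in with hypothetical cash flows and a hypothetical discount rate, not GSA’s actual TAPS model; TAPS also incorporates inputs such as usable square footage, tenant alteration costs, and program defaults, as described above.

```python
def npv(cash_flows, discount_rate):
    # Discount each year's cash flow to present-dollar terms
    # (year 0 is undiscounted), so multiyear streams are comparable.
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

YEARS = 30
RATE = 0.05  # hypothetical discount rate

# Leasing: a level annual rent over the 30-year period (hypothetical).
lease_flows = [8_000_000] * YEARS

# Ownership: up-front construction, land, and design costs, followed by
# annual operating and maintenance costs (all figures hypothetical).
own_flows = [100_000_000] + [1_500_000] * (YEARS - 1)

lease_cost = npv(lease_flows, RATE)
own_cost = npv(own_flows, RATE)
print(lease_cost > own_cost)  # True: leasing costs more in present-value terms
```

Whichever alternative has the lower net present value is the cheaper method of financing over the long term; in this hypothetical, as in several of the leases discussed in this report, ownership comes out ahead despite its large up-front cost.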
Certain rental rates, such as “step rents” that increase over a period of years, were adjusted to present-value terms, as appropriate. To estimate the costs of new construction ownership, we used cost data from GSA’s General Construction Cost Review Guide (for estimated construction costs), commercial real estate data from CoStar (for estimated land costs), and the Public Buildings Service’s Design and Construction Professional Services Budget Estimating Tool (for estimated design and review costs and management and inspection costs). All inputs for both estimated leasing and ownership costs were adjusted to 2005 dollars using OMB inflation and interest rate data and certain TAPS program defaults from that year. We selected 2005 as the base year because it was the most recent year for which the General Construction Cost Review Guide contained updated actual construction data, rather than estimates. After estimating the costs of both leasing and ownership, we compared these amounts to determine which method of financing would have greater financial savings over the long term. Our findings from visits to, and economic analyses of, federally leased space cannot be generalized to federally leased space nationwide. To assess the actions, if any, the administration has taken, and what alternative approaches have been proposed, to address agencies’ reliance on costly leased space, we reviewed previously issued GAO reports on real property management and leasing. We also reviewed past proposals for reforming federal leasing policy, including reports by the President’s Commission to Study Capital Budgeting and by the Congressional Budget Office on budget scoring. We also interviewed OMB officials about efforts to address agencies’ reliance on leasing as part of ongoing reform efforts. We conducted this performance audit from July 2006 to January 2008 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact person named above, Elizabeth Repko, Susan Michal-Smith, David Sausville, Gary Stofko, and Adam Yu also made key contributions to this report.

Federal Real Property: Progress Made toward Addressing Problems, but Underlying Obstacles Continue to Hamper Reform. GAO-07-349. Washington, D.C.: April 13, 2007.

High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.

Federal Real Property: NIH Has Improved Its Leasing Process, but Needs to Provide Congress with Information on Some Leases. GAO-06-918. Washington, D.C.: September 8, 2006.

Federal Real Property: Reliance on Costly Leasing to Meet New Space Needs Is an Ongoing Problem. GAO-06-136T. Washington, D.C.: October 6, 2005.

Federal Real Property: Further Actions Needed to Address Long-standing and Complex Problems. GAO-05-848T. Washington, D.C.: June 22, 2005.

U.S. Postal Service: Bold Action Needed to Continue Progress on Postal Transformation. GAO-04-108T. Washington, D.C.: November 5, 2003.

Budget Issues: Alternative Approaches to Finance Federal Capital. GAO-03-1011. Washington, D.C.: August 21, 2003.

General Services Administration: Factors Affecting the Construction and Operating Costs of Federal Buildings. GAO-03-609T. Washington, D.C.: April 2, 2003.

High-Risk Series: Federal Real Property. GAO-03-122. Washington, D.C.: January 2003.

Budget Scoring: Budget Scoring Affects Some Lease Terms but Full Extent Is Uncertain. GAO-01-929. Washington, D.C.: August 31, 2001.

General Services Administration: Comparison of Space Acquisition Alternatives—Leasing to Lease-Purchase and Leasing to Construction. GAO/GGD-99-49R. Washington, D.C.: March 12, 1999. 
General Services Administration: Opportunities for Cost Savings in the Public Buildings Area. GAO/T-GGD-95-149. Washington, D.C.: July 13, 1995. Budget Issues: Budget Scorekeeping for Acquisition of Federal Buildings. GAO/T-AIMD-94-189. Washington, D.C.: September 20, 1994. Public Buildings: Budget Scorekeeping Prompts Difficult Decisions. GAO/T-AIMD-GGD-94-43. Washington, D.C.: October 28, 1993. Federal Office Space: Increased Ownership Would Result in Significant Savings. GAO/GGD-90-11. Washington, D.C.: December 22, 1989.
In January 2003, GAO designated federal real property as a high-risk area, citing the government's overreliance on costly, long-term leasing as one of the major reasons. GAO's work over the years has shown that building ownership often costs less than operating leases, especially for long-term space needs. GAO was asked to identify (1) the profile of domestically held, federally leased space including the overall amount and type of space agencies lease, and any related trends; (2) the factors that drive agencies to lease space that may be more cost-effective to own; and (3) any actions taken by the administration and alternative approaches proposed to address this issue. GAO reviewed fiscal year 2006 Federal Real Property Profile (FRPP) leasing data and relevant documents and interviewed officials from the General Services Administration (GSA), the Office of Management and Budget (OMB), and the U.S. Postal Service (USPS). GAO also reviewed 10 building leases that were among those with the largest dollar value in 3 locations GAO visited. Federal agencies rely extensively on leasing, occupying about 398 million square feet of leased building space domestically in fiscal year 2006, according to FRPP data. GSA, USPS, and the U.S. Department of Agriculture lease about 71 percent of this space, mostly for offices, with the military services leasing another 17 percent. GSA is increasing its use of leased space and predicts that in 2008 it will, for the first time, lease more space than it owns. In the 10 GSA and USPS leases GAO examined, decisions to lease space that would be more cost-effective to own were driven by the limited availability of capital for building ownership and other considerations, such as operational efficiency and security. For example, for four of the seven GSA leases GAO analyzed, leasing was more costly over time than construction--by an estimated $83.3 million over 30 years. 
Although ownership through construction is often the least expensive option, federal budget scorekeeping rules require the full cost of this option to be recorded up-front in the budget, whereas only the annual lease payment plus cancellation costs need to be recorded for operating leases, making them "look cheaper" in any year even though they generally are more costly over time. USPS is not subject to the scorekeeping rules and cited operational efficiency and limited capital as its main reasons for leasing. While the administration has made progress in addressing long-standing real property problems, efforts to address the leasing challenge have been limited. GAO has raised this issue for almost 20 years. Several alternative approaches have been discussed by various stakeholders, including scoring operating leases the same as ownership, but none have been implemented. The current real property reform initiative, however, presents an opportunity to address the leasing challenge.
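The scoring asymmetry described above can be shown with simple arithmetic. In this sketch (all dollar figures are hypothetical, not drawn from the report; operating costs and discounting are omitted for simplicity), the operating lease scores far lower in year 1 yet costs more in total over a 30-year term:

```python
# Hypothetical figures for illustration only.
construction_cost = 10_000_000   # under the scorekeeping rules, scored in full, up front
annual_lease = 600_000           # only this year's payment is scored each year
years = 30

# Budget authority recorded in year 1 under each alternative:
own_year1 = construction_cost
lease_year1 = annual_lease       # plus any cancellation costs, omitted here

# Cumulative cost over the full term (operating costs and discounting omitted):
own_total = construction_cost
lease_total = annual_lease * years

print(f"Year-1 score: own ${own_year1:,} vs lease ${lease_year1:,}")
print(f"30-year total: own ${own_total:,} vs lease ${lease_total:,}")
# The lease "looks cheaper" in year 1 but costs more over the full term.
```

Under these assumptions the lease scores at 6 percent of the ownership cost in year 1 yet costs $8 million more over 30 years, which is the incentive structure the report identifies.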
The Ready Reserve Force (RRF) is a government-owned, inactive fleet of former commercial ships of various configurations and capabilities that should be ready to sail within 4, 5, 10, or 20 days in response to national emergency sealift requirements. The RRF was organized in 1976 with 30 ships drawn from the 360 ships in the National Defense Reserve Fleet, which was created in 1946 to be able to respond to national emergencies. During fiscal year 1994, the RRF consisted of 108 ships. The fleet is expected to increase to 142 ships by 1999 in accordance with the Department of Defense’s (DOD) Mobility Requirements Study and defense plans. The Department of Transportation, through the Maritime Administration (MarAd), manages and maintains RRF ships, and DOD directs and controls operations once the ships have been activated. The Military Sealift Command, under the Transportation Command, carries out DOD’s oversight and operational control responsibilities. RRF ships are berthed at Reserve Fleet sites located in James River, Virginia; Beaumont, Texas; Suisun Bay, California; and other locations in the United States and overseas. MarAd contracts with ship managers, who are responsible for activating, maintaining, crewing, operating, and deactivating the ships as directed. The American Bureau of Shipping and the U.S. Coast Guard conduct periodic limited inspections of the ships for compliance with marine safety regulations. The ship managers are required to ensure that the ships are fully operational once MarAd notifies them to activate the ships. The Military Sealift Command designates a ship as being fully operational when, among other things, a complete crew has been provided, all required regulatory certificates have been obtained, and sea trials have been performed. Of the 96 ships that were in the RRF in August 1990, 78 were called upon to support the Persian Gulf War. 
This was the first large-scale activation and employment of the RRF since it was separated from the National Defense Reserve Fleet. These ships transported nearly one-fifth of the dry cargo deployed. However, ship activations were not as timely as planned—only 25 percent of the activated ships met their assigned readiness goals. Although insufficient fiscal year 1990 maintenance funding has been cited as a major reason for many late activations, a Department of Transportation Inspector General report identified ships’ mechanical and crewing problems as the primary reasons the RRF ships were not able to meet specified activation times. Mechanical problems were attributed to (1) the poor condition of the ships when they were purchased for the RRF, (2) the age of the ships’ equipment, (3) improper steps taken to deactivate the ships, (4) the ships’ lack of use, (5) shipyard repairs and upgrades not being tested upon completion, and (6) uncorrected deficiencies identified during the periodic Coast Guard inspections and during specified fleet maintenance procedures. The Transportation Inspector General reported that 45 of the 79 ships activated for the Persian Gulf War had not been operated since their acquisition and acceptance into the RRF. The Transportation Inspector General also reported that mariners must be onboard the activating ships at specified times and should be knowledgeable about the ships and their operating systems. They are responsible for testing and operating all components and systems required to make the ships operational. Crewing problems during the war were caused by crew members who arrived late, lacked knowledge of the specific ship and operating systems, or had to be replaced because of crew turnover. The Chairman, Subcommittee on Readiness, House Committee on Armed Services, asked us to evaluate whether RRF ships would be ready within specified time frames in the event of a large-scale contingency. 
Specifically, we determined whether (1) the changes implemented to address problems encountered during war activations have improved the ships’ overall readiness, (2) the readiness level of the highest priority ships exceeds that of other strategic mobility components, and (3) a further decline in the number of available U.S. merchant mariners would have a long-term effect on crewing the force. To determine whether the overall readiness of the RRF had improved since the Persian Gulf War, we compared selected ships’ 1990 and 1993 activation performance. The selected ships included those activated in 1993 for purposes other than testing post-war repairs. We reviewed MarAd, Military Sealift Command, Coast Guard, and American Bureau of Shipping records, as well as related external studies, to determine the length of activation, significant machinery failures, and costs. We identified and reviewed program modifications initiated since the war. We visited ships on the East, Gulf, and West Coasts; interviewed officials from the Military Sealift Command, the U.S. Coast Guard, the American Bureau of Shipping, MarAd, the Merchant Marine Academy, and unions, as well as RRF crew members and ship managers; and reviewed studies and documents to assess the relative benefits of these program changes. We examined the status and effect of post-war repair expenditures, changes in MarAd’s policies for maintaining and preserving the ships, changes to MarAd’s contractual agreement with ship manager companies, and new automated maintenance management systems. We also assessed MarAd’s plans for maintaining the readiness of the RRF and the consequences of reduced funding. To determine the appropriate readiness status for the RRF, we obtained and analyzed data on several strategic mobility factors that may diminish the need for RRF ships to be maintained in a high-priority status. 
We analyzed the Mobility Requirements Study’s volume I and II reports, using the Middle East scenario, to determine the times specific types of ships first loaded, the ports that were assumed to be loading cargo within the first 30 days, and the locations and readiness goals set in the study. We discussed our analysis with Joint Chiefs of Staff officials. We reviewed the Army’s Strategic Mobility Program’s Management Plan to identify funding and plans for improvement to the continental United States infrastructure that would increase the mobility of selected military units to their seaports of embarkation. We discussed the status of these improvements with Army headquarters, Forces Command, Transportation Command, and Military Traffic Management Command officials. To determine the current and future availability of crews, we reviewed Persian Gulf War RRF activation data, RRF crewing requirements, and labor availability estimates. We sponsored a workshop on April 5, 1994, to discuss the impact of the declining merchant marine pool on U.S. sealift capability, identify impediments in crewing RRF ships, and reach consensus among the participants on how to address the issues discussed. We issued a report that provided a summary of views expressed at the workshop. We have issued several other products on issues related to the RRF. In our testimony, we discussed several issues related to U.S. mobility capabilities that need to be resolved, including the Army’s current ability to get cargo to the ports. In our letter to the Maritime Administrator (GAO/NSIAD-94-96R, Jan. 7, 1994), we provided information on the latest selection of 12 roll-on/roll-off ships for the RRF. We stated that these ships would provide 1.6 million square feet of additional deck space for surge requirements, thereby increasing the fleet’s roll-on/roll-off cargo capacity by 40 percent. Three of the ships have been upgraded to U.S. 
specifications, and the other nine ships are expected to be completed by November 1994. Two of these ships are serving as Army prepositioning ships. A complete list of related GAO products appears on the last page of this report. We conducted our review between August 1993 and June 1994 in accordance with generally accepted government auditing standards. MarAd has made significant progress in implementing changes to address problems encountered while activating RRF ships for the Persian Gulf War. It identified and corrected equipment deficiencies, instituted more uniform and comprehensive specifications for the deactivation and preservation of RRF ships, strengthened ship manager controls by expanding and clarifying the manager’s contractual responsibilities, and developed and implemented automated information systems for tracking maintenance repairs. MarAd also initiated programs, such as assigning permanent, nucleus crews onboard high-priority ships to help maintain the RRF’s material condition achieved after the war and alleviate crewing concerns. Total RRF program expenditures between fiscal years 1990 and 1993 were more than $1 billion when Persian Gulf War activation and deactivation costs are included. Recent activations of the RRF have demonstrated the ships’ ability to meet readiness requirements and operate with fewer mechanical failures. These successful activations resulted largely from maintenance and repairs made during and after the war. In fact, activations without prior notice occurred an average of only 7 months after the ships had been deactivated from the war. However, continuing the present high readiness status of the RRF ships depends on MarAd’s future budgets and maintenance strategies. MarAd spent more than $1 billion to resolve mechanical, management, and crewing problems encountered during Persian Gulf War activations. 
Problems targeted were the poor condition of the ships upon entry to the force; improper deactivation, as well as insufficient preservation and maintenance techniques; insufficient monitoring of ship manager companies; unknown, deferred, or unreported equipment deficiencies; and inadequate crewing. MarAd has made significant progress toward solving those problems. The war experience enabled MarAd to identify and repair equipment deficiencies at an average cost of $5.5 million per ship. These repairs had been deferred because many of the ships had been laid up since being acquired for the RRF. Many ships had been added to the RRF with known and unknown machinery deficiencies, and MarAd lacked the funding at that time to activate, fully inspect, and repair many of the ships. In fact, prior to the war, only 34 ships had been activated since entering the RRF. Improper deactivation and inconsistent preservation maintenance techniques for RRF ships contributed to mechanical problems encountered during war activations. Inadequate lay-up and maintenance techniques contributed to boiler problems, clogged pipe systems, freeze damage, and impurities in lubrication systems. After the war, MarAd instituted more uniform and comprehensive specifications for the deactivation and preservation of RRF ships. For example, to reduce corrosion in boiler tubes—a problem for steam-powered ships laid up in colder climates—some boiler panels are now left uncovered, and fans and heaters are installed to improve the circulation of dehumidified air to prevent freezing. These new techniques, however, have raised the cost of standard maintenance procedures. Some ship managers contributed to activation delays during the war because they lacked the necessary capability. For example, two ship managers were unable to activate their ships because they lacked adequate technical knowledge. (These managers’ contracts were subsequently canceled.) 
As a result of this and other contract management difficulties during the war, MarAd expanded and clarified ship managers’ contractual responsibilities at an estimated annual cost increase of $18,000 per ship. The new contracts raise the standards of technical competence, increase the number of technical employees per ship, clarify penalties, and emphasize monitoring ship manager performance. Before the war, RRF ship logs and machinery history records appear to have been kept haphazardly by the chief and port engineers. For example, according to MarAd officials, during the war activation one of five similar ships had recently had an engine overhaul, but MarAd personnel and ship managers were unsure which ship had been repaired. Up-to-date inventories of vital components and spare parts were also not available for the aging machinery on the RRF ships, which caused activation delays while some parts had to be manufactured and shipped to the activation sites. After the war, MarAd implemented automated systems for tracking maintenance, repairs, and spare parts inventories. A Maintenance and Repair Tracking System has been developed by MarAd officials and contractors, and all repair funding requests must now be supported by deficiencies recorded in this system. Contractors have also developed an automated system for maintaining spare parts records and are completing inventories of shipboard spares to update the system’s records. As of April 1994, MarAd had invested $26 million in its new logistics program, and annual spare parts costs are estimated to be $70,000 to $80,000 per ship. The location of the RRF ships at the time of the war degraded the ships’ ability to be activated within planned time frames. Every ship required some maintenance at a shipyard before reaching its designated loading seaport. Although half the RRF ships were outported—berthed at locations other than the three MarAd reserve fleet sites—they were not necessarily located near a shipyard. 
During activation for the war, the number of ships requiring simultaneous activation in a given location sometimes exceeded the capacity of the local shipyards. As a result, queuing occurred, and some ship activations were delayed. When reassigning ships to a particular location, MarAd officials considered factors such as towing time; likely congestion in shipyards; and, for high-priority ships, transit times to likely seaports of embarkation. By fiscal year 1994, MarAd had reassigned most RRF ships accordingly. For example, 5- and 10-day ships have been better distributed according to shipyard facilities for activation, and the high-priority 4-day ships have been positioned closer to their ports of embarkation because they do not require shipyard services to be activated. To address the problems of identifying crews and transporting them to the ships in a timely manner, MarAd placed the highest priority ships in reduced operating status (ROS) with permanent, nucleus crews onboard. These crews form the core of the required operating crew, which ensures that key personnel needed for swift activation are onboard ships. According to MarAd, about 50 percent of the senior engineers and 34 percent of the junior engineers required by the RRF are already serving onboard ROS ships. The current crew structure on a ROS ship consists of a Chief Engineer; 1st, 2nd, and 3rd assistant engineers; a qualified member of the Engineering Department; an electrician; a chief mate; a bosun; a steward/cook; and a steward/utility. The crew’s familiarity with the ship’s particular operating systems and characteristics can avoid costly and time-consuming activation delays. For example, during the war, one newly assigned crew could not keep the steam propulsion system operating shortly after the ship had been loaded and left the port. The ship had to be towed back to port, where shipyard personnel who assisted in the ship’s activation restored the system within 30 minutes. 
ROS crews could also help newly arriving crew members make the transition to a level of full competence on an unfamiliar ship. Comparisons of recent activations to those during the war demonstrate that MarAd’s efforts to improve readiness have been successful. Sixteen of the 20 ships we reviewed had been activated for the war and were an average of 11.6 days late. In 1993, some of those 20 RRF ships were activated earlier than Military Sealift Command or MarAd criteria required; they were activated more quickly, with fewer and less significant equipment failures, and at less cost. Military Sealift Command, American Bureau of Shipping, and MarAd officials agree that the improved readiness of RRF ships is largely due to the extensive repairs made during and after the war. Six of the 20 ships were activated without prior notice to test their readiness; these activations were therefore the most realistic tests of the RRF’s ability to meet defense requirements. All six ships were activated on time or earlier than the time required, and the average cost to activate them was about $1 million less than during the war. The ships had been inactive between 6 and 9 months, or an average of 7 months, before their unannounced activations. Tables 2.1 and 2.2 compare the time and cost for five of the ships activated without notice both in wartime and in 1993. For both the wartime and 1993 activations without prior notice, DOD reimbursed MarAd for activation and deactivation costs. Specific data comparing two of the five ships’ recent tests and wartime activations follow: The Cape Isabel, a steam-powered ship, was built in 1976, added to the RRF in 1986, and laid up until its activation for the war in August 1990. During its wartime activation, the ship had many equipment failures in its boiler system, fuel system, evaporators, and service generator. After the war, MarAd spent almost $5.6 million improving the ship’s habitability and overhauling its steam engine and other major equipment. 
In 1993, the ship was activated without prior notice and, according to the after-action report, overall ship systems operated well during the activation and the sea trial. The Comet, a steam-powered ship built in 1958, was added to the RRF in 1985, and laid up until its wartime activation in August 1990. During this activation, the ship encountered numerous problems with its steam plant, ballast tank pumps and indicators, salt water service systems, electrical equipment, and service generator. During postwar deactivations, MarAd spent almost $5.6 million overhauling the ship. When the ship was activated without notice in 1993, no major deficiencies were reported during the activation or the sea trial. The remaining 14 ships we reviewed were activated on less demanding schedules for either sea trials or operations. These activation costs were on average about $0.6 million less than during wartime activations. Also, there were fewer and generally less critical equipment failures and fewer delays in making these repairs. MarAd has initiated RRF maintenance strategies that will sustain fleet readiness. These strategies combine various actions taken after the Persian Gulf War to solve activation problems at an annual cost of $3 million for a 4- or 5-day ship compared with $800,000 for an average ship before the war. MarAd believes that these strategies should provide reasonable assurance that MarAd can activate RRF ships within required time frames. Table 2.3 identifies the major components of MarAd’s current maintenance strategies. The ROS concept and the annual test activations are significant changes to the RRF maintenance program. MarAd cannot be certain that funding will be available in the future to continue all of its preferred maintenance strategies and has begun examining variations within its overall RRF maintenance program. 
For example, MarAd assigned one 14-person crew onboard two roll-on/roll-off ships berthed together, and it is considering placing all 5-day ships in ROS with 9-person crews onboard. MarAd’s fiscal year 1994 RRF budget request submitted to the Congress for maintenance and operations was $136 million, or approximately $221 million less than the $357 million identified in the Mobility Requirements Study. To make up this difference, only 11 of the designated 22 4-day ships currently have permanent, nucleus crews. Also, the number of planned maintenance test activations was reduced from the 93 called for by mobility study standards to 32. MarAd officials concluded that reduced funding in fiscal year 1994 would not result in a significant degradation of readiness. Although budget constraints are expected to continue, MarAd’s fiscal year 1995 budget request includes $246 million for maintenance and operations. The readiness of the RRF has improved since the Persian Gulf War due to the $1 billion invested in the program. Officials from the Military Sealift Command, American Bureau of Shipping, and MarAd agree that the satisfactory readiness of RRF ships is primarily due to the identification and repair of machinery deficiencies during and after the war. Despite this investment, however, these officials could not say how long the ships will remain at this level of readiness. They agree that continuing the present readiness status of RRF ships depends on MarAd’s future budgets and maintenance strategies. On the basis of DOD’s 1992 Mobility Requirements Study, MarAd plans to keep 63 RRF ships in a high state of readiness (i.e., ready to activate within 4 or 5 days) starting in fiscal year 1996. However, this high state of readiness exceeds the Army’s ability to transport heavy equipment, such as large vehicles, tanks, weapon systems, and helicopters, to seaports for initially deploying units. 
Army deployment operations are hindered in the continental United States by deteriorating transportation-related facilities and overseas by the Army’s limited ability to unload ships in underdeveloped seaports. Moreover, the recommendation in DOD’s mobility study to keep 63 RRF ships in a high readiness status is not justified by the detailed study analysis. In the event of a large-scale contingency, the Army must be able to deploy heavy divisions rapidly from the continental United States. These divisions are located at Fort Stewart and Fort Benning, Georgia; Fort Campbell, Kentucky; and Fort Hood and Fort Bliss, Texas. The movement of people, equipment, and supplies to ports of embarkation is the first stage of a major operation, called “fort to port” movement. Figure 3.1 displays the fort to seaport movement for selected heavy divisions. We reported in 1992 that fort to port cargo movement during Operation Desert Shield was constrained by deteriorated rail facilities that could have threatened the Army’s ability to move equipment to seaports rapidly. Although the equipment reached the ports on time despite serious rail deficiencies, movement of equipment for Desert Shield was on a smaller scale and took a longer time than strategic mobility plans require today. Current deployment plans call for moving the same amount of cargo in 8 weeks that was moved during the first 6 months of the Persian Gulf War. The Army has a plan that identifies needed improvements to the transportation infrastructure in the continental United States to meet the cargo flows assumed in DOD’s mobility study. The Army’s plan identifies improvement projects such as acquiring rail cars and upgrading or constructing facilities, highway and rail networks, and ports for receiving, storing, and loading Army equipment and supplies. These improvements are projected to cost about $550 million and be funded through fiscal year 2001. As of August 1994, only $28 million had been spent on these projects. 
At the Army’s planned funding levels, some projects will take 15 to 20 years to complete and, therefore, will not be in place by the 1999 time frame assumed in the mobility study. Army officials acknowledged that many infrastructure conditions remain essentially the same since the war. The deployment problems of three key contingency divisions and the improvements identified by the Army are discussed below. (The Mobility Requirements Study assumes that these three divisions are a part of an early response.) The 101st Airborne Division (Air Assault) deploys from Fort Campbell, which is approximately 630 miles from its seaport of embarkation in Jacksonville, Florida. The mobilization plan for this division relies heavily on the use of rail transportation. However, this method of transportation could not be used during Desert Shield because of deteriorated conditions and limitations of the Hopkinsville, Kentucky, rail interchange. The Army has identified approximately $23 million in infrastructure improvements that are necessary to meet the requirements in DOD’s mobility study. Projects identified include rail upgrades, additional track (for a Hopkinsville bypass), and a pallet warehouse. In addition, 200 rail cars are to be acquired for Fort Campbell to execute the mobility plans. At the time of our review, none of the rail cars had been delivered. The 1st Cavalry Division deploys out of Fort Hood, which is about 320 miles from its seaport of embarkation in Beaumont, Texas. The division’s mobilization plan calls for the lead brigade to move to this seaport by rail. However, the existing rail system at Fort Hood cannot support these deployment requirements, according to the Military Traffic Management Command’s 1993 Engineering Study. Infrastructure improvements totaling $69 million have been identified. Projects include rail upgrades, additional tracks for storing and switching rail cars, a warehouse, and airfield upgrades. 
In addition, 512 rail cars are needed for prepositioning at this installation, but only 14 percent had been delivered at the time of our review. The 24th Infantry Division (Mechanized), stationed at Fort Stewart, is only 40 miles from its designated seaport of embarkation in Savannah, Georgia. Units at Fort Stewart were among the first during Desert Shield to transport large numbers of heavy tracked vehicles by rail. However, train speeds were restricted to 10 miles per hour or less, compared with 25 miles per hour under normal circumstances, because of deteriorated rail conditions. Fort Stewart needs $30 million in infrastructure upgrades and construction to meet deployment goals. Projects identified include rail passing tracks, a container handling facility, and a cargo staging area. Of the 220 rail cars needed to meet mobility requirements, 42 percent are currently located at the fort. The Army believes that many of the infrastructure improvements necessary to support the initial surge units have been accomplished, as demonstrated by readiness exercises. The Army conducts these exercises to train heavy units in the continental United States on strategic deployment. However, only a portion of a division takes part in the exercises. For example, in September 1993, the Army conducted an exercise moving selected military equipment for a brigade task force of the 1st Cavalry Division from Fort Hood to Beaumont. The equipment used for this exercise accounted for only 474 of the 1,975 pieces of the brigade task force. In 1994, the Army conducted a similar exercise with 900 pieces of equipment. Movement of an entire division has not yet been tested in the exercises to date. The Army’s ability to meet its mobility requirements is also affected by its capability to deliver forces to underdeveloped or damaged overseas seaports. The Joint Logistics Over the Shore is a system of floating causeways that the Army can use to unload ships when the ships cannot enter a foreign seaport. 
As part of that concept, auxiliary crane ships will be used to unload RRF ships. Overall, the auxiliary crane ships are not kept in as high a readiness status as the surge cargo ships. Of the nine crane ships in the RRF, four are maintained in a 5-day status and five are maintained in a 10-day status. The Army plans to unload equipment and supplies from ships within 48 hours of the ships reaching their overseas destination. A recent DOD Inspector General report determined that the Army would not be able to meet its 48-hour requirement if all the factors—including the ships’ distances from shore, the ocean conditions, the number of crane ships available, and the number of watercraft (small ships, tugboats, and floating causeways)—were not optimal. Army officials are planning on improving the effectiveness of the Joint Logistics Over the Shore concept through training and additional acquisition. The mobility study examined a range of potential crises, including regional wars in Europe, the Middle East, and Korea, in the 1999 time frame. The most logistically demanding scenario examined, because of the number of forces and the distances involved, was a major regional war in the Middle East. DOD used a model’s results to support the study’s recommendation that the RRF maintain 36 ships in 4-day status and 27 ships in 5-day readiness. Our analysis of DOD’s model for the Middle East scenario does not support the study’s recommendation. The study’s model assumed that 6 RRF roll-on/roll-off ships, not 36, would be needed at the seaports of embarkation by day 4 and that 8 other RRF ships, not 27, would be needed at the seaports by day 5. By the 10th day, the model assumed that only 36 ships would have arrived at the seaports. DOD officials could not explain the wide discrepancy between the model’s data and the mobility study’s recommendation; however, DOD officials confirmed that they believe this recommendation is still valid. 
DOD is conducting another review of its strategic mobility requirements. This review will be based on the requirement from the October 1993 Bottom-Up Review to fight two regional conflicts nearly simultaneously. DOD anticipates issuing a report by January 1995. In addition to DOD’s ongoing mobility study, the Transportation Command is conducting an RRF modernization study. The objectives of this study are to determine the optimum RRF size, composition, and readiness to meet surge sealift requirements. The study’s final recommendations will be submitted to DOD’s Joint Staff/Transportation Command working group for inclusion in an update to the Bottom-Up Review report. DOD’s determination of the number of RRF ships it wants maintained at various readiness levels is overstated, given (1) the current ability of the Army to get from the forts to the ports and (2) the disparity between the output of the analytical model and the mobility study’s recommendation. The results of DOD’s updated mobility study should guide future RRF ship readiness levels. Given these problems, we believe DOD needs to provide more realistic readiness requirements to MarAd. We recognize one lesson of the Persian Gulf War—that RRF ships need to be more ready than they were at that time. However, we also believe that the government should not pay to keep an excess number of ships in a high-priority status. We recommend that the Secretary of Defense direct the Commander in Chief, Transportation Command, to annually review RRF ship readiness requirements provided to MarAd and ensure that they are in line with current military deployment capabilities. DOD generally agreed with our findings and concurred with our recommendation to the Secretary of Defense. DOD stated that RRF readiness requirements are reviewed annually in cooperation with MarAd as part of budget reviews. 
DOD also noted that RRF readiness levels are currently being examined as part of the Mobility Requirements Study Bottom-Up Review Update. DOD’s comments are reproduced in appendix I. MarAd and DOD agree that a viable U.S. merchant marine industry is the best source of mariners to crew the RRF in an emergency. However, future declines in the pool of U.S. mariners seem likely and would affect MarAd’s ability to adequately crew these ships. The Department of Transportation has proposed legislation designed to help support the U.S. merchant marine industry. If, however, this legislation is not enacted or does not adequately support a mariner labor pool capable of providing enough mariners for the RRF, other measures will need to be taken. Both DOD and MarAd have studied the idea of a merchant marine reserve program to either supply or augment the required crews. Maritime labor unions, which oppose reserve programs, have proposed other alternatives, such as chartering commercially useful RRF ships to U.S. operators. DOD is also examining a proposal that has the potential to eventually increase the available U.S. merchant marine labor pool. The RRF currently requires around 3,700 mariners for 108 ships. To meet the requirements of DOD’s Mobility Requirements Study, the RRF will require approximately 4,800 mariners, as shown in table 4.1. DOD is currently reevaluating the size and composition of the RRF in an update of the Mobility Requirements Study. Once DOD determines the number and type of ships in the RRF, crewing levels in accordance with Coast Guard regulations and MarAd recommendations can be reestablished. According to DOD, if the RRF crewing requirement remains about 4,800 mariners and the U.S. merchant marine labor pool continues to decline, steps must be taken to ensure that an adequate pool of qualified mariners is in place. Crews for the RRF come from the labor pool that exists to operate ships in the active U.S. commercial fleet. 
Each merchant mariner job supports about two mariners in the labor pool to allow for training, vacations, and job rotations. The U.S. commercial ocean-going labor pool, from which crew members are drawn, currently consists of about 21,000 mariners competing for about 9,300 shipboard jobs. Therefore, MarAd should not have a labor supply problem for crewing RRF ships in the near term. Events in the maritime industry have had a direct impact on the potential crewing of RRF ships. During the 1960s, commercial ships registered in the United States typically went to sea with crews of more than 40 persons. The average crew size has since declined in response to labor-saving technology and automation. For example, the introduction of automated boiler controls during the mid-1960s and diesel-powered ships during the 1970s reduced crew sizes. As a result, the number of shipboard jobs decreased. For example, in 1970, the 843 ships in the U.S.-registered, ocean-going fleet provided around 40,000 jobs. By 1993, the number of ships had declined to 350, and only about 9,300 jobs were available. The gradual decline in the pool of qualified and available mariners needed for the RRF is shown in figure 4.1. The Department of Transportation and DOD have conducted numerous studies that address the continued decline in the merchant marine industry. General and specific proposals have been suggested to aid the industry. Thus far, Transportation, DOD, and others involved, such as unions and ship operators, have been unable to reach consensus on specific programs or crewing alternatives. As a result, no action has been taken. Transportation has proposed legislation—the Maritime Security Program (H.R. 4003 and S. 1945)—to the 103rd Congress to help revitalize the U.S. Merchant Marine. The $1 billion, 10-year program proposes subsidizing up to 52 U.S.-registered liner ships. One of the program’s goals is to provide additional sealift capacity for national emergencies. 
Therefore, this program would help maintain an active pool of U.S. mariners that could provide the necessary crews to operate RRF ships during an emergency. However, the specific number of mariners the program may ultimately provide for RRF vessels is not known. In 1986, the Navy examined crewing alternatives for the RRF and DOD sealift ships. One alternative was to expand the existing Merchant Marine Reserve component of the U.S. Naval Reserve. At that time, the Merchant Marine Reserve’s purpose was to maintain an organization of merchant marine officers who were trained to operate merchant ships and a shoreside cadre assigned to naval activities supporting strategic sealift. The expanded program would include inactive, qualified mariners. The cost to implement this alternative was estimated to be $10 million annually. The Congress directed Transportation to explore establishing a civilian reserve program. In 1987, MarAd examined the concept of creating a civilian merchant marine reserve program. Qualified mariners who were not actively sailing but were willing to commit themselves under contract for service when needed would form the base for this program. MarAd envisioned that the program would comprise 6,480 mariners and estimated its cost to be $46 million a year. By 1991, MarAd had completed another study that included the concept of a civilian merchant marine reserve. MarAd studied four options that could increase mariner availability. These options ranged from a program with 500 members costing between approximately $3 million and $6 million annually to a program with over 2,000 mariners costing about $19 million annually. Studies completed after the Persian Gulf War also cited the need to consider a civilian reserve. For example, a 1991 Department of Transportation Inspector General report recommended that a civilian reserve program based on MarAd’s 1991 study be implemented. 
In addition, a 1991 DOD and Transportation RRF working group report recommended that the agencies jointly pursue efforts to formulate a civilian merchant marine reserve program. In 1986, the Navy examined an alternative of establishing new Naval Reserve units dedicated to manning all defense sealift ships. These units would be composed of about 9,000 personnel drawn from existing naval reserve programs and naval retirees. The cost of this alternative was estimated to be $46 million annually. In 1988 and 1989, the Navy expressed to the Congress a number of concerns about the use of naval reservists for crewing RRF ships. The Navy stated that (1) naval reservists would not be available until DOD mobilization, even though RRF ships would most likely be requested for activation before that time; (2) civilian mariners, not Navy reservists, have the expertise in operating RRF ships; (3) no training capability is currently available in the armed forces for training personnel to operate civilian ships; and (4) crewing RRF ships with naval reservists would change the ships’ noncombatant status under international law. Maritime labor unions have opposed government efforts to establish any merchant marine reserve. They believe that a government reserve program would limit potential future jobs. They also believe that the potential implementation of any merchant marine reserve program shows a lack of support for the U.S. commercial merchant marine industry and that government funds should be directed toward aiding the maritime industry. Maritime labor unions have proposed that commercially useful ships in the RRF be chartered to U.S. operators. This proposal would help maintain trained and experienced crews available at no cost to the government. The charter rate would be set at the market rate and would be reduced if there were no U.S.-flag competitors. 
The terms of the charter would include a guarantee that an operational ship would be made available to DOD when needed. However, because these ships would be engaged in active shipping, some ships would have to first unload cargo and then return to the United States when notified and, therefore, might not be available to help meet DOD’s surge shipping requirements. Another maritime labor union proposal is to develop a designated cadre of volunteers from within the ranks of the unions to be trained and available for surge call-up. This cadre would consist of three times the number of crew members required for RRF ships to account for mariners currently at sea and others not immediately available. DOD would set the standards for training, and mariners would receive 2 weeks paid training aboard RRF ships. The cost of training, mariner wages, subsistence, transportation, and ship activity would be provided by DOD, according to this proposal. The cost for this proposal was not provided. At the RRF crewing workshop we sponsored in April 1994, MarAd presented an emergency crewing concept that would allow the unions time to assemble regular crews from the commercial sector. Under the proposal, teams of 10 to 15 inactive mariners would assist ships’ managers to activate RRF ships. The activation teams would then step aside once union crews reported to the ships. If some mariners were not available from a union, then team members could be tasked to fill those slots. MarAd stated that 2,000 mariners would be needed for this program. MarAd estimated that the initial cost would be $2.2 million and that the program would eventually cost around $11 million annually. However, funding to develop this program was not approved in MarAd’s fiscal year 1995 budget. DOD is currently considering a concept to move Navy ships, such as oilers, combat stores ships, and salvage ships, into the Military Sealift Command. 
Civilian mariners could crew these ships, a practice the Military Sealift Command currently uses for its fast sealift ships. This concept could result in an increase in the size of the active mariner pool and, therefore, in the number of mariners available to the RRF. The number of mariners that could become part of the active pool would depend upon the mariner-to-billet ratio used by the Military Sealift Command. This concept may also result in substantial Navy cost savings because civilian crews aboard Sealift Command auxiliary ships are generally smaller than Navy crews aboard ships assigned to Navy battle forces. In 1986, the Chairman of the House Committee on Merchant Marine and Fisheries recognized the warning signs of a declining pool of mariners when he said that “inadequate manning is the Achilles heel of emergency sealift.” The Persian Gulf War clearly demonstrated how vulnerable the RRF could be if sufficient numbers of properly skilled mariners are not available to sail when the ships are ready. However, since the war, little has been done to improve the likelihood that RRF ships will be adequately crewed in the future. Neither DOD nor MarAd has proposed alternatives that garner universal consensus—including from the Congress, the Coast Guard, maritime unions, and ship operators. Our workshop was the first time many of these players were brought together to discuss this issue. Crewing the RRF will become a major problem as the pool of available mariners declines and requirements remain stable or grow. We recommend that the Secretary of Transportation direct the Maritime Administrator to annually assess whether an adequate number of experienced U.S. merchant mariners would be available to crew RRF ships within DOD’s specified time frames. If these assessments indicate that the number of qualified mariners may not be sufficient, the Secretary should propose a specific merchant marine crewing alternative to the Congress. 
Transportation partially concurred with our recommendation to annually assess the number of qualified U.S. mariners available to crew RRF ships and, if necessary, report crewing options. Transportation said it maintains maritime workforce statistics on the size and composition of the U.S. merchant mariner pool and that the Coast Guard is making progress toward improving the accuracy of its data on the availability of mariners. However, this information is not reported to the Congress in conjunction with defense requirements. We believe that MarAd’s assessment would be an appropriate first step in defining the effect that the declining U.S. merchant marine pool might have on national security. Transportation said that the recommendation should focus on the need for reemployment rights legislation for U.S. merchant mariners called upon to serve during a war or national emergency. It pointed out that reemployment rights were discussed in our workshop report. While Transportation acknowledged that consensus has not been achieved on certain proposals—such as a civilian merchant marine reserve or expansion of the naval merchant marine reserve—it believes that identified crewing proposals have the potential to achieve consensus. Transportation stated that our workshop was particularly beneficial in providing a forum toward this end. Transportation also noted that it is hopeful that its Maritime Security Program, designed to strengthen the U.S. merchant marine, will pass the Congress this fall. We do believe that reemployment rights for U.S. mariners equivalent to the rights and benefits provided to any member of a reserve component of the Armed Forces would be fair and equitable. However, the number of mariners whom this incentive would influence to serve on RRF ships is unknown, and, therefore, the impact of such a program cannot be determined. In view of the continuing decline in the pool of U.S. 
mariners, we believe the central issue of how to crew RRF ships in the future has not been adequately addressed. Therefore, we believe that an annual assessment of the crewing issue could clearly identify specific actions needed to meet defense objectives related to sealift requirements. Further, the Congress has considered reemployment rights legislation three times and has twice declined to approve it; at the time this report was issued, passage remained uncertain, as did passage of the Maritime Security Program. The full text of Transportation’s comments is reproduced in appendix II.
Pursuant to a congressional request, GAO reviewed the Ready Reserve Force (RRF) program, focusing on: (1) the readiness of RRF ships to respond to large-scale contingencies; (2) the program changes that were implemented to improve ship readiness and address problems encountered during the Persian Gulf War; (3) whether the readiness level of the highest-priority ships exceeds other strategic mobility components; and (4) the effect of further decreases in the number of available U.S. merchant mariners on RRF crew availability. GAO found that: (1) as a result of the problems it encountered during the Persian Gulf War, the Maritime Administration (MARAD) identified and corrected equipment deficiencies, instituted comprehensive specifications for the deactivation and preservation of RRF ships, strengthened ship manager controls, developed automated information systems for tracking maintenance repairs, and implemented new strategies for maintaining high-priority ships; (2) RRF ships will be able to meet their delivery schedules and sail within specified time frames as a result of maintenance and repairs performed during and after the Persian Gulf War; (3) MARAD's ability to activate ships within 4 or 5 days exceeds the readiness level of other strategic mobility components; (4) the Army's ability to transfer unit equipment from key Army installations to seaports is constrained by deteriorated facilities; (5) although the Army plans to increase its capability to activate ships within 4 or 5 days, most projects will not be completed by 1999; (6) the Department of Defense has not justified maintaining 63 ships in a high state of readiness; (7) although the reduced number of available mariners should not immediately affect MARAD's ability to crew RRF ships, its future ability to crew RRF ships is questionable; and (8) none of the proposed alternatives to resolve this situation have been adopted.
Carryover balances consist of unobligated funds and uncosted obligations. Each fiscal year, NASA requests obligational authority from the Congress to meet the costs of running its programs. Once NASA receives this authority, it can obligate funds by placing orders or awarding contracts for goods and services that will require payment during the same fiscal year or in the future. Unobligated balances represent the portion of its authority that NASA has not obligated. Uncosted obligations represent the portion of its authority that NASA has obligated for goods and services but for which it has not yet incurred costs. Through the annual authorization and appropriations process, the Congress determines the purposes for which public funds may be used and sets the amounts and time period for which funds will be available. Funding provided for NASA’s Human Space Flight and Science, Aeronautics, and Technology programs is available for obligation over a 2-year period. Authority to obligate any remaining unobligated balances expires at the end of the 2-year period. Five years later, outstanding obligations are canceled and the expired account is closed. Some level of carryover balance is appropriate for government programs. In particular, NASA’s Human Space Flight and Science, Aeronautics, and Technology appropriations are available for obligation over a 2-year period. In such circumstances, some funds are expected to be obligated during the second year of availability. Funds must travel through a series of approvals at headquarters and the field centers before the money is actually put on contracts so that work can be performed. According to NASA officials, it can be difficult to obligate funds that are released late in the year. In addition, the award of contracts and grants may sometimes be delayed. Once contracts and grants are awarded, costs may not be incurred or reported for some time thereafter. 
Expenditures, especially final payments on contract or grant closeouts, will lag still further behind. Finally, costs and expenditures for a multiyear contract or grant will be paced throughout the life of the contract. For these reasons, all NASA programs have carryover balances. The unobligated balances expire at the end of their period of availability, and according to NASA officials, uncosted obligations carried over will eventually be expended to cover contract costs. Carryover balances at the end of fiscal year 1995 for Human Space Flight and Science, Aeronautics, and Technology programs totaled $3.6 billion. Of this amount, $2.7 billion was obligated but not costed and $0.9 billion was unobligated. Table 1 shows the carryover balances by program. The balance carried over from fiscal year 1995 plus the new budget authority in fiscal year 1996 provides a program’s total budget authority. The total budget authority less the planned costs results in the estimated balance at the end of fiscal year 1996. Table 2 starts with the carryover from fiscal year 1996 and ends with the balance that NASA estimates will carry over from fiscal year 1997 into 1998. The cost plans shown in the tables reflect the amount of costs estimated to be accrued during the fiscal year. The carryover balances will change if actual cost and budget amounts differ from projections. NASA program officials are in the process of updating their 1996 cost plan estimates. Officials in some programs now expect the actual costs to be less than planned, resulting in higher carryover balances at the end of 1996 than those shown in the tables. NASA often discusses and analyzes carryover balances in terms of equivalent months of a fiscal year’s budget authority that will be carried into the next fiscal year. 
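The rollforward arithmetic and the months-equivalent measure described here reduce to simple formulas. The following is an illustrative sketch, not NASA's own tool; the function names are ours, and the worked example uses the Aeronautical Research and Technology figures cited in this report:

```python
# Sketch of the carryover arithmetic described in this report.
# Dollar figures are in millions; function names are illustrative.

def end_of_year_balance(carryover_in, new_budget_authority, planned_costs):
    """Prior-year carryover plus new budget authority gives total
    budget authority; less planned costs, the estimated balance
    carried into the next fiscal year."""
    return carryover_in + new_budget_authority - planned_costs

def months_of_carryover(carryover, new_budget_authority):
    """Express a carryover balance as equivalent months of the
    fiscal year's new budget authority, at an average monthly rate."""
    monthly_rate = new_budget_authority / 12
    return carryover / monthly_rate

# Aeronautical Research and Technology, end of fiscal year 1996:
# $217.9 million carryover against $877.3 million new budget authority.
print(round(877.3 / 12, 1))                      # 73.1 (average monthly rate)
print(round(months_of_carryover(217.9, 877.3)))  # 3 (equivalent months)
```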
For example, the Aeronautical Research and Technology carryover balance of $217.9 million at the end of fiscal year 1996 is equivalent to 3 months of the $877.3 million new budget authority, based on an average monthly rate of $73.1 million. Table 3 shows each program’s carryover in equivalent months of budget authority. The carryover balances at the end of fiscal year 1995 ranged from the equivalent of 1 month for the Space Shuttle to 16 months for Academic Programs. NASA officials gave several overall reasons for the large relative differences in carryover amounts. One major reason was that programs such as the Space Station and the Space Shuttle, which have fewer months of carryover, prepare budgets based on the amount of work estimated to be costed in a fiscal year. Other programs, such as MTPE and Space Science, have based their budgets on the phasing of obligations over a number of fiscal years. Another major reason given was that some programs fund a substantial number of grants, which are typically funded for a 12-month period regardless of what point in the fiscal year they are awarded. This practice, coupled with slow reporting and processing of grant costs, contributes to higher carryover balances. Science programs such as MTPE, Space Science, and Life and Microgravity Sciences and Applications fund grants to a much greater extent than the Space Station and the Space Shuttle. NASA officials also said the size of contractors affects carryover balances, with larger contractors requiring less funding carried into the next year than smaller contractors. NASA officials gave two major reasons for MTPE’s carryover balance at the end of fiscal year 1995. First, the MTPE program has undergone several major restructurings since its inception in 1991. During the periods when the content of the program was being changed, selected program activities were restrained until the new baseline program was established. 
Since several contract start dates were delayed, the carryover balance grew. MTPE officials emphasized that all work for which funding was provided would be performed in accordance with the approved baseline and that, in most cases, the new baseline included the same end dates for major missions and ground systems. Officials expect the balances to decrease as delayed work is accomplished. The second reason given for the large carryover balance at the end of fiscal year 1995 is the large number of grants funded in the MTPE program. As discussed earlier, the process for awarding grants and delays in reporting costs on grants contributes to carryover balances. Officials from the Aeronautical Research and Technology program attributed their relatively low level of carryover to aggressively managing carryover balances. Officials have studied their carryover balances in detail and have greatly reduced their levels. In 1989, the program had a carryover balance of 43 percent, equivalent to 5 months of funding. Program financial managers analyzed their carryover and determined that it could be reduced substantially. By 1992, the carryover balance was about 25 percent, or 3 months, of new budget authority, and it is estimated to remain at that level through fiscal year 1996. In fiscal year 1997, program managers hope to achieve a 15-percent, or 2-month, carryover level. Officials attributed their improved performance to thoroughly understanding their carryover balances, emphasizing work to be accomplished and costed in preparing budgets, and carefully tracking projects’ performance. They believe that some of their methods and systems for managing carryover balances could be applied to other NASA programs. Although carryover naturally occurs in the federal budget process, NASA officials became concerned that the balances were too high. NASA is taking actions to analyze and reduce these balances. 
NASA’s Chief Financial Officer directed a study that recommended changes to reduce carryover balances. NASA’s Comptroller will review justifications for carryover balances as part of the fiscal year 1998 budget development process. A NASA steering group was tasked by NASA’s Chief Financial Officer to review carryover balances as part of a study to address NASA’s increasing level of unliquidated budget authority. The study identified a number of reasons for the current balances, including NASA’s current method of obligations-based budgeting, reserves held for major programs, delays in awarding contractual instruments, late receipt of funding issued to the centers, and grant reporting delays. The study recommended a number of actions to reduce carryover balances through improved budgeting, procurement, and financial management practices, including implementing cost-based budgeting throughout the agency and establishing thresholds for carryover balances. According to the study, cost-based budgeting takes into account the estimated level of cost to be incurred in a given fiscal year as well as unused obligation authority from prior years when developing a budget. The organization then goes forward with its budget based on new obligation authority and a level of proposed funding that is integrally tied to the amount of work that can be done realistically over the course of the fiscal year. However, the study cautioned that a cost-based budgeting strategy should recognize that cost plans are rarely implemented without changes. Therefore, program managers should have the ability to deal with contingencies by having some financial reserves. The study recommended that NASA implement thresholds for the amount of funds to be carried over from one fiscal year to the next. NASA had about 4 months of carryover at the end of fiscal year 1995, according to the study. 
It recommended that NASA implement a threshold of 3 months for total carryover: 2 months of uncosted obligations for forward funding on contracts and 1 month of unobligated balance for reserves. The study noted that carryover balances should be reviewed over the next several years to determine if this threshold is realistic. NASA’s Chief Financial Officer said the next logical step is to analyze balances in individual programs in more depth. We agree that the appropriateness of the threshold should be examined over time and that further study is needed to more fully understand carryover balances in individual programs. We also believe that individual programs must be measured against an appropriate standard. One problem with looking at carryover balances in the aggregate is that programs substantially under the threshold in effect mask large carryover balances in other programs. For example, at the end of fiscal year 1996, the total amount of carryover in excess of 3 months for seven programs is estimated to be $1.05 billion. However, the carryover balance for the Space Shuttle and the Space Station programs in the same year is estimated to be $1.03 billion under the threshold, which almost completely offsets the excess amount. We compared the balances of individual Human Space Flight and Science, Aeronautics, and Technology programs to this 3-month threshold and found that at the end of fiscal year 1995, nine programs exceeded the threshold by a total of $1.3 billion. By the end of fiscal year 1997, only four programs are expected to significantly exceed the threshold by a total of $0.6 billion. Table 4 compares individual program carryover amounts with the 3-month threshold at the end of fiscal years 1995, 1996, and 1997. As mentioned earlier, the estimates are based on projected costs for fiscal year 1996 and projected budgets and costs for fiscal year 1997. If actual costs and budgets are different, the amount of carryover exceeding the threshold will change. 
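The masking effect just described is simple netting arithmetic. A minimal sketch follows, using the aggregate fiscal year 1996 estimates cited in this report; the variable names and presentation are ours:

```python
# Sketch of how aggregate netting can hide program-level carryover in
# excess of a threshold. Dollar figures (in billions) are the fiscal
# year 1996 estimates cited in this report.

balances_vs_threshold = {
    "seven programs over threshold": +1.05,  # total carryover above the 3-month threshold
    "Shuttle and Station (under)": -1.03,    # total carryover below the threshold
}

# Netting across programs makes the agency look close to the threshold...
aggregate_excess = round(sum(balances_vs_threshold.values()), 2)

# ...while a program-level view still shows the full excess.
program_level_excess = sum(v for v in balances_vs_threshold.values() if v > 0)

print(aggregate_excess)      # 0.02
print(program_level_excess)  # 1.05
```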
The NASA Comptroller is planning to review carryover balances in each program. According to the Comptroller and program financial managers, carryover balances have always been considered a part of the budget formulation process, but factoring them into the process is difficult since budget submissions must be prepared well before the actual carryover balances are known. For example, NASA’s fiscal year 1997 budget request was prepared in the summer of 1995 and submitted to the Office of Management and Budget in the fall. At that point, NASA’s appropriations for fiscal year 1996 were not final and costs for 1996 could only be estimated. Estimates of budget authority, obligations, and accrued costs of program activities will be specifically scrutinized to ensure that the timing of the budget authority relative to accrued costs is consistent with minimal, carefully justified balances of uncosted budget authority at fiscal year end. Carryover of uncosted balances in excess of 8 weeks of cost into the next fiscal year will have to be specifically justified. The carryover referred to by the Comptroller is the equivalent of 8 weeks, or about 15 percent, of the next fiscal year’s cost. For example, the fiscal year 1996 budget, factoring in carryover from prior years, should include enough budget authority to cover all costs in 1996 plus 8 weeks of costs in fiscal year 1997. The Comptroller stressed that he was not attempting to set a threshold for the appropriate level of carryover but instead was setting a criterion beyond which there should be a strong justification for carryover. The Comptroller also told us that although the guidance specifically addressed preparation of the fiscal year 1998 budget, he has asked programs to justify carryover balances in excess of 8 weeks starting with the end of fiscal year 1996. Table 5 compares program carryover balances at the end of fiscal years 1995, 1996, and 1997 to the 8-week criterion. 
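The Comptroller's 8-week review criterion, as described here, also reduces to a simple formula. The sketch below is our hedged illustration of that rule; the function and the program figures are hypothetical, not NASA's:

```python
# Illustrative sketch of the 8-week justification criterion described
# in this report: carryover above the equivalent of 8 weeks (8/52, or
# about 15 percent) of the next fiscal year's planned cost must be
# specifically justified. The program figures below are hypothetical.

def excess_requiring_justification(carryover, next_year_cost_plan):
    """Return the portion of a carryover balance above 8 weeks of
    the next fiscal year's cost plan."""
    eight_weeks_of_cost = next_year_cost_plan * 8 / 52
    return max(0.0, carryover - eight_weeks_of_cost)

# Hypothetical program: $300 million carried over against a
# $780 million next-year cost plan (8 weeks of cost = $120 million).
print(excess_requiring_justification(300.0, 780.0))  # 180.0
print(excess_requiring_justification(100.0, 780.0))  # 0.0
```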
NASA was not able to provide cost plan data for fiscal year 1998. To approximate the 1997 carryover balances in excess of 8 weeks, we used the fiscal year 1997 cost plan. If a program’s cost plan for 1998 is higher than for 1997, the 8-week criterion would also be higher and the carryover in excess of 8 weeks would be lower. On the other hand, a lower cost plan in 1998 would result in a higher balance in excess of 8 weeks. As shown in table 5, significant amounts of carryover funding would have to be justified. In fiscal year 1995, $1.9 billion would have had to be justified. In fiscal years 1996 and 1997, the amounts requiring justification are estimated at $1.5 billion and $1 billion, respectively. We discussed a draft of this report with NASA officials and have incorporated their comments where appropriate. We reviewed carryover balances for programs in the Human Space Flight and Science, Aeronautics, and Technology appropriations as of September 30, 1995, and estimated balances as of September 30, 1996, and 1997. We relied on data from NASA’s financial management systems for our analyses and calculations and did not independently verify the accuracy of NASA’s data. We reviewed budget and cost plans and discussed carryover balances with NASA’s Chief Financial Officer; NASA’s Comptroller and his staff; and financial management staff for the MTPE, Space Science, Space Station, Space Shuttle, and Aeronautics programs. We also reviewed NASA’s internal study of carryover balances and discussed the study with the NASA staff responsible for preparing it. We performed our work at NASA headquarters, the Goddard Space Flight Center, the Jet Propulsion Laboratory, the Johnson Space Center, and the Marshall Space Flight Center. We conducted our work between April 1996 and July 1996 in accordance with generally accepted government auditing standards. 
As arranged with your office, unless you publicly announce this report’s contents earlier, we plan no further distribution of the report until 10 days after its issue date. We will then send copies to the Administrator of NASA; the Director, Office of Management and Budget; and other congressional committees responsible for NASA authorizations, appropriations, and general oversight. We will also provide copies to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix I: Frank Degnan, Vijay Barnabas, James Beard, Richard Eiserman, Monica Kelly, and Thomas Mills.
Pursuant to a congressional request, GAO reviewed the extent of carryover budget balances for the National Aeronautics and Space Administration's (NASA) Mission to Planet Earth (MTPE) program and other NASA programs. GAO found that: (1) carryover balances in NASA's Human Space Flight and Science, Aeronautics, and Technology programs totaled $3.6 billion by the end of fiscal year (FY) 1995; (2) individual programs carried over varying amounts, ranging from the equivalent of 1 month to 16 months of FY 1995 new budget authority; (3) MTPE carried $695 million, or more than 6 months, of budget authority into FY 1996; (4) under NASA's current budget and cost plans, these balances will be reduced in FY 1996 and 1997, but the actual reductions depend on the extent to which NASA's projected costs match the actual costs incurred and the amount of new budget authority received for FY 1997; (5) NASA officials are concerned that the current amounts are too high and are taking actions to reduce these balances; (6) a recent NASA study of carryover balances determined that the equivalent of 3 months of budget authority should be carried into the next fiscal year and recommended actions to bring NASA programs within that threshold, and also noted that the threshold needs to be studied over time to determine if it is appropriate; (7) applying the initial 3-month threshold to estimated carryover balances at the end of FY 1996 shows that 7 of the 11 Human Space Flight and Science, Aeronautics, and Technology programs have a total carryover of $1.1 billion beyond the threshold; (8) NASA's Comptroller intends to carefully scrutinize carryover amounts as part of the FY 1998 budget development process, and formally requested program managers to justify carryover balances that exceed amounts necessary to fund program costs for 8 weeks of the next fiscal year; (9) the 8 weeks was not a threshold for the appropriate level of carryover, but rather a criterion for identifying balances for review; (10)
at the end of FY 1996, nine programs would need to justify $1.5 billion beyond the Comptroller's 8-week criterion; and (11) the three programs with the largest estimated balances requiring justification are Space Science with $558 million, MTPE with $435 million, and Life and Microgravity Sciences and Applications with $257 million.
Congress enacted and the President signed TRIA in 2002 to help restore confidence and stability in commercial property insurance markets after private insurers withdrew terrorism coverage in the wake of the September 11 attacks. TRIA requires that commercial property/casualty insurers, including (among others) workers’ compensation insurers, “make available” coverage for certified terrorist events under the same terms and conditions as other, nonterrorism coverage. Following a terrorist attack, the federal government would reimburse insurers for 85 percent of their losses after insurers pay a deductible of 20 percent of the value of each company’s prior year’s direct earned premiums. Federal reimbursement is activated when aggregated industry losses exceed $100 million and is capped at an annual amount of $100 billion. TRIA also would cover losses caused by NBCR terrorist attacks if the insurer had included this coverage in the insurance policy. Originally enacted as a 3-year program, TRIA was reauthorized in 2005; in 2007, Congress extended the program until 2014. In the deliberations over the 2005 and 2007 Reauthorization Acts, Congress considered mandating that commercial property/casualty insurers offer coverage for NBCR risks, with significantly lower deductibles and copayments. Congress also considered adding group life insurance to the TRIA program, so that group life insurers could receive reimbursements for the majority of their claims from terrorist events, including NBCR attacks. Members of Congress supporting this provision argued that group life insurers were vulnerable to the same extraordinary losses from a terrorist attack as other insurance lines and could become insolvent after a catastrophic event. However, Treasury testified that the Administration did not want to expand TRIA to cover group life insurers, citing some reports that the group life insurance market has remained competitive after September 11. 
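The TRIA reimbursement mechanics described above (an 85 percent federal share above a deductible of 20 percent of prior-year direct earned premiums, with a $100 million industry-loss trigger and a $100 billion annual cap) can be sketched as follows. This is a simplified illustration rather than the statutory calculation, and the dollar figures in the example are hypothetical.

```python
def federal_share(insurer_losses, prior_year_premiums, industry_losses,
                  trigger=100e6, copay_rate=0.85, deductible_rate=0.20):
    """Sketch of the TRIA reimbursement described in the report.

    Federal payments begin only once aggregate industry losses exceed
    the $100 million trigger; each insurer then recovers 85 percent of
    its losses above a deductible of 20 percent of its prior year's
    direct earned premiums. (The $100 billion annual cap and other
    statutory details are omitted for simplicity.)
    """
    if industry_losses <= trigger:
        return 0.0
    deductible = deductible_rate * prior_year_premiums
    return copay_rate * max(0.0, insurer_losses - deductible)

# Hypothetical insurer: $2 billion in prior-year premiums and $1 billion
# in certified losses. Deductible = $400 million, federal share = $510 million.
print(federal_share(1e9, 2e9, industry_losses=5e9))  # prints 510000000.0
```

The deductible's link to each company's own premium base is the feature the report returns to later: larger insurers retain proportionally larger losses before any federal reimbursement begins.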
The NBCR requirement and the group life provisions were not included in the final TRIA reauthorizing legislation. Government and other experts have stated that terrorist attacks involving NBCR weapons could affect people and property in a variety of ways, depending on the weapon used as well as the location of the attack. Table 1 provides examples of attacks using NBCR weapons as well as some of their potential effects. Previous attacks involving NBCR materials in the United States and Japan also illustrate a range of consequences. In September and October 2001, letters laced with anthrax were sent through the mail to two U.S. senators and members of the media. As a result, 22 individuals contracted anthrax disease, and 5 of these individuals died. In 1984, the Rajneeshee religious cult in Oregon contaminated salad bars in local restaurants with salmonella bacteria to prevent people from voting in a local election. Although no one died, 751 people were diagnosed with the food-borne illness. In 1995, 12 people were killed and many more were injured after Aum Shinrikyo, a State Department-designated terrorist organization, released the chemical nerve agent Sarin in the Tokyo subway. States have the primary responsibility for regulating the insurance industry in the United States, and the degree of oversight varies by insurance line and state. In some lines of insurance, state regulators guide the extent of coverage by approving the wording of policies, including the explicit exclusion of some perils. Regulators coordinate their activities, in part, through NAIC. According to an NAIC representative, while practices vary by state, state regulators generally review prices for personal lines of insurance and workers’ compensation policies but not for commercial property/casualty policies.
In most cases, state insurance regulators perform neither rate nor form review for large commercial property/casualty insurance contracts because it is presumed that businesses have a better understanding of insurance contracts and pricing than the average personal-lines consumer. Reinsurers generally are not required to obtain state regulatory approval for the terms of coverage or the prices they charge. Because state laws generally require employers to carry workers’ compensation insurance, which covers employees for death or injuries as a result of a workplace incident, employers generally obtain coverage either from a private insurance company or from a fund established by the state. Twenty-six states have established separate funds, either run by the state or as a separate company, according to the American Association of State Compensation Insurance Funds (AASCIF), and most of these states provide workers’ compensation coverage for all employers seeking it. An NAIC official told us that when state governments began requiring employers to purchase workers’ compensation coverage, many states established separate funds to provide a mechanism to ensure coverage for those employers that could not obtain it in the private market. The majority of state funds are competitive, meaning that the state fund competes for business with other private insurers. However, in four states, the state fund is the sole insurer for workers’ compensation, unless an employer is permitted to self-insure. The National Academy of Social Insurance reported that in 2006, just over half of workers’ compensation benefits (50.38 percent) were paid by private insurers, with just under half coming from state funds (19.66 percent), federal programs (5.98 percent), and self-insured employers (23.98 percent). Commercial property/casualty insurers and reinsurers generally seek to exclude coverage for NBCR risks or place significant restrictions on such coverage. 
According to industry participants, insurers interpret the language of longstanding exclusions developed for nuclear and pollution risks as excluding terrorist attacks involving NBCR weapons, but the use of such exclusions may be challenged in court. Representatives of policyholders from a variety of industries, including real estate, financial services, and hospitality, also told us that they do not have NBCR coverage, either because very little NBCR insurance is available or because they do not view the rates for available coverage as reasonable. A few policyholders also reported self-insuring these risks through captive insurers. Representatives from workers’ compensation, life, and health insurers we contacted generally reported that they cover losses from terrorist attacks, including those involving NBCR materials, because they said state regulators generally do not allow these insurers to exclude such risks. While few market surveys that we identified specifically addressed the availability of property/casualty insurance for terrorist attacks involving NBCR materials, our interviews with a range of industry participants suggest that such coverage continues to be limited. Representatives from the majority of the insurers and reinsurers we interviewed said that their companies generally do not offer NBCR coverage or offer only a limited amount of such coverage. Representatives of large insurance and reinsurance trade associations, as well as national insurance brokers, also reported a general lack of coverage for NBCR risks. A representative from a large national insurance broker said he was not aware of any primary insurers that offered NBCR coverage as part of their standard property/casualty policies. The representative said that some insurers that offer “stand-alone” terrorism insurance policies offer NBCR coverage, but demand for this product is minimal due to its relatively high price and restrictions.
Although representatives of several reinsurers based in Bermuda told us that their companies offer some NBCR coverage, the reported restrictions on these policies help illustrate some of the limitations of the available coverage. For example, the policy language of one reinsurance contract we reviewed limited NBCR coverage only to losses resulting from the initial “force or violence” of the NBCR terrorist attack and did not cover long-term effects, such as resulting illnesses or business interruption. Insurance companies seek to limit their coverage for NBCR risks by relying on long-standing exclusions for nuclear and pollution risks, which already have been approved by state regulators. As we stated in our September 2006 report, insurers have written exclusions related to nuclear hazard risks into their standard policies for decades, generally to protect themselves from losses related to nuclear power accidents. Furthermore, representatives from the Insurance Services Office (ISO), a national organization for the property/casualty insurance industry that develops standardized policy language designed to comply with regulatory requirements, said that insurers also typically exclude coverage for losses caused by pollution and contamination. ISO representatives told us that the pollution exclusion was developed to exclude coverage for the release of many different substances—such as asbestos or pesticides—that could cause harm to people and the environment. Some insurance representatives said that the pollution exclusion could be applied to biological and chemical agents released in a terrorist attack. Because these exclusions were developed for other purposes, some regulators and insurance industry participants said that their use by insurers in the event of an attack involving NBCR materials could be challenged. 
Representatives from one large insurer told us that language in the nuclear hazard exclusion may not be clear enough to apply to a nuclear terrorist attack. Similarly, representatives from a large insurance company said the pollution exclusion would not apply unless the terrorist attack itself was deemed to be a polluting event. An official from the New York Insurance Department also said that the Department did not interpret the definition of “pollutants” in the standard pollution exclusion forms to apply to biological and chemical terrorist attacks. Courts determine whether a particular substance is or is not a pollutant based upon, among other things, the language in the policy, the facts and circumstances of the case, and the law of the jurisdiction. As we stated in our September 2006 report, given the potential for litigation and court interpretation, insurers and other industry experts have raised some concerns as to how effectively the pollution exclusion would protect insurers against losses resulting from an NBCR terrorist attack. Property/casualty insurers also may face potential exposure to losses from NBCR attacks as the result of state requirements, but it is difficult to assess the extent of this exposure. According to industry officials, 16 states, including California, Illinois, and New York, require property/casualty insurers to cover losses from fire following an event, regardless of the cause of the fire. As we reported in 2006, in the case of a nuclear bomb detonation, once the property was destroyed, insurers could dispute the extent to which fire (covered in “fire following” states) or the blast (excluded by the nuclear exclusion) caused the damage. However, given the potential devastation resulting from a nuclear terrorist attack, including potentially widespread destruction and protracted evacuations, it may be difficult for insurers, policyholders, regulators, and courts to resolve any issues related to the cause of loss.
Information we obtained from commercial policyholders in a range of industries across the country also indicates that property/casualty coverage for NBCR risks is very limited. For example, we interviewed representatives from real estate companies that own large, high-value commercial properties (such as office buildings or hotels) in cities, including Chicago, New York, and San Francisco, that generally are viewed as being at high risk of terrorist attack. While representatives from these companies said that they generally were able to obtain coverage for terrorist attacks that involve conventional weapons, such as truck bombs, they generally did not have NBCR coverage. In addition, results from a recent survey of risk managers conducted by the Risk and Insurance Management Society, Inc. (RIMS), show that commercial policyholders generally have not been offered NBCR coverage in their insurance policies. Although the RIMS survey has several limitations, it found that less than 15 percent of the respondents had coverage for NBCR attacks. Furthermore, representatives we contacted from industries such as transportation, hospitality, entertainment, and utilities also reported that they did not have NBCR coverage or had only limited coverage, such as for chemical risks alone. Policyholders we contacted said that they generally lacked NBCR coverage because (1) their insurers did not offer it; (2) the prices quoted on the coverage that was available were viewed as too expensive; or (3) they did not seek coverage. For example, a representative of a shopping center development company with retail locations in various cities throughout the United States said that the company is concerned about the risks of an NBCR attack and has sought insurance coverage. However, the representative said that the company has not been able to identify any insurers that would offer the company NBCR coverage or provide pricing information.
In addition, representatives of a commercial real estate developer in Washington, D.C., said that quoted insurance premiums for NBCR coverage were five times higher than their total property insurance costs. Insurance brokers we contacted told us that although some of their commercial policyholder clients have inquired about NBCR coverage, the demand for such coverage is less than that for conventional terrorism coverage. As we stated in 2006, demand for conventional terrorism coverage is high in the commercial real estate sector because mortgage lenders generally require companies to purchase coverage. However, according to brokers and a lender that we interviewed, lenders do not require companies to secure coverage for NBCR terrorist attacks because such coverage is largely unavailable. Due to concerns about the potential for NBCR attacks and the general lack of coverage offered by insurers, some policyholders said that they had established captive insurers to self-insure the risk and obtain federal reinsurance under TRIA. Captive insurers are generally established by major corporations, such as large real estate companies, to self-insure a variety of risks. Corporations may create captives for several reasons, including to obtain coverage for certain risks that may no longer be provided by the private market (such as medical malpractice insurance), access additional coverage directly from a reinsurer, or reduce tax payments. According to a representative from an insurance broker that helps companies in establishing and managing captives, companies either may add NBCR coverage to an existing captive insurer or may create one to cover NBCR risks. For example, a representative from a national real estate company told us that he had difficulty finding terrorism insurance at prices viewed as reasonable and without restrictions, so the company established a captive that covered NBCR risks. 
Although captives may help some companies limit their potential exposure from NBCR attacks, available information suggests that captives are not widely used for this purpose, perhaps because companies may lack the financial resources necessary to do so. To illustrate, 18 percent of the 39 policyholders we contacted, and 6 percent of the 377 respondents to the RIMS survey discussed earlier, reported using captives to insure NBCR risks. Unlike property/casualty insurers, workers’ compensation insurers we contacted said that they offer NBCR coverage because they generally are not permitted to exclude it under state laws and regulations. As we found in 2006, applicable state laws generally require workers’ compensation insurers to cover all perils, including NBCR risks. Under state workers’ compensation laws, employers are responsible for covering unlimited medical costs and a portion of lost earnings for injuries or illnesses that occur during the course of employment, regardless of the cause, according to NAIC. Similarly, group life insurers generally do not exclude NBCR coverage from their policies, according to regulators and industry participants. Officials from five of the six state insurance regulators that we interviewed reported that they do not allow terrorism or NBCR attacks to be excluded from life insurance policies. However, officials from these regulatory agencies also said that their states had not enacted laws that explicitly require insurers to offer such coverage. Given the lack of statutory requirements, officials from Washington, D.C., told us that group life insurers in the District could exclude NBCR risks from their coverage.
However, representatives from the American Council of Life Insurers, a national trade association for life insurers, reported that they were not aware of the use of NBCR exclusions and believed that group life insurers generally cover NBCR risks; officials from several large life insurance companies confirmed that they provided coverage. Finally, health insurers also generally cover NBCR risks, according to state regulators, representatives from America’s Health Insurance Plans, and health insurers we contacted. According to industry participants, health insurers generally are required to pay claims, regardless of the cause that led to the claim. Insurance regulatory officials from several states with locations viewed as high risk—California, Georgia, Illinois, Massachusetts, and New York—told us that they do not permit health insurers to exclude NBCR coverage from their policies. However, regulatory officials in Washington, D.C., said that health insurers were not mandated to cover NBCR risks in the District and insurers had filed policies with NBCR exclusions. In addition, a representative from one large health insurer said that the insurer would invoke the force majeure clause—a general contract provision used to relieve parties from their responsibilities due to circumstances beyond their control, such as acts of God—to exclude NBCR risks. However, representatives from two state regulators we interviewed told us they were not familiar with the force majeure clause, and an official from the Georgia Department of Insurance told us he did not think the clause would apply to terrorist acts involving NBCR materials. Commercial property/casualty insurers and reinsurers generally are not willing to provide coverage for NBCR attacks or place significant restrictions on the coverage they offer because of the uncertainties surrounding such attacks and their potential for generating catastrophic losses. 
Although private workers’ compensation insurers generally have greater flexibility than state funds to limit their exposure to losses from NBCR attacks by not offering coverage to certain employers, both private insurers and state funds may face other challenges in managing the risks associated with terrorist attacks involving NBCR weapons, such as limits on their ability to price such risks and obtain private reinsurance. Life and health insurers may also face challenges in managing NBCR risks, such as competitive market pressures and challenges in establishing appropriate premiums for their potential exposures. As we stated in our September 2006 report, many insurers view terrorist attacks, particularly attacks involving NBCR materials, as an uninsurable risk because of uncertainties about the severity and frequency of such attacks. Insurance companies typically manage and assess risk on the basis of their expected losses, using historical information about the range of damages (severity) and the number of incidents in a given period of time (frequency). For some risks, such as those related to driving automobiles, insurers have access to a substantial amount of statistical and historical data on accidents, from which they can predict expected losses and then calculate premiums that are adequate to cover these losses. Large claims from automobile accidents also generally do not affect a large number of policyholders at the same time, which serves to limit insurers’ exposures. In contrast, catastrophes, including natural disasters such as hurricanes as well as terrorist attacks, present unique challenges to insurers because they may result in substantial losses and are relatively infrequent. To address these challenges, insurers may use computer models developed internally and by outside firms to help estimate the financial consequences of various disaster scenarios and, in some cases, to develop appropriate premiums.
However, as we have previously noted, due to data limitations, estimating the potential consequences of terrorist attacks is fundamentally different from, and substantially more difficult than, forecasting natural catastrophes. For example, substantial data are available on the frequency and severity of hurricanes, but the United States has experienced relatively few terrorist attacks, particularly those involving NBCR materials. Estimates of the potential severity of attacks involving NBCR materials may be particularly difficult to produce for several reasons, according to insurance industry participants and representatives from firms that have developed computer models for catastrophe risks. For example, as we previously have discussed, a wide range of potential weapons are associated with NBCR attacks, which could result in varying amounts of property damage as well as injuries and deaths (see fig. 1). While estimates of the damage resulting from a nuclear blast in an urban area exceed the loss estimates for a chemical attack on a single building or facility, loss estimates also may vary for different types of attacks using the same agent. For example, one modeling firm has produced a scenario in which a moving truck releases anthrax in a highly populated urban area, creating total insured losses of $144 million, 20 times higher than if the anthrax were released through a sprayer inside the ground floor of a large building. Representatives of insurers and reinsurers we interviewed expressed concerns about models’ ability to account for all of the potential losses associated with an NBCR attack, such as business interruption and litigation costs, which may be difficult to quantify. In addition, a recent report by one modeling firm stated that decisions about the extent of cleanup required for nuclear and radiological contamination likely will be made after the attack, creating further uncertainties about the cost of rebuilding or remediation.
Insurers also face challenges in developing frequency estimates for NBCR attack scenarios. Representatives of risk modeling firms told us they use worldwide incidents of NBCR attacks and researchers’ opinions on terrorists’ capabilities and potential targets to develop estimates of NBCR event frequency. However, some insurance industry participants described frequency estimates of NBCR attacks as too subjective to be used as a basis for pricing coverage, because views on the frequency of attacks vary. For example, while one modeling firm stated in a recent report that its estimate of the frequency of terrorist attacks is 0.6 events per year, or roughly 2 events every 3 years, the representative of a large commercial property/casualty insurer said that his firm viewed the risk as occurring once every 8 years. Furthermore, insurance experts said that terrorists continue to adjust their strategies, thereby making past attacks a poor predictor of future events. Because insurers and reinsurers face challenges in reliably estimating the severity and frequency of terrorist attacks involving NBCR materials and setting appropriate premiums, industry representatives reported that their companies focus on the most catastrophic attack scenarios with widespread financial losses. For example, some representatives of property/casualty insurers told us that the scale of a nuclear blast could have a devastating impact on an insurer that chose to offer NBCR coverage, because such an attack could destroy or render uninhabitable many or all buildings within a large metropolitan area. In contrast, we have previously reported that since TRIA was enacted, insurers have had some ability to limit their potential losses from terrorist attacks involving conventional weapons such as truck bombs, because the damage resulting from such attacks might be confined to a smaller geographic area, such as a radius of several blocks from the attack.
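The sensitivity to frequency assumptions described above is easy to quantify. In the expected-loss framework insurers use, the pure premium is frequency times severity, so the divergent frequency views cited here (0.6 events per year versus one event every 8 years) translate directly into a large pricing gap. The severity figure in this sketch is an assumption for illustration only; conveniently, it cancels out of the comparison.

```python
def pure_premium(annual_frequency, expected_severity):
    """Expected annual loss: the starting point for an actuarial rate."""
    return annual_frequency * expected_severity

# Hypothetical expected insured loss per event; the point is how much the
# subjective frequency estimates cited in the report move the answer.
severity = 500e6
modeler = pure_premium(0.6, severity)    # modeling firm: 0.6 events/year
insurer = pure_premium(1 / 8, severity)  # insurer view: one event per 8 years
print(modeler / insurer)  # prints 4.8 -- a nearly fivefold pricing gap
```

A nearly fivefold disagreement in the premium implied by the same severity estimate illustrates why industry participants consider such frequency estimates too subjective to price against.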
Representatives of insurers we contacted told us their companies may limit their property/casualty coverage in locations viewed as at high risk for a terrorist attack, such as New York City; however, they reported that the potential losses from an NBCR attack could far exceed what their company would be able to cover. As we reported in 2006, academic experts and industry participants have pointed out that insurers have little incentive to insure catastrophic events that might jeopardize their financial soundness and solvency, so insurers remain unwilling to offer coverage for NBCR attacks. Although private and state workers’ compensation insurers generally must cover losses resulting from NBCR attacks, private companies generally have greater flexibility in managing their exposures to losses from NBCR attacks under the TRIA program. Specifically, private insurers may choose to which employers they will offer coverage. Accordingly, representatives of private insurers reported that their companies have monitored or limited coverage offerings to employers with employees concentrated in locations considered to be at higher risk for an NBCR attack. For example, representatives of smaller, more regionally based insurers said their companies decided not to offer coverage for certain employers that have employees concentrated in densely populated locations, or limited their overall coverage offerings for workers’ compensation in urban areas. In contrast to private insurers, state workers’ compensation funds generally are unable to limit their NBCR risks on the basis of employers’ perceived risk levels. State laws and regulations generally require state funds to provide coverage to all employers—regardless of their location or risk level—and serve either as the state’s sole insurer or as the insurer of last resort. 
While officials from some state funds we contacted said that they were concerned about exposure to losses from an NBCR attack, they also said the nature of their funds’ operations might limit that exposure to some degree. For example, representatives from some funds said that because they offered coverage to a significantly large group of employers of varying sizes across the state, their exposure to losses from NBCR attacks was somewhat diversified. Finally, representatives of private workers’ compensation insurers and state funds told us that they faced some challenges in managing NBCR exposures, such as pricing the risk and obtaining adequate amounts of private reinsurance. Recognizing workers’ compensation insurers’ exposure to terrorism risks, state regulators in at least 37 states, including the District of Columbia, have permitted insurers to apply a statewide surcharge, or additional premium (on average, about 1 cent per $100 of payroll), to cover the potential losses from terrorist attacks, including those involving NBCR materials. The National Council on Compensation Insurance, Inc. (NCCI), developed statewide surcharges based on the results of a model, as a way for insurers that underwrite in states that belong to NCCI to cover potential losses from terrorism, including attacks using NBCR materials. NCCI officials told us that their surcharges generally are uniform across a state and that insurers using this surcharge generally cannot levy higher surcharges on employers they perceive to be at higher risk of terrorist attack. Furthermore, NCCI’s surcharges were developed to cover potential losses from terrorist attacks involving conventional as well as NBCR weapons. Officials from the New York Compensation Rating Board, which develops workers’ compensation rate proposals for the state of New York (which does not belong to NCCI), also told us the state’s surcharges were developed to cover potential losses from both conventional and NBCR terrorist attacks.
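At the roughly 1-cent-per-$100-of-payroll rate cited above, the terrorism surcharge is a straightforward proportion of payroll. The sketch below uses a hypothetical employer payroll to show the scale of the charge, which helps explain why regulators and insurers doubt it would cover a catastrophic NBCR loss.

```python
def terrorism_surcharge(annual_payroll, rate_per_100=0.01):
    """Statewide terrorism surcharge at about 1 cent per $100 of payroll.

    The rate is the average cited in the report; actual surcharges vary
    by state and are generally uniform within a state.
    """
    return annual_payroll / 100 * rate_per_100

# A hypothetical employer with a $50 million annual payroll would pay
# roughly $5,000 per year toward terrorism losses of all kinds.
print(terrorism_surcharge(50e6))  # prints 5000.0
```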
However, as we stated in our 2006 report, state regulators and insurance representatives advised us that any surcharges that insurers may be permitted to charge for NBCR exposure likely would not cover potential losses. Similarly, representatives of private workers’ compensation insurers we contacted for this report that underwrite coverage in locations considered at high risk for terrorist attacks said that their surcharges for terrorism may not cover all of their potential exposure. In addition, representatives of many of the private insurers and some of the state funds we interviewed said that they had little to no private reinsurance for NBCR risks, and that they would rely on TRIA in the event of a catastrophic NBCR attack. In contrast to workers’ compensation insurers, life and health insurers may have somewhat more flexibility to manage the risks associated with terrorist attacks involving NBCR materials. For example, unlike workers’ compensation insurers, the prices charged by group life insurers generally are not subject to state regulatory approval. Group health insurers generally are able to negotiate the terms of health care coverage with employers and employees, unlike workers’ compensation benefits that are state-mandated. However, based on the limited amount of work we conducted, we found that for terrorist attacks involving NBCR materials, group life and health insurers face the following risk-management challenges: Group life insurers may not actively seek to limit the amount of coverage that they offer in geographic markets perceived to be at high risk of attack, according to representatives from the American Council on Life Insurance (ACLI) and several large companies we contacted. According to these officials, the group life insurance market is highly competitive, with insurers competing to cover employers, even in densely populated urban areas at risk for terrorist attacks. 
Furthermore, life insurers’ use of models to manage the risks associated with providing coverage in densely populated areas may be limited. We spoke with representatives from two group life insurers that reported that while they have started to use models to review the impact of catastrophic scenarios, they lack specific data on the location of employees from some employers to monitor their concentration of insured individuals. An ACLI representative said that group life insurers with exposures across the country may be better able to manage risks from an NBCR attack than smaller, more regional insurers with portfolio concentrations near target locations. We also previously reported on the difficulties group life insurers face in charging higher premium rates to employers perceived to be at higher risk of terrorist attacks, including attacks involving NBCR materials. Life insurers price their products on the basis of mortality tables derived from experience with prior insurance contracts and calibrated to the effects of individual characteristics, such as smoking, or group characteristics, such as occupation type. According to ACLI, group life insurance policies currently are not designed or priced to account for catastrophic financial losses and mass casualties from an unpredictable terrorist attack with an NBCR weapon. Similarly, health insurers may face difficulties in setting premium rates to address the risks of terrorist attacks, including those involving NBCR materials. For example, health insurers said that they generally price coverage on the basis of previous experience with insured populations, and that without knowing the frequency and severity of NBCR risks, they could not develop actuarially sound prices for such a risk. Furthermore, because illnesses or symptoms of illnesses resulting from NBCR attacks could take years to develop, it might be very difficult for insurers to establish appropriate premiums for such long-term risks. 
Because the current commercial property/casualty market generally lacks coverage for terrorist attacks involving NBCR materials, the two proposals we reviewed to increase the availability of such coverage focus on that market. The proposals involve the federal government assuming most or all of the associated financial liabilities of such attacks. For example, an early version of the bill to reauthorize TRIA in 2007 would have required insurers to make NBCR coverage available and would have lowered their exposure to potential losses. While such a proposal may increase the availability of NBCR insurance, some industry participants believe it would disrupt insurance markets. Alternatively, some industry participants have suggested that the federal government should fully insure losses from terrorist attacks involving NBCR materials, similar to other federal disaster insurance programs. This program could help ensure the availability of NBCR insurance, according to some industry participants, but others said the program could result in substantial losses to the federal government. The House of Representatives initially passed an early version of the 2007 reauthorization of TRIA that would have amended the act to (1) require insurers to make NBCR coverage available to policyholders, and (2) require the federal government to assume a relatively high proportion of the associated financial risk. With certain exceptions, the proposal would have required insurers to offer coverage for NBCR attacks under terms, amounts, and other coverage limitations that did not differ materially from their coverage for other types of risks. The proposal would have allowed an insurer to exclude NBCR coverage altogether (except for workers’ compensation or other state coverage requirements) or offer a separate NBCR terrorism policy at different terms, amounts, and other coverage limitations than other types of coverage, if a policyholder rejected an insurer’s initial offer for coverage. 
To compensate insurers for the risks associated with providing NBCR coverage, the proposal initially would have set insurers’ TRIA deductibles for such attacks at 3.5 percent of direct earned premiums, substantially lower than the 20.0 percent deductible insurers would pay under the current program for terrorist attacks in general. In addition, under this proposal, insurers’ copayment, or additional share of losses, for an NBCR attack would have varied depending on the size of the losses associated with the attack. In the case of a smaller NBCR attack, an insurer would have paid 15 percent of its losses after paying its deductible, and for very large NBCR attacks, 5 percent. Additionally, the proposal would have permitted insurers to voluntarily reserve some of their conventional and NBCR terrorism premiums, tax-free, in a fund maintained by Treasury to cover the TRIA deductibles or copayments associated with losses from future terrorist attacks. Given insurers’ general reluctance to provide NBCR coverage, some industry participants we contacted stated that this proposal was reasonable. For example, a representative from one insurer said that unless mandated to do so, insurers would not offer coverage for NBCR risks. Representatives from other insurers and industry participants, including regulators, told us that limiting insurer losses for NBCR events would help insurers better manage risks associated with NBCR attacks. With their financial exposures limited, insurers could more easily develop terms and conditions for NBCR coverage to policyholders and offer the coverage at lower rates. In addition, some industry participants said that the provision in the legislation allowing for separate pricing of NBCR coverage would (1) allow insurers to tailor insurance coverage and prices to the type of terrorist attack, and (2) provide policyholders with the choice of purchasing NBCR and conventional terrorism coverage together or separately. 
A recent study by the RAND Corporation found that requiring insurers to offer NBCR coverage, with the federal government assuming significant financial liability for the associated losses from large attacks, could be beneficial. For example, the RAND study stated that under such a program the number of policyholders purchasing coverage would increase substantially from current levels. Furthermore, the study concluded that the federal government’s expected outlays for compensation and assistance following attacks involving NBCR materials actually might decrease. Given that property/casualty coverage for NBCR attacks is largely unavailable, in the event of such an attack, the study noted that the federal government might decide to provide a large amount of disaster assistance or other compensation following an attack, as it has done for the victims of natural catastrophes and terrorist attacks. If insurers were required to provide some coverage for NBCR attacks, the study concluded that the federal government’s expected costs could be somewhat lower under certain conditions than otherwise would be the case. Some industry participants also suggested that insurers could use different strategies in addition to TRIA to further manage the risks associated with providing NBCR coverage, as would be mandated under this proposal. In particular, some participants said they favored insurers forming risk pools or changing tax laws to permit insurers to set aside tax-deductible reserves to offset some of the losses associated with terrorist (including NBCR) attacks, similar to provisions in the legislative proposal. We have reported that establishing a group of insurance companies to pool their assets could allow insurers to provide a greater amount of coverage for the entire market than could be provided by each individual company. 
Furthermore, as we discussed in our prior reports, allowing either a pool or individual insurers to maintain tax-deductible reserves could provide the industry with incentives to expand capacity to cover catastrophic risks, such as attacks with NBCR materials. Table 2 provides information on existing or proposed pooling arrangements in the United Kingdom and the United States that are designed to help insurers manage the risks associated with terrorist attacks involving NBCR materials or accidents involving nuclear materials. However, other industry participants cautioned that requiring insurers to provide NBCR coverage, even with the federal government assuming a relatively high percentage of the associated financial exposure, could have adverse consequences for insurance markets. For example, a variety of industry participants said that under such a mandate, insurers may be less willing to offer property/casualty coverage and may withdraw from the market or not offer coverage in areas viewed as at high risk of attack. Some industry participants expressed particular concern about the impact that such a proposal would have on smaller insurers. While this proposal substantially would have lowered the deductible for attacks involving NBCR materials, a few industry participants said that the proposed copayments for such attacks still could be substantial for smaller insurers. The officials said that smaller insurers may lack the financial capacity to cover such potential costs. In addition, some industry participants and policyholders said that this proposal could be prohibitively costly to policyholders and taxpayers. As we have previously discussed, industry participants said that estimates of the severity and frequency of terrorist attacks involve many uncertainties, making pricing difficult. 
Consequently, some industry participants said that insurers, faced with a mandate of providing NBCR coverage, might set premiums at rates they consider necessary to compensate for the risks of a catastrophic attack, which could deter many commercial entities from purchasing such coverage. For example, two researchers we contacted said that when Pool Re expanded its coverage to include NBCR risks after the September 11 attacks, prices for terrorism coverage doubled. In addition, some industry participants said that if the federal government were liable for a greater portion of insured losses resulting from an NBCR attack, then the overall costs to the taxpayer from that attack could be significant. Furthermore, although the RAND study concluded that costs to the federal government could be reduced by requiring insurers to offer NBCR coverage, the study noted that in the case of extremely large NBCR attacks, the federal government’s financial liability could be larger than if it did not participate in the market for terrorism insurance and require insurers to offer NBCR coverage. We also note that the federal government’s total costs could be higher under this option than the current situation where NBCR coverage is generally unavailable, and Congress later decided to provide additional funding to pay for uninsured losses from such an attack. Finally, information from our previous work, as well as interviews with some industry participants, raises questions about whether establishing pools or permitting insurers to maintain tax-deductible reserves materially would enhance available coverage for terrorist attacks, including those involving NBCR materials. According to industry participants and a study by a global consulting firm on a proposed pool for workers’ compensation coverage for terrorism risk, a reinsurance pool might not create new industry capacity or bring in additional capital to support writing more business. 
The study noted that if the overall industry does not have enough capital to manage the risk of an NBCR attack, then neither would an industry pool that simply combines existing industry capital in a new structure. Furthermore, we have reported that overall insurance capacity might not increase if a pool or individual insurers were allowed to establish tax-deductible reserves. Because reinsurance premiums already are tax-deductible, insurers would receive similar tax benefits from traditional reinsurance, pool reinsurance, or individual reserves. Therefore, insurers might substitute the pool reinsurance or individual reserves for their current reinsurance program, if that program includes coverage for NBCR attacks. Given concerns about the potential financial and other consequences of requiring insurers to provide NBCR coverage, some industry participants we contacted suggested that the federal government should develop a separate program to insure against such attacks. Under this proposal, the federal government would serve as insurer, covering all losses for NBCR attacks and charging premiums for providing these services. The insurance industry’s role largely would be administrative, as some industry representatives reported that the industry would have the staff, processes, and experience in place to manage such tasks. For example, insurance companies could be responsible for collecting premiums, adjusting claims, and disbursing claims payments from the government to policyholders. This proposal could be similar to other federal insurance programs shown in table 3, where the government assumes most, if not all, of the risk. These other programs generally were created because of gaps in coverage in the private market or the perception that the risks were uninsurable. 
While some industry analysts said that this proposal was the only way to ensure that NBCR coverage would be widely available, others expressed concerns about the potential costs of such a program to the federal government and its effects on the private market. With the government responsible for most, if not all, of the losses in the event of a terrorist attack involving NBCR materials, several industry participants expressed concerns about the potentially large post-disaster costs for the federal government and, ultimately, taxpayers. We note that other government disaster insurance programs have proven to be costly and have administrative challenges. For example, we have reported that while NFIP and the Federal Crop Insurance Program were created to provide affordable insurance coverage, they do not collect enough in premiums to fund potential losses from catastrophic disasters. Therefore, Congress has had to appropriate funds after disasters, such as floods, to pay catastrophic claims. Given the difficulties associated with reliably estimating the potential severity and frequency of terrorist attacks involving NBCR materials as discussed in this report, the federal government may face substantial challenges in establishing premiums sufficient to offset the risks involved in providing insurance coverage for such attacks. In addition to the large potential costs to taxpayers, industry participants expressed other concerns about the federal government assuming complete financial responsibility for potential NBCR property/casualty losses. For example, some industry participants, including regulators, did not think that the government should be responsible for all of the potential losses from an NBCR attack and that insurers could assume some of the risk. 
Furthermore, we have previously reported that some industry participants believe that too much federal government involvement in disaster relief crowds out private insurance and reduces the private market’s ability and willingness to provide insurance-based solutions to covering catastrophe risk. Finally, while insurers would play a largely administrative role under this proposal, some insurers expressed reservations about this potential responsibility because they have no experience training, equipping, and sending claims adjusters and other personnel into areas where NBCR materials have been released. We provided a draft of this report to the Department of the Treasury and NAIC for their review and comment. In their oral comments, Treasury officials said that they found the report informative and useful. They also provided technical comments that were incorporated where appropriate. NAIC provided written comments on a draft of this report, which have been reprinted in appendix II. In its comments, NAIC stated that the report was materially accurate and that it agreed with our discussion of the policy proposals for expanding NBCR coverage in the commercial property/casualty market. However, NAIC reported a philosophical difference of opinion with comments in the draft report about the ability of workers’ compensation insurers to charge risk-based premiums for attacks involving NBCR weapons. NAIC stated that our draft report contained references that implied that state insurance regulators, due to voter and legislative pressure, keep premium rates artificially low for workers’ compensation insurers rather than relying on actuarial science. NAIC disputed what it characterized as our implied contention and suggested that the recent profitability of the insurance industry indicates that premiums have not been suppressed by regulatory actions. 
We made clarifications in the draft to address certain NAIC comments, such as more fully describing the surcharges that workers’ compensation insurers may levy for covering losses from terrorist attacks, including those involving NBCR weapons. However, the draft report in no way meant to imply that state insurance regulators succumb to voter and legislative pressures in approving rates, and simply reported that workers’ compensation insurers and some regulators we contacted for both our September 2006 report and this report said that they did not believe the permissible surcharges would be sufficient to cover the potential losses associated with an NBCR attack. Given that NBCR risks may not fully satisfy the principles of insurability, as we said in our September 2006 report, statements by representatives of workers’ compensation insurers that question whether the permitted surcharges are sufficient to cover potential losses do not appear inherently unreasonable. As discussed in the final report, the permitted surcharge in many states is the same for conventional terrorist attacks and for those involving NBCR weapons and insurers generally are not permitted to levy higher surcharges for employers they perceive to be at higher risk of attack. Furthermore, we note that while NAIC reports that workers’ compensation insurers have been profitable over the past several years, this does not mean that any premiums collected from this surcharge would be sufficient to cover the losses associated with a future NBCR attack. NAIC also commented on statements in the draft report regarding the ability of group life insurers to manage exposures to NBCR risks. Specifically, NAIC said that the competitive nature of group life insurance markets has more of an impact on group life insurers’ decisions to provide NBCR coverage in their policies than any regulatory constraints. 
NAIC stated that if one group life insurer were to exclude coverage for NBCR risks, and other group life insurers did not exclude such coverage, the insurer excluding NBCR risks would be at a competitive disadvantage. NAIC concluded that employers may choose not to purchase coverage from the group life insurer that excluded NBCR risks, unless the price difference was substantial. We generally agree with NAIC that competitive market pressures may affect group life insurers’ willingness to limit NBCR coverage, and note that the argument was included in the draft provided to NAIC for its review and comment. Nevertheless, we made some adjustments to the text to ensure that this analysis was better communicated throughout the final report. NAIC also provided additional technical comments and observations that were incorporated as appropriate. We also sent excerpts of our draft report to the six state regulators discussed in this report (California, Georgia, Illinois, Massachusetts, New York, and Washington, D.C.) for their review. Three state regulators responded that they did not have any changes to our characterization of NBCR requirements in their states, and one regulator provided a technical comment that we incorporated. We also provided excerpts of the draft report to five other organizations referenced in this report, and all five responded, some with technical comments that were incorporated where appropriate. We are sending copies of this report to the appropriate congressional committees, the Department of the Treasury, NAIC, and other interested parties. The report is also available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
Our objectives were to review (1) the extent to which insurers and reinsurers offer coverage for nuclear, biological, chemical, and radiological (NBCR) attacks; (2) the factors that contribute to the willingness of insurers and reinsurers to provide coverage for NBCR attacks and their ability to manage these risks; and (3) any public policy options for expanding coverage for these risks, given current insurance market conditions. To address the first objective, we reviewed relevant studies and interviewed representatives of more than 100 organizations, including insurer and policyholder trade associations; individual policyholders; national insurance and reinsurance brokers; and insurance and reinsurance companies with knowledge of the commercial property/casualty, workers’ compensation, group life, and health insurance markets nationwide and with expertise in specific geographic markets. We also interviewed local brokers, insurance companies, and local property owners in cities and regions with locations considered to be at high, moderate, and low risk of exposure to terrorist attacks. These locations included Atlanta; Boston; Chicago; New York; San Francisco; and Washington, D.C. We selected these markets on the basis of rankings of locations by risk of terrorism exposure from the Insurance Services Office (ISO), an insurance industry analytics firm. Insurers may use these rankings, which account for cities’ risk of terrorist attacks and the potential for associated losses, as a basis for charging additional premiums for terrorism exposure, according to ISO and several regulators we contacted. We interviewed some participants in specialized insurance markets, including a nuclear pool, Bermuda reinsurers, and a national broker with expertise in environmental insurance. We spoke with representatives of policyholders that own hundreds of properties and other entities nationwide. These entities included large office towers in major U.S. 
cities, properties in proximity to high-profile federal buildings, hotels, industrial buildings, hospitals, sports stadiums, a chemical company, a railroad company, and residential properties in locations throughout the United States. In addition to one-on-one interviews, we also conducted group discussions with representatives of 14 policyholders at the annual Risk and Insurance Management Society, Inc. (RIMS), conference in San Diego, California, in April 2008. Although we selected industry participants to provide broad representation of market conditions geographically and by industry, their responses may not be representative of the universe of insurers, insurance brokers, policyholders, and regulators. As a result, we could not generalize the results of our analysis to the entire national market for commercial property/casualty, workers’ compensation, group life, and health insurance. We determined that the selection of these sites and participants was appropriate for our objectives, and that this selection would allow coverage of locations considered to be at high, moderate, and low risk of exposure to terrorist attacks, and would obtain information related to NBCR coverage for major insurers, policyholders, and other organizations to generate valid and reliable evidence to support our work. We also reviewed the Department of the Treasury’s 2005 Report to Congress, Assessment: The Terrorism Risk Insurance Act of 2002 and its results from a survey of commercial property/casualty insurers on the coverage they offered for NBCR risks. We were limited in our ability to use this information because it was unclear from the survey question whether an insurer offered NBCR coverage in one commercial property/casualty policy or in all policies. We also reviewed results from a survey of risk managers conducted by RIMS of their membership. 
However, we also were limited in our ability to use results from this survey on purchase rates of NBCR insurance as a signal for approximating overall demand because of the low response rate (approximately 10 percent) to the survey. To address the second objective, we selected large, national insurance companies to interview on the basis of their market share in the states we studied—California, Georgia, Illinois, Massachusetts, and New York as well as Washington, D.C. In the commercial property/casualty and workers’ compensation market, these national insurance companies held from 37 to 52 percent of the market share in the states we studied, according to information provided by the Insurance Information Institute. In addition, we interviewed representatives of regional insurance companies in our selected markets. We also spoke with representatives of seven reinsurance companies, including two of the largest worldwide reinsurance companies as well as risk modeling firms, state regulators, and two credit rating agencies. To select state workers’ compensation funds, we compiled and analyzed available data on workers’ compensation state funds based on information from the American Association of State Compensation Insurance Funds and the National Council on Compensation Insurance, Inc. We selected nine workers’ compensation state funds on the basis of the presence of a metropolitan city in the state; presence of cities considered at risk for terrorist attacks, developed using estimates from ISO; and type of state fund—either monopolistic (fund is the sole insurer in the state) or competitive (fund competes with private insurers to offer workers’ compensation coverage)—and its size. 
To learn more about the coverage in the group life and health insurance markets and factors affecting that coverage, we interviewed state regulators in California; Georgia; Illinois; Massachusetts; New York; and Washington, D.C., as well as officials from the American Council of Life Insurers and America’s Health Insurance Plans—two large national trade associations. We also interviewed several group life and health insurers with large shares of the market both nationally and in the selected states, as well as one large group life reinsurance company and a representative from a national brokerage firm with expertise in the reinsurance market for group life carriers. Although we selected insurers from each of the lines we studied to provide a broad representation of size and geographic scope, we could not generalize the results of our analysis to the entire population of private insurers or workers’ compensation state funds. To address the third objective, we reviewed options proposed in legislation, discussed in our prior reports or in other reports, or suggested by industry participants. We also interviewed academics, representatives from research organizations, and consumer interest groups. Although these discussions did not produce a consensus about what measures would increase the availability of NBCR coverage, for this report we focused on two proposals deemed viable by a variety of industry participants. We selected the proposal to amend the Terrorism Risk Insurance Act to require insurers to make NBCR coverage available and lower insurers’ deductibles and co-payments from a recent legislative proposal. We selected the option for the federal government to insure losses for terrorist attacks involving NBCR materials from interviews conducted with industry participants. We compiled and analyzed the views of the industry participants listed above on these two proposals and reviewed our prior reports to obtain information about other federal insurance programs. 
We did not attempt to evaluate the prospective impact of these proposals and, therefore, did not come to any conclusions about the advisability of implementing them. We conducted this audit in Atlanta; Boston; Chicago; New York; San Diego; San Francisco; and Washington, D.C., from January 2008 to December 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Wesley M. Phillips, Assistant Director; Rudy Chatlos; Andrea Clark; Katherine Bittinger Eikel; Marc Molino; Jill M. Naamane; Linda Rego; Barbara Roesmann; Kathryn Supinski; and Shamiah Woods made key contributions to this report. Terrorism Insurance: Status of Efforts by Policyholders to Obtain Coverage. GAO-08-1057. Washington, D.C.: September 15, 2008. Terrorism Insurance Availability: Initial Results on Availability of Terrorism Insurance in Specific Geographic Markets. GAO-08-919R. Washington, D.C.: July 11, 2008. Homeland Security: First Responders’ Ability to Detect and Model Hazardous Releases in Urban Areas Is Significantly Limited. GAO-08-180. Washington, D.C.: June 27, 2008. Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance. GAO-08-7. Washington, D.C.: November 26, 2007. Terrorism Insurance: Measuring and Predicting Losses from Unconventional Weapons Is Difficult, But Some Industry Exposure Exists. GAO-06-1081. Washington, D.C.: September 25, 2006. Catastrophe Risk: U.S. and European Approaches to Insure Natural Catastrophe and Terrorism Risks. GAO-05-199. Washington, D.C.: February 28, 2005. Terrorism Insurance: Effects of the Terrorism Risk Insurance Act of 2002. GAO-04-806T. 
Washington, D.C.: May 18, 2004. Terrorism Insurance: Effects of the Terrorism Risk Insurance Act of 2002. GAO-04-720T. Washington, D.C.: April 28, 2004. Terrorism Insurance: Implementation of the Terrorism Risk Insurance Act of 2002. GAO-04-307. Washington, D.C.: April 23, 2004. Catastrophe Insurance Risks: Status of Efforts to Securitize Natural Catastrophe and Terrorism Risk. GAO-03-1033. Washington, D.C.: September 24, 2003. Catastrophe Insurance Risks: The Role of Risk-Linked Securities and Factors Affecting Their Use. GAO-02-941. Washington, D.C.: September 24, 2002. Terrorism Insurance: Rising Uninsured Exposure to Attacks Heightens Potential Economic Vulnerabilities. GAO-02-472T. Washington, D.C.: February 27, 2002. Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers. GAO-02-199T. Washington, D.C.: October 24, 2001. Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers. GAO-02-175T. Washington, D.C.: October 24, 2001.
The Terrorism Risk Insurance Act of 2002 (TRIA) is credited with stabilizing insurance markets after the September 11, 2001, attacks by requiring insurers to offer terrorism coverage to commercial property owners (property/casualty insurance), and specifying that the federal government is liable for a large share of related losses. While TRIA covers attacks involving conventional weapons, insurers may invoke exceptions to exclude coverage for attacks with nuclear, biological, chemical, or radiological (NBCR) weapons, which has raised concerns about the potential economic consequences of such attacks. TRIA's 2007 reauthorization directed GAO to review (1) the extent to which insurers offer NBCR coverage, (2) factors that contribute to the willingness of insurers to provide NBCR coverage, and (3) policy options for expanding coverage for NBCR risks. To do this work, GAO reviewed studies and reports and interviewed more than 100 industry participants about the availability of NBCR coverage in the market. GAO provided a draft of this report to the Department of the Treasury and the National Association of Insurance Commissioners (NAIC). Treasury and NAIC said that they found the report informative and useful. NAIC did express what it said was a philosophical difference of opinion with GAO's characterization of risk-based premiums for workers' compensation insurers. Consistent with the findings of a September 2006 GAO report on the market for NBCR terrorism insurance, property/casualty insurers still generally seek to exclude such coverage from their commercial policies. In doing so, insurers rely on long-standing standard exclusions for nuclear and pollution risks, although such exclusions may be subject to challenges in court because they were not specifically drafted to address terrorist attacks. 
Commercial property/casualty policyholders, including companies that own high-value properties in large cities, generally reported that they could not obtain NBCR coverage. Unlike commercial property/casualty insurers, insurers in workers' compensation, group life, and health lines reported generally providing NBCR coverage because states generally do not allow them to exclude these risks. Commercial property/casualty insurers generally remain unwilling to offer NBCR coverage because of uncertainties about the risk and the potential for catastrophic losses, according to industry participants. Insurers face challenges in reliably estimating the severity and frequency of NBCR attacks for several reasons, including accounting for the multitude of weapons and locations that could be involved (ranging from an anthrax attack on a single building to a nuclear explosion in a populated area) and the difficulty or perhaps impossibility of predicting terrorists' intentions. Without the capacity to reliably estimate the severity and frequency of NBCR attacks, which would be necessary to set appropriate premiums, insurers focus on determining worst-case scenarios (which with NBCR weapons can result in losses that would render insurers insolvent). For example, a nuclear detonation could destroy many insured properties throughout an entire metropolitan area. Workers' compensation, group life, and health insurers that generally cannot exclude NBCR coverage from their policies also face challenges in managing these risks. For example, workers' compensation insurers said they face challenges in setting premiums that they believe would cover the potential losses associated with an attack involving NBCR weapons. GAO reviewed two proposals that have been made to address the lack of NBCR coverage in the commercial property/casualty market. 
The first proposal, part of an early version of the bill to reauthorize TRIA in 2007, would have required insurers to offer NBCR coverage, with the federal government assuming a greater share of potential losses than it would for conventional attacks. Some industry participants supported this proposal because insurers otherwise would not offer NBCR coverage and because a substantial federal backstop was necessary to mitigate the associated risks. However, others said that some insurers might withdraw from the market if mandated to offer NBCR coverage, even with a substantial federal backstop. In a second proposal by some industry participants, the federal government would assume all potential NBCR risks through a separate insurance program and charge premiums for doing so. However, critics said the government might face substantial losses on such an NBCR insurance program because it might not be able to determine or charge appropriate premiums.
Today’s consumers are demanding more—and more detailed—health information, and are becoming more active in making medical and lifestyle decisions that affect them. The demand for health information has climbed steadily in the past 5 to 10 years. In the early 1990s, for example, mail inquiries to the Public Health Service’s information clearinghouses rose by over 40 percent, and telephone inquiries more than doubled. Public libraries reported in 1994 that 10 percent of all reference questions were health-related, accounting for about 52 million inquiries annually. Despite this interest, however, in a 1994 survey published by the Medical Library Association, almost 70 percent of the respondents reported problems in gaining access to appropriate health information. When queried, 60 percent said that they would be willing to pay for an easy way to access an integrated resource to provide such health and wellness information. The need for information is particularly apparent in self-care situations, for example when dealing with one’s own minor injury or illness. About 80 percent of all health care involves problems treated at home, according to the president of Healthwise, Inc., a nonprofit center for health care promotion and self-care research and development. Effective management of these problems can prevent the illness or injury from progressing to the point of needing professional intervention. However, consumers’ self-treatment must follow the correct self-diagnosis or benefits from automated dissemination of information could be negated and overall health could be harmed. The increasing demand for health information has driven the development of consumer health informatics systems. In fact, a number of informatics systems were developed by individuals who were frustrated by their inability to find needed information about their own health conditions or those of family members or friends. 
Several hundred informatics systems—using a range of technologies, from telephones to interactive on-line systems—have been developed in the past decade alone. Over half of the projects we identified were in operation for 2 years or less, or were still in the very early stages of development. Advances in technology also make access to consumer health information easier, responding to this increasing consumer demand. In 1995, as reported by the Council on Competitiveness, 37 percent of U.S. households had computers; that number was expected to reach 40 percent by the beginning of 1996. The use of technology in schools is also on the rise. According to Quality Data, Inc., the number of computers in the nation’s classrooms has grown steadily just in the past few years, reaching about 4.1 million for the 1994-1995 school year. (In contrast, about 2.3 million computers were in our nation’s classrooms in the 1991-1992 school year.) Growth has likewise been rapid in the use of the Internet and commercial on-line computer services. The Congressional Research Service has called the Internet “the fastest growing communications medium in history.” The number of Internet users has doubled yearly since 1988; between 1993 and 1994 that number rose from 15 million to 30 million people. Consumer health informatics is the union of health care content with the speed and ease of technology. Informatics systems provide health information to consumers in a wide range of settings. While many people access health information through personal computers in their homes, others access these systems in more public locations such as libraries, clinics, hospitals, and physicians’ waiting rooms. 
Informatics supports consumers’ ability to obtain health-related information through three general types of systems—those that simply provide information (one-way communication), those that tailor specific information to a user’s unique situation (customized information), and those that allow users to communicate and interact either with health care providers or other users (two-way communication). I’d like to offer some examples of each of these three general types of systems that are being used in informatics today. First, examples of providing information in one direction include on-line health-related articles, and computer software containing health encyclopedias or specific simple medical instructions, such as how to inject insulin; telephone-based systems that can be automatically connected to databases to call individuals with appointment reminders also fall into this category. Second, tailoring specific lifestyle recommendations aimed at improving one’s health can be accomplished with automated systems that request information from the consumer—via a questionnaire dealing with current health habits (such as exercise or smoking) and individual and family health history, for example. Information obtained in this way can then be analyzed, scored according to a set standard, and fed back to the user in the form of recommendations for improved health management. Finally, interactive communication is available through on-line discussion groups, which offer the chance for those seeking information on certain health topics or concerns to communicate with other users or a physician or other health care provider. Systems vary a great deal in terms of the technology employed, costs, and sponsors. The kinds of technologies used in the 78 projects we surveyed included (1) telephones and voice systems, (2) computers, software, and on-line services, and (3) interactive televisions and videotape. 
(Attachment 1 at the end of this statement provides a sample showing the range of projects included in our review.) The development costs we were able to identify ranged from very little to $20 million, and maintenance costs at the high end reached $1.5 million annually (most cost information was proprietary). One factor affecting cost is whether existing equipment and personnel resources can be utilized. According to an expert from the University of Montana, a low-cost, Internet-type system was developed by students there as a class project, with the university providing the equipment. More complex systems that permit user interaction are usually among the most expensive. For example, Access Health, Inc., contracts with insurers, managed care programs, and employers to provide advice on illness prevention, disease management, and general health information to their enrollees and employees. The company employs close to 500 people, including nurses and technical support personnel; it reports that it has spent about $20 million on systems development over the last 7 years. Since informatics is a new field, only limited research has been performed to confirm its full monetary benefits. Some studies have shown, however, that informatics offers the potential to reduce some unnecessary medical services, thereby lowering health care costs. Information technologies also offer other advantages over hard-copy text material; for example, a consumer can more readily review material at his or her own pace and at the needed level of detail. The Shared Decision-making system, an interactive video program, was developed to help patients participate in treatment decisions; evaluators have also reported potential cost savings. According to its developer, the system helps educate the consumer, allowing patients and doctors to function together as a team. 
An evaluation of one program in this system—dealing with noncancerous prostate enlargement—found a 40-percent drop in elective surgery rates. According to the Agency for Health Care Policy and Research, potential cost savings could be substantial, as this is the second most common surgical procedure performed in the Medicare population. Cleveland’s ComputerLink—developed to help support Alzheimer’s caregivers by reducing their feelings of isolation—can also help save money, according to researchers at Case Western Reserve University, where it was developed. This is because when caregivers are provided access to such systems and other community-based services, according to the researchers, they tend to need fewer traditional health services, potentially saving taxpayers thousands of dollars. Other advantages cited by developers and system users include anonymity—increased ability to remain unknown while dealing with personal or sensitive information, allowing a more accurate health picture to emerge; outreach—improved access by those in rural or underserved areas; convenience—ability to use the system at any time, day or night; scope—increased ability to reach large numbers of people; and support—ease of establishing on-line relationships with others. In response to our on-line survey of Internet consumers, we found that consumers value support groups for many different reasons. One Internet user said he gains support and understanding from his on-line friends, who know exactly what his disease—Chronic Fatigue Syndrome—is like. 
Another Internet user said she obtains information electronically that she cannot easily get from other sources about what she called “the true facts from real people living the nightmare of ovarian cancer.” Similarly, a homebound caregiver of an Alzheimer’s patient described ComputerLink as her “lifeline to sanity.” Finally, an individual said he gained “immense benefit” from hearing of the experiences of fellow prostate-cancer sufferers, adding his belief that “accessing this material saved my life.” Informatics systems do not and cannot replace visits with physicians; they can, however, make such encounters more productive, for both doctor and patient. Such systems can also prepare doctors to more effectively treat certain patients. For example, doctors were able to diagnose alcoholism with the help of a pre-visit questionnaire because patients tended to be more candid with the computer, which many see as “nonjudgmental.” Specifically, in the case of one patient, doctors’ notes indicated that the patient “uses alcohol socially”; in contrast, the computer found that the patient had monthly blackouts. Likewise, a computer questionnaire identified more potential blood donors who had HIV-related factors in their health histories than did personal interviews by health care providers. While it is not difficult to find consumers and groups who endorse this technology, there are—in the opinions of the experts we interviewed—several issues raised by the rapid growth of informatics, issues that need to be resolved in the coming years. In survey interviews and at our conference last winter, the experts identified specific issues that will need to be addressed concerning the future development of consumer health informatics, and options for addressing them. The three issues identified as most significant were access, cost, and information quality. 
The other five issues raised dealt with security and privacy, computer literacy, copyright, systems development, and consumer information overload. (Attachment 2 shows the experts’ views on the significance of these issues.) Some health informatics systems are available only to those with computers, modems, and telephones, which raises the issue many experts considered central: access. About 60 percent of U.S. households lack computers, and at least 6 percent lack telephones. Other identified issues involving access include physical barriers, such as those affecting residents of remote or rural areas, and those affecting individuals dealing with physical handicaps. The next issue, cost, was seen as affecting the consumer’s use of informatics in terms of expenses associated with purchasing software, fees for using on-line services, and, for some, transportation costs to a library or other public source of information. The costs of developing informatics systems were also important to the experts: these issues included how much funding is needed, where funding comes from, and the cost of keeping up-to-date with changes in technology. Information quality was also seen as a very significant issue because the information in informatics systems could be incomplete, inaccurate, or outdated. According to one expert, CD-ROMs in use with current dates could in reality be based on much earlier, out-of-date research. Also identified was the potential for biased information that may have been developed by a person or organization with a vested interest. Another risk is that consumers could take information out of context or misapply it to their own medical situations. If they were to act on such information without first checking with a qualified medical professional, harmful health consequences could result. Experts discussed several options for addressing these issues, ranging from applying broad practices to following more specific suggestions. 
One solution, establishing public- and private-sector partnerships, addresses all three issues, especially access. To illustrate: the Newark (N.J.) Public Schools joined with the University of Medicine and Dentistry of New Jersey and a private, nonprofit corporation to provide technology to people lacking access to computers. In addition to using their own resources, in 1994 and 1995, this group was awarded a total of over $200,000 in federal grants. Public- and private-sector leaders noted that the project was an effective approach for ensuring access and one that could be replicated in other communities. Experts also indicated that federal, state, and local governments—as well as universities and venture capitalists—could support research to further demonstrate the costs and benefits of consumer informatics. Specific suggestions were also provided to address the quality issue. Peer reviews of informatics systems could help ensure quality, or a consortium of experts in a field could be used, involving government and private-sector representation, to establish quality guidelines. The experts also suggested that ways could be found to notify consumers if information is from an unknown source. Five other issues were seen as somewhat less critical but still needing attention. Security and privacy were seen as important, particularly with on-line networks, where consumers may wish to share personal information anonymously. Further, experts felt that while copyright laws protect the proprietary nature of systems so that others will not be able to unfairly reap the rewards that rightfully belong to developers, at the same time copyright restrictions can slow the immediate availability of information to the consumer. In the area of systems development, the experts noted issues with compatibility, infrastructure, and standardization. 
When hardware or software incompatibilities exist, information transfer among systems is hindered because it is difficult for the different media to communicate and exchange information without programming changes or additional hardware. Further, no nationwide infrastructure exists to link information from hospitals, clinics, and physicians’ offices, making it difficult to share critical health-related and patient information. And standardization refers to the computer file formats in which patients’ health data are stored; various providers use different information systems, further hindering data-sharing. Finally, information overload and computer literacy were identified as issues related to the consumer. Mr. Chairman, we are a nation with a wealth of information—and on-line information contributes to this situation. Experts indicated that on-line information could overwhelm the consumer and provide him or her with too much technical information to comfortably handle. Most experts also felt that although systems are becoming more user-friendly, some people still fear computers and other technologies. Experts also noted specific options for addressing these issues. Sound systems development practices, along with helping ensure that a project is well-designed, can also significantly help safeguard the data. Carefully assessing and identifying user needs will also help develop a system that is user-friendly and accommodates the target users’ needs, which can increase consumers’ comfort levels with using new technology. The federal government in general—and the Department of Health and Human Services (HHS) in particular—are actively involved in consumer health informatics. This involvement takes the form of project development and testing, providing sources of consumer health information, funding clearinghouses and information centers, and providing grants to organizations that produce informatics systems. 
(Attachment 3 lists a sample of government agencies involved in these activities.) HHS is charged with controlling disease and improving the health of Americans, and includes consumer information and education among its activities to accomplish this. Many agencies within HHS also have central roles related to consumer health information and services. These include the Health Care Financing Administration (HCFA), the Centers for Disease Control and Prevention, the National Institutes of Health, the Food and Drug Administration, and the Agency for Health Care Policy and Research. Outside of HHS, other agencies having components that deal with health information include the Departments of Agriculture, Commerce, Defense, Energy, and Labor. As an example of federal involvement, last December HCFA awarded a 1-year grant to the University of Wisconsin’s Comprehensive Health Enhancement Support System (CHESS), which supports Medicare patients diagnosed with early-stage breast cancer. Patients choosing to participate are provided with computers in their homes containing the CHESS software, which includes detailed health-related articles and the ability to communicate with medical experts and support groups. The project will review the impact of this system on participants’ health and treatment decisions and will help determine the appropriateness of this technology for the Medicare population. States and local communities are also supporting projects that use technology to disseminate health information to their residents. One large-scale undertaking is the John A. Hartford Foundation-sponsored Community Health Management Information System (CHMIS). Collaborating with several states and local health care organizations, CHMIS provides a community network of health care information and may provide an initial infrastructure that could be used to disseminate consumer information more widely. 
As an example of local involvement in informatics, Fort Collins, Colorado, has developed its own system, called FortNet, providing health and other kinds of information for city residents. Fort Collins contributed over $60,000, to which federal and private contributions were added. A similar project exists in Taos, New Mexico, where the local community enjoys free access to on-line resources that include directories of local health providers. The system is funded by federal, state, and local contributions, including those of the University of New Mexico. As for the future, HHS has sent a report to the Vice President containing recommendations for federal activities that will enhance the availability of health care information to consumers through the National Health Information Infrastructure, a project that is being jointly undertaken by 14 private companies and nonprofit institutions and the federal government. The National Institute of Standards and Technology has awarded the C. Everett Koop Foundation a grant totaling $30 million—half in government funds and half in matching private funds—to develop the tools needed for such an information network. On the state level, Washington plans to develop an automated system containing clinical information and other medical data; it will be made available to all state residents. Local involvement in consumer health informatics is expected to continue as well. For example, the local communities involved in CHMIS projects plan to provide expanded services over the established networks—additional content areas to serve the health information needs of their consumers. HHS and other consumer health experts have recognized that federal coordination of government activities in consumer health informatics needs to be improved; while no single, comprehensive inventory of all federal activity exists for this new field, many federal agencies have plans for greater coordination and evaluation of consumer health informatics. 
For example, HHS’ National Institutes of Health plans to consolidate on-line information for its various institutes. Through its Gateway project, HHS is developing a database that is expected to contain hundreds of publications on health topics. The agency is also involved in developing guidelines for evaluating informatics projects; such an evaluation could be of value in helping the government determine how it is investing in technology in this area. Mr. Chairman, informatics is a young and emerging field, and systems have grown rapidly in a very short time; they are clearly providing benefits to many. As the use of informatics systems increases, the benefits and issues will become more apparent. Measuring these benefits and determining how we will deal with some of the issues raised by the experts will be necessary to ensure that consumers receive the best information possible. This concludes my prepared statement. I would be happy to respond to any questions you or other members of the Subcommittee may have at this time.
GAO discussed the emergence of consumer health informatics. GAO noted that: (1) the demand for health-related information has increased steadily in the past 5 to 10 years; (2) many consumers have reported problems in gaining access to appropriate health information, especially in self-care situations; (3) several hundred informatics systems have been developed in the past decade, but most systems are still in early stages of development; (4) consumers are able to obtain health-related information through one-way communications, tailor specific information to unique situations, or communicate with health care providers through two-way communications systems; (5) more complex systems that permit user interaction are usually the most expensive; (6) consumers are able to reduce unnecessary medical services and lower health care costs by accessing health informatics systems; (7) these systems also help health care providers to more effectively treat certain patients; (8) the most significant issues that need to be addressed include system access, system development cost, and information quality; (9) there is no nationwide infrastructure to link information from hospitals, clinics, and physicians' offices; (10) states and local communities are supporting projects to disseminate health information to their residents; and (11) many federal agencies are planning greater coordination and evaluation of consumer health informatics.
Ammonium nitrate products are manufactured and sold in various forms, depending upon their use. For example, ammonium nitrate fertilizer may be produced and sold in liquid form or as solid granules. According to The Fertilizer Institute, solid ammonium nitrate fertilizer is used heavily by farmers in Alabama, Missouri, Tennessee, and Texas primarily on pastureland, hay, fruit, and vegetable crops. In addition to its agricultural benefits, ammonium nitrate can be mixed with fuel oil or other additives and used by the mining and construction industries as an explosive for blasting. While ammonium nitrate can increase agricultural productivity, use of this chemical poses a safety and health risk because it can intensify a fire and, under certain circumstances, explode. Ammonium nitrate by itself does not burn, but it increases the risk of fire if it comes in contact with combustible materials. Ammonium nitrate that is stored in a confined space and reaches high temperatures can explode. An explosion is more likely to occur if ammonium nitrate is contaminated by certain materials, such as fuel oil, or if it is stored in large stacks. Because of ammonium nitrate’s potential to facilitate an explosion, facilities storing ammonium nitrate may pose a security threat in part because it can be used to make weapons. Ammonium nitrate fertilizer has been used by domestic and international terrorists to make explosive devices. For example, on April 19, 1995, ammonium nitrate fertilizer—mixed with fuel oil—was used by a domestic terrorist to blow up a federal building in Oklahoma City, Oklahoma. The explosion killed 168 people and injured hundreds more. Ammonium nitrate has been involved in several major chemical accidents over the past century, including explosions in the United States and Europe. 
In addition to killing at least 14 people and injuring more than 200 others, the explosion in West, Texas, severely damaged or destroyed nearly 200 homes; an apartment complex; and three schools that were, at the time, unoccupied (see fig. 1). Prior to that incident, an explosion in 1994 involving ammonium nitrate at a factory in Port Neal, Iowa, killed four workers and injured 18 people. In 1947, explosions aboard two ships holding thousands of tons of ammonium nitrate fertilizer killed more than 500 people, injured approximately 3,500, and devastated large areas of industrial and residential buildings in Texas City, Texas. In Europe, accidents involving ammonium nitrate have occurred in Germany, Belgium, and France. A 1921 accident in Germany and one in Belgium in 1942 caused hundreds of deaths after explosives were used to break up piles of hundreds of tons of ammonium nitrate, resulting in large scale detonations. In France, a ship carrying more than 3,000 tons of ammonium nitrate exploded in 1947, a few months after the Texas City disaster, after pressurized steam was injected into the storage area in an attempt to put out a fire. In 2001, an explosion at a fertilizer plant in Toulouse, France, involving between 22 and 132 tons of ammonium nitrate resulted in 30 deaths, thousands of injuries requiring hospitalization, and widespread property damage. Past accidents also indicate that smaller quantities of ammonium nitrate can cause substantial damage. For example, in 2003, an explosion of less than 6 tons of ammonium nitrate in a barn in rural France injured 23 people and caused significant property damage. OSHA and EPA play key roles in protecting the public from the effects of chemical accidents, with EPA focusing on the environment and public health and OSHA focusing on worker safety and health. 
Under the Occupational Safety and Health Act of 1970 (OSH Act), OSHA is the federal agency responsible for setting and enforcing regulations to protect workers from hazards in the workplace, including exposure to hazardous chemicals. In addition, the Clean Air Act Amendments of 1990 designated roles for both OSHA and EPA with respect to preventing chemical accidents and preparing for the consequences of chemical accidents. In response to requirements in this act, OSHA issued Process Safety Management (PSM) regulations in 1992 to protect workers engaged in processes that involve certain highly hazardous chemicals, and EPA issued Risk Management Program (RMP) regulations in 1996 to require facilities handling particular chemicals to plan how to prevent and address chemical accidents. The PSM and RMP regulations each apply to processes involving a specified list of chemicals above threshold quantities, and require covered facilities to take certain steps to prevent and prepare for chemical accidents. However, neither OSHA’s PSM regulations nor EPA’s RMP regulations cover ammonium nitrate. The Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA) establishes authorities for emergency planning and preparedness and emergency release notification reporting, among other things. Under section 312 of EPCRA and EPA regulations, facilities with certain hazardous chemicals in amounts at or above threshold levels— including ammonium nitrate in some circumstances—are required to annually submit chemical inventory forms to state and local authorities to help emergency response officials prepare for and respond to chemical incidents. 
For purposes of enhancing chemical facility security, the Department of Homeland Security (DHS) Chemical Facility Anti-Terrorism Standards (CFATS) program requires facilities possessing certain chemicals at or above threshold quantities—including some types of ammonium nitrate—to submit reports to DHS with information about the facility and the regulated chemicals present on site. Among other things, DHS collects information on the quantities of certain hazardous chemicals held at facilities, the location of the facilities, and their industry codes. DHS set different threshold quantities for reporting based on the type of ammonium nitrate and the type of security risk presented (see table 1). Not all facilities with ammonium nitrate, however, are required to file CFATS reports with DHS. First, facilities are only required to report if they are holding amounts equal to or greater than threshold quantities of specific types of ammonium nitrate. Also, DHS does not require certain agricultural producers to report their chemical holdings to DHS. In addition, DHS’s reporting threshold for ammonium nitrate fertilizer only applies to quantities held in transportable containers such as cylinders, bulk bags, bottles (inside or outside of boxes), cargo tanks, and tank cars. Finally, there are several statutory exemptions to CFATS requirements. Specifically, CFATS does not apply to public water systems or treatment works, any facility that is owned or operated by the Department of Defense or the Department of Energy, facilities regulated by the Nuclear Regulatory Commission, or facilities covered by the Maritime Transportation Security Act of 2002 administered by the Coast Guard. Other federal agencies regulate different aspects of the use of hazardous chemicals. 
For example, the Department of Transportation regulates the transport of hazardous materials, the Coast Guard inspects containers of hazardous materials at ports and waterways, and the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) in the Department of Justice regulates the manufacture, distribution, and storage of explosive materials, including blasting agents and other explosive materials containing ammonium nitrate. State and local government agencies are also involved in regulating hazardous chemical facilities under federal laws and their own state or local laws. Federal laws may authorize or assign state and local governments certain roles and responsibilities for overseeing chemical facilities. For example, as permitted by the OSH Act, OSHA has approved state plans that authorize about half the states to operate their own occupational safety and health programs. As a result, private sector workplaces in 21 states and Puerto Rico are regulated and inspected by state occupational safety and health agencies rather than OSHA. Similarly, EPA has delegated its authority to implement and enforce the Risk Management Program to nine states and five counties. As previously mentioned, both state and local governments play a role in implementing EPCRA, which requires covered facilities to report basic information about their hazardous chemical inventories to certain state and local authorities, including estimates of the amounts of chemicals present at facilities. In addition, state and local governments may establish and enforce their own laws, regulations, or ordinances to protect the public from chemical accidents. For example, state and local governments may adopt and enforce fire codes or zoning laws that specify how far chemical facilities must be located from residential areas. 
The Executive Order issued on August 1, 2013 established a Chemical Facility Safety and Security Working Group co-chaired by the Secretary of Homeland Security, the Administrator of EPA, and the Secretary of Labor. The Executive Order includes directives for the working group to: improve operational coordination with state and local partners; enhance federal agency coordination and information sharing; modernize policies, regulations, and standards; and work with stakeholders to identify best practices. The order includes tasks focused specifically on ammonium nitrate. Specifically, it directs the Secretaries of Homeland Security, Labor, and Agriculture to develop a list of potential regulatory and legislative proposals to improve the safe and secure storage, handling, and sale of ammonium nitrate. In addition, the Department of Labor and EPA are directed to review the chemical hazards covered by the RMP and PSM regulations and determine whether they should be expanded to address additional hazards. The Organisation for Economic Co-operation and Development (OECD), an intergovernmental organization with 34 member countries, issued guidance in 2003 on the prevention of, preparedness for, and response to chemical accidents. This publication was developed with other international organizations active in the area of chemical accident safety, such as the World Health Organization. The document—OECD Guiding Principles for Chemical Accident Prevention, Preparedness and Response—includes detailed guidance for industry, public authorities, and the public on how they can help prevent chemical accidents and better respond when accidents occur. The total number of facilities in the United States with ammonium nitrate is not known because of the different reporting criteria used by different government agencies, reporting exemptions, and other data limitations. While the total number is unknown, over 1,300 facilities reported having ammonium nitrate to DHS. 
DHS’s data, however, do not include all facilities that work with ammonium nitrate, in part because some facilities, such as farms, currently do not have to report to DHS and, according to DHS officials, other facilities that are required to report may fail to do so. As of August 2013, 1,345 facilities located in 47 states reported to DHS under CFATS that they had ammonium nitrate. The facilities that reported to DHS as having reportable quantities of ammonium nitrate were most often engaged in supplying and supporting the agriculture and mining industries. Many of these facilities were concentrated in the South. About half of these facilities were located in six states: Alabama, Georgia, Kentucky, Missouri, Tennessee, and Texas. Table 2 shows the number of facilities that reported to DHS that they had ammonium nitrate and the number of states in which they were located. Our review of additional state data, including EPCRA data, from Texas and Alabama, which have different reporting criteria than CFATS, indicated that there are more facilities with ammonium nitrate than those that report to DHS. We compared the data these states provided to the data on facilities that reported to DHS under CFATS. In these two states, we found that the data from each of the sources differed and that no single count of such facilities, whether from the state or DHS, represented a comprehensive picture of facilities with ammonium nitrate. For Texas, we reviewed three sources of data on facilities that have ammonium nitrate: (1) EPCRA data from the Texas Department of State Health Services; (2) a list of facilities that registered with the Office of the Texas State Chemist as having plans to produce, store, or sell ammonium nitrate; and (3) DHS’s CFATS data. We compared data from all three of these sources and found 189 facilities that reported having ammonium nitrate (see fig. 2). Of these 189 facilities, 52 filed CFATS reports with DHS. 
Data were not readily available to determine whether the remaining facilities were required to file CFATS reports. DHS officials told us the agency has begun an effort to obtain lists of chemical facilities the states have compiled and compare them with its CFATS data to identify facilities that should have filed CFATS reports but did not. This effort is still under way. As shown in figure 2, 17 of the 189 facilities in Texas were listed in all three data sources. For Alabama, we reviewed data from two sources on facilities that reported having ammonium nitrate: (1) EPCRA data from Alabama’s Department of Environmental Management, and (2) DHS’s CFATS data. From these two sources, we found 91 facilities that reported having ammonium nitrate— 57 that filed EPCRA reports with the state Department of Environmental Management and 71 that filed CFATS reports with DHS. Thirty-seven of the facilities filed reports with both the state and DHS. (See fig. 3.) Our analysis of federal trade data collected by DHS’s Customs and Border Protection agency also suggests that a greater number of facilities have ammonium nitrate than those that reported to DHS under the CFATS program. Using the data from the Customs and Border Protection agency, we identified 205 facilities that imported ammonium nitrate products and 81 facilities that exported ammonium nitrate products in fiscal year 2013. The majority of these facilities reported importing or exporting mixtures of ammonium nitrate and calcium carbonate or mixtures of urea and ammonium nitrate. Eight of these facilities filed CFATS reports with DHS. Moreover, we found about 100 facilities that imported or exported a form of ammonium nitrate that may be subject to DHS’s CFATS requirements for reporting quantities over 2,000 pounds but did not file a report. These facilities, however, may not be required to file CFATS reports. 
For example, they may meet one of the statutory exemptions, or the composition of their ammonium nitrate (or their ammonium nitrate mixture) may not trigger the reporting requirements. Data were not readily available to determine whether they met all of DHS’s reporting requirements for the CFATS program. In addition, according to DHS officials, other data limitations could explain some of the differences between the CFATS data and the federal trade data. For example, facilities may submit reports to the different agencies using different names and addresses. According to DHS, different people in the facility may prepare the different reports; the facility may define the perimeters of each site differently; or the corporate structure or nomenclature may have changed from the time one report was submitted to the next reporting period. The total number of facilities with ammonium nitrate is also difficult to determine because of the variation in reporting criteria, including exemptions for some facilities from reporting to either their state or to DHS. For example, farmers could be exempt from reporting under both EPCRA and CFATS because EPCRA’s reporting requirements do not apply to substances used in routine agricultural operations and DHS does not currently require certain agricultural producers to report their chemical holdings to DHS. In addition, DHS’s reporting threshold for ammonium nitrate fertilizer only applies to quantities held in transportable containers such as cylinders, bulk bags, bottles (inside or outside of boxes), cargo tanks, and tank cars. Also, EPCRA does not require retailers to report fertilizer held for sale to the ultimate customer. However, an August 2013 chemical advisory on ammonium nitrate issued jointly by EPA, OSHA, and ATF clarified that EPCRA requires fertilizer distributors to report ammonium nitrate that is blended or mixed with other chemicals on site. 
In addition, some facilities may not report to DHS or their state because they have amounts of ammonium nitrate that are below the applicable reporting thresholds. Some facilities may not be included in either DHS’s or states’ data because they fail to submit their required reports, but the magnitude of underreporting is not known. DHS officials acknowledged that some facilities fail to file the required forms. The facility in West, Texas had not filed a CFATS report to DHS but, in 2012, this facility filed an EPCRA form with the state, reporting that it had 270 tons of ammonium nitrate. According to DHS officials, the agency does not know with certainty whether the West, Texas facility should have reported its ammonium nitrate to DHS because the agency did not visit the facility after the explosion and it does not know the manner in which the facility held its ammonium nitrate prior to the explosion. Following the explosion at the facility in West, Texas, DHS obtained data from the state of Texas and compared the state data to the facilities that reported to DHS. As a result of this data matching effort, DHS sent out 106 letters to other potentially noncompliant facilities in Texas. According to DHS, many of the Texas facilities that received the letter said they do not actually possess ammonium nitrate or do not meet the criteria to require reporting under CFATS. DHS has used EPA’s Risk Management Program (RMP) database to try to identify such facilities holding other chemicals, but it cannot use the RMP database to identify all facilities with ammonium nitrate because ammonium nitrate is not covered by EPA’s RMP regulations. In addition, DHS officials told us the agency is in the process of comparing its list of facilities that reported to DHS under the CFATS program to ATF’s list of facilities that have federal explosives licenses and permits to identify potentially noncompliant facilities, but this effort had not been completed at the time of our review. 
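The kind of cross-source matching described above (comparing one agency's facility list against another's to flag potential non-filers) amounts to normalizing facility records and comparing them as sets. The sketch below is illustrative only; the facility names, addresses, and record layout are invented and do not reflect actual CFATS or EPCRA data:

```python
# Illustrative sketch of cross-matching facility lists from two reporting
# programs. All records below are invented for illustration.

def normalize(name: str, address: str) -> tuple:
    """Normalize a facility record so minor formatting differences
    (case, punctuation, extra spaces) do not prevent a match."""
    clean = lambda s: " ".join(s.lower().replace(".", "").replace(",", "").split())
    return (clean(name), clean(address))

# Hypothetical CFATS filers and hypothetical state EPCRA filers.
cfats = {normalize(n, a) for n, a in [
    ("Acme Fertilizer Co.", "100 Main St, Springfield"),
    ("Plains Ag Supply", "42 Ranch Rd, Lubbock"),
]}
epcra = {normalize(n, a) for n, a in [
    ("ACME FERTILIZER CO", "100 Main St., Springfield"),
    ("Delta Mining Services", "7 Quarry Ln, Birmingham"),
]}

both = cfats & epcra        # facilities reported under both programs
epcra_only = epcra - cfats  # state filers absent from CFATS data (possible non-filers)
cfats_only = cfats - epcra  # CFATS filers absent from state data

print(len(both), len(epcra_only), len(cfats_only))  # prints: 1 1 1
```

As the report notes, such matching is imperfect in practice: facilities may file under different names or addresses with different agencies, so a set difference identifies candidates for follow-up (such as DHS's 106 letters) rather than confirmed non-filers.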
OSHA has limited access to data collected by other agencies to use in identifying facilities with ammonium nitrate. DHS does not currently share its CFATS data with OSHA, although DHS officials told us they were not aware of anything prohibiting DHS from doing so. While EPA shares data from its RMP with OSHA on a quarterly basis, the data do not include information on ammonium nitrate because ammonium nitrate is not covered by EPA’s RMP regulations. As previously discussed, under section 312 of EPCRA, facilities are required to annually report information to state and local authorities on the types and quantities of certain hazardous chemicals present at their facilities, which may include ammonium nitrate. Facilities that possess reportable quantities of ammonium nitrate submit this information electronically or on paper forms, and the state and local entities maintain copies of these forms. However, according to agency officials, the EPCRA data are not shared directly with federal agencies, including OSHA, EPA, or DHS (see fig. 4). EPA officials, however, noted that EPCRA is primarily intended to provide information to state and local officials, not to other federal agencies. Any person may submit written requests to the designated state or local authority for information on individual facilities that may have ammonium nitrate, but lists of all facilities in a state that have submitted these data, including those that reported having ammonium nitrate, are not publicly available. In some states we contacted, officials indicated that a requester could obtain information on the chemicals held at a facility only by requesting data on that specific facility. OSHA officials cited the lack of access to data on facilities with ammonium nitrate as a reason they would have difficulty designing an inspection program to target such facilities. 
The University of Texas at Dallas has a database (called E-Plan) that contains EPCRA data from over half of the states for the 2012 reporting year, but federal agencies have made limited use of it. University staff originally developed the E-Plan database in 2000 with funding from EPA to facilitate EPCRA reporting and provide first responders rapid access to information on chemical facilities in emergency situations. In many local areas, first responders and emergency services personnel can use the E-Plan data when they prepare for and respond to emergencies such as fires. According to E-Plan administrators, OSHA staff helped develop the database, but currently OSHA does not use E-Plan. EPA staff told us that some EPA regional offices have used the E-Plan database to assess compliance with the agency’s RMP reporting requirements. DHS officials told us the agency does not use E-Plan data to assess compliance with CFATS requirements. DHS officials also explained that, while the database could contain useful information, it is incomplete. Some states do not submit data to E-Plan at all, and other states’ data are incomplete. In addition, participation in E-Plan is voluntary and, even among those states that participate, some states do not choose to allow their E-Plan data to be shared with federal agencies. The Chemical Facility Safety and Security Working Group established by the August 2013 Executive Order has begun its efforts to develop proposals for improving information sharing, but this work has not been completed. The working group has held listening sessions throughout the country seeking input from interested parties on options for making improvements in chemical safety and security. It also has launched a pilot program in New York and New Jersey aimed at improving access to data on chemical facilities for federal, state, local, and tribal governments. 
In addition, the working group is evaluating how federal agencies can work with states to enhance the states’ roles as information sharing organizations, including options for sharing RMP, CFATS, and EPCRA data. Finally, it is exploring ways for federal and state agencies to share information and exchange data to, among other things, identify chemical facilities that are not in compliance with safety and security requirements. For example, DHS and EPA are comparing their CFATS and RMP data to determine if the CFATS data include facilities that should also have reported under the RMP. As a result, EPA has begun sending notification letters to facilities requesting information to help determine if the facility is subject to RMP requirements. Because the RMP regulations do not currently cover ammonium nitrate, however, this strategy would not be useful for identifying facilities that have ammonium nitrate. The federal working group is also sharing information to, among other things, identify whether additional facilities have failed to report under CFATS and is exploring whether EPA software offered to states to facilitate EPCRA reporting could also provide a vehicle to enhance access to the reports while meeting security objectives. OSHA has regulations for the storage of ammonium nitrate, but the agency has not focused its enforcement resources on the use of ammonium nitrate by the fertilizer industry, which is a primary user. EPA, on the other hand, has regulations requiring risk management planning by facilities that have certain hazardous chemicals, but these regulations do not apply to ammonium nitrate. OSHA’s Explosives and Blasting Agents regulations—issued in 1971— include provisions for the storage of both explosives grade and fertilizer grade ammonium nitrate in quantities of 1,000 pounds or more. OSHA based these regulations on two 1970 consensus standards developed by the National Fire Protection Association (NFPA). 
Few significant changes have been made to these regulations since they were issued, although the National Fire Protection Association periodically reviews and updates its standards. OSHA’s regulations include requirements that could reduce the fire and explosion hazards associated with ammonium nitrate, such as required fire protection measures, limits on stack size, and requirements related to separating ammonium nitrate from combustible and other contaminating materials. However, the regulations do not categorically prohibit employers from storing ammonium nitrate in wooden bins and buildings. In addition, for facilities that were in existence when the regulations were issued in 1971, OSHA’s regulations allow the use of storage buildings not in strict conformity with the regulations if such use does not constitute a hazard to life. Some of the provisions of OSHA’s ammonium nitrate storage regulations are described in table 3. Recently, OSHA, EPA, and ATF jointly issued a chemical advisory that recommends that facilities store ammonium nitrate in non-combustible buildings. Similarly, following the explosion in West, Texas, the National Fire Protection Association is considering changes to its ammonium nitrate storage provisions, which are part of its hazardous materials consensus standard, including restricting the use of wood to store ammonium nitrate. In addition to storage requirements, OSHA’s Hazard Communication regulations require that employers whose workers are exposed to hazardous chemicals, including ammonium nitrate, inform their workers of the dangers and train them to handle the materials appropriately. Employers are required to use labels, training, and safety data sheets to inform workers of chemical hazards in the workplace. 
Safety data sheets are written documents with details on the hazards associated with each chemical, measures workers can take to protect themselves, actions workers should take in case of an emergency, and safety precautions for handling and storing the chemical. Until the explosion in West, Texas, OSHA had not reached out to the fertilizer industry to inform its members of OSHA’s requirements for the storage of ammonium nitrate fertilizer. An OSHA official told us the agency has not traditionally informed the fertilizer industry about these regulations. However, another OSHA official said agency officials met with industry representatives after the explosion at the facility in West, Texas and, based on that meeting, concluded that the fertilizer industry is “well aware” of the agency’s storage regulations. OECD’s Guiding Principles for Chemical Accident Prevention, Preparedness and Response recommend that public authorities provide clear, easy-to-understand guidance to facilities on how regulatory objectives and requirements can be met. OSHA recently published information about how the agency’s Explosives and Blasting Agents regulations apply to ammonium nitrate fertilizer. The agency provides employers with training, technical assistance, and information through its website on a variety of safety and health topics. Recently, OSHA updated its website to refer to its storage regulations for ammonium nitrate fertilizer. The August 2013 chemical advisory contains information on OSHA’s ammonium nitrate storage regulations, stating that OSHA’s Explosives and Blasting Agents regulations contain requirements for the storage of all grades of ammonium nitrate, including fertilizer grade ammonium nitrate. In addition, in February 2014, OSHA announced that the agency is working with the fertilizer industry to remind employers of the importance of safely storing and handling ammonium nitrate. 
OSHA published a letter on its website that provides employers with legal requirements and best practice recommendations for safely storing and handling ammonium nitrate. In the letter, OSHA states that the agency will enforce the requirements of 29 C.F.R. § 1910.109(i) for storage of ammonium nitrate, including at facilities in non-explosives industries. According to the announcement, fertilizer industry associations will share the letter with facilities across the country. Fertilizer industry representatives we interviewed said that, prior to the explosion in West, Texas, they did not know that OSHA’s ammonium nitrate storage regulations applied to the fertilizer industry, and they suggested that OSHA reach out to the fertilizer industry to help prevent another incident. Industry representatives explained that their understanding was based on a proposed rule published by OSHA in the Federal Register on April 13, 2007, which proposed revisions to the Explosives and Blasting Agents regulation. In that notice, OSHA proposed a change to the ammonium nitrate storage requirements “to clarify that OSHA intends the requirements to apply to ammonium nitrate that will be used in the manufacture of explosives.” Although this proposed rule was never finalized, the industry representatives told us they relied on this statement to mean OSHA did not intend the storage requirements to apply to ammonium nitrate fertilizer. In addition, we reviewed the safety data sheets developed by four U.S. producers of solid ammonium nitrate fertilizer and found that only one company’s sheet listed OSHA’s Explosives and Blasting Agents regulations as applicable to the storage and handling of ammonium nitrate fertilizer. 
An industry representative who assists agricultural retailers with regulatory compliance said he reviewed the regulatory information sections in his clients’ safety data sheets for ammonium nitrate fertilizer and none of them referred to OSHA’s Explosives and Blasting Agents regulations. A representative from one national fertilizer industry association said it would be helpful if OSHA took additional steps to explain its interpretation of the applicable requirements and reach out to the fertilizer industry so that affected companies are better informed. A representative from another national agricultural industry group suggested that OSHA develop and disseminate a compliance assistance tool or checklist to ensure that facilities are aware of and in compliance with the applicable regulations. The fertilizer industry is developing a voluntary program called Responsible Ag to promote compliance with federal regulations among fertilizer facilities. Officials from the Fertilizer Institute and the Agricultural Retailers Association told us they plan to consolidate federal regulatory requirements for fertilizer retail facilities into one comprehensive checklist and provide third party audits to retailers based on a checklist they have developed. In addition, officials with the Asmark Institute, a nonprofit resource center for agricultural retailers in the United States, said they developed their own compliance assessment tool for agricultural retailers. The Fertilizer Institute and the Agricultural Retailers Association selected the Asmark Institute to develop a database that will include information on audit reports and scores from the third party audits. This initiative will be modeled after a voluntary audit program in Minnesota for agricultural retailers to help them improve compliance with federal and state regulations. According to OSHA officials, OSHA has not been involved in the development of this industry initiative. 
Although OSHA has a national enforcement program that targets certain chemical facilities for inspection, this program does not systematically cover facilities with ammonium nitrate. OECD chemical safety guidance suggests public authorities periodically inspect the safety performance of hazardous facilities. OSHA conducts inspections of worksites, as authorized under the OSH Act. As part of its enforcement efforts, OSHA randomly selects facilities for inspection as part of a national emphasis program for chemical facilities it initiated in 2011. However, these inspections are for facilities and chemicals covered under its Process Safety Management (PSM) regulations, which do not include ammonium nitrate. According to OSHA officials, facilities that blend and store ammonium nitrate fertilizer fall outside the scope of this national emphasis program. When we asked whether OSHA might expand its national emphasis program to focus on ammonium nitrate fertilizer facilities, officials said that the agency is not planning on targeting these facilities, in part because OSHA has no means of identifying them. In addition, OSHA is not likely to target facilities with ammonium nitrate for inspection because of its limited resources, and because these facilities often do not meet OSHA’s current inspection priorities. OSHA conducts inspections with its own personnel and the number of inspections OSHA and the states can perform each year is limited by the size of their inspection workforce. According to OSHA officials, OSHA and the states have about 2,200 inspectors who inspected about 1 percent of the 8 million covered employers in fiscal year 2012. Among OSHA’s highest priorities for inspecting worksites are responding to major accidents and employee complaints. In fiscal year 2012, OSHA reported that 44 percent of the agency’s inspections were unplanned inspections, which include inspections initiated in response to an accident or complaint. 
OSHA also targets for planned inspections certain industries that have high rates of workplace injury and illness. For example, OSHA reported that 55 percent of OSHA’s planned inspections in fiscal year 2012 were inspections of worksites in the construction industry. OSHA has rarely issued citations for violations of its ammonium nitrate storage regulations at fertilizer facilities. OSHA officials told us a citation for a violation of the agency’s ammonium nitrate storage regulations was issued as the result of an inspection of a fertilizer facility only once before the explosion in West, Texas. In that case, OSHA inspected a Florida-based fertilizer manufacturer in 1997 in response to a complaint, and cited the company for 30 violations, one of which was a violation of its ammonium nitrate storage requirements. In addition, according to OSHA officials, within the last 5 years, none of the 21 states that operate their own safety and health programs have cited any employers for improper storage or handling of ammonium nitrate. Under a provision regularly included in the annual appropriations act, OSHA is prohibited from conducting planned safety inspections of small employers—those with 10 or fewer employees—in certain low hazard industries, as determined by their injury and illness rates. Although the number of facilities exempted from OSHA inspections under this provision is unclear, we found that, of the facilities that reported having ammonium nitrate to DHS as of August 2013, 60 facilities—about 4 percent of the 1,345 facilities that reported to DHS—reported having 10 or fewer employees and had an industry code with a lower-than-average workplace injury and illness rate (see table 4). As a result, according to OSHA officials, this provision could have hindered the agency’s enforcement of its ammonium nitrate storage regulations at these facilities. 
OSHA’s fiscal year 2015 budget request asks Congress to consider amending OSHA’s appropriation language to allow the agency to perform targeted inspections of small establishments that have the potential for catastrophic incidents, such as those with processes covered by OSHA’s PSM or EPA’s RMP regulations. In the budget request, OSHA states that the current appropriations language limits the agency’s ability to conduct inspections, and neither the number of workers in a company nor low injury and illness rates are predictive of the potential for catastrophic accidents that can damage whole communities. OSHA’s PSM regulations for chemical safety do not cover ammonium nitrate. In response to a requirement in the Clean Air Act Amendments of 1990, OSHA issued its PSM regulations in 1992 to help prevent accidents involving highly hazardous chemicals, including toxic, flammable, highly reactive, and explosive substances. These regulations apply to processes involving listed chemicals in amounts at or above threshold quantities. Employers subject to the PSM regulations are required to take specified steps, which include evaluating the hazards associated with the process, as well as developing and implementing operating procedures, employee training, emergency action plans, and compliance audits at least every 3 years, among other requirements. Despite the hazards of ammonium nitrate, this chemical is not listed as one of the chemicals subject to these regulations. OSHA officials told us they did not know why ammonium nitrate was not included when the regulation was first issued. According to the August 2013 chemical advisory, although ammonium nitrate is not covered by the PSM regulations, the production or use of ammonium nitrate may involve PSM-listed chemicals, and the manufacture of explosives, which may involve ammonium nitrate, is covered by the regulations.
In the late 1990s, OSHA staff drafted a proposal for expanding PSM regulations to cover ammonium nitrate and other reactive chemicals, but it was not reviewed by agency policy officials and was never published in the Federal Register for public comment. In addition, retail facilities, which may include facilities that store and blend fertilizer for direct sale to end users, are exempt from OSHA’s PSM regulations. In the preamble to the final rule for the PSM regulations, OSHA stated that retailers are not likely to store large quantities of hazardous chemicals, and that a large chemical release would be unlikely. While the facility in West, Texas, stored large quantities of anhydrous ammonia, a chemical covered by the PSM regulations, OSHA officials told us that the PSM regulations would not apply to the facility because it was a retail outlet. In addition, other chemical safety regulations issued by EPA do not apply to facilities with ammonium nitrate. EPA’s RMP regulations, issued in 1996 in response to a provision of the Clean Air Act Amendments of 1990, require covered chemical facilities to develop and implement a risk management program, but ammonium nitrate is not included on the list of chemicals that would trigger the requirements. EPA’s RMP regulations require facilities that handle more than threshold amounts of certain chemicals to implement a risk management program to guard against the release of chemicals into the air and surrounding environment. Covered facilities must develop their own risk management plans, and some facilities must also develop an emergency response program and conduct compliance audits, among other requirements. Covered facilities must also submit their risk management plans to EPA, including data on the regulated substances handled, and prepare a plan for a worst-case chemical release scenario.
Although EPA initially included high explosives in its list of regulated substances, which would include explosives grade ammonium nitrate, these explosives were subsequently removed from the list as a result of a legal settlement. EPA officials also told us that fertilizer grade ammonium nitrate was not considered for its list for RMP because the agency had determined that it did not meet the criteria EPA established to implement the statute. Specifically, EPA officials told us that ammonium nitrate could have been included in the RMP regulations, but ammonium nitrate was not included because it was not considered a toxic or flammable chemical, which were among the criteria EPA used when the agency first developed the regulations. Accordingly, ammonium nitrate is not a covered chemical and EPA inspectors do not review facilities’ risk management plans for this chemical during their RMP inspections. In 2006, EPA conducted an on-site inspection of the West, Texas, facility, but the inspection focused on anhydrous ammonia, not ammonium nitrate. In response to the August 2013 Executive Order on Improving Chemical Facility Safety and Security, OSHA and EPA, as part of the federal working group, have invited public comment on a wide range of policy options for overseeing the storage and handling of hazardous chemicals in the United States. Because they are still evaluating these options, the agencies have not issued any notices of proposed rulemaking. As directed by the Executive Order, in December 2013, OSHA issued a Request for Information on potential revisions to its PSM and related regulations, including its ammonium nitrate storage regulations. OSHA’s Request for Information also seeks public input on changing the agency’s enforcement policy concerning the retailer exemption in the PSM regulations.
In the Request for Information, OSHA states that “The West Fertilizer facility is not currently covered by PSM, however it is a stark example of how potential modernization of the PSM standard may include such facilities and prevent future catastrophe.” In addition, as chair of one of the workgroups established to implement the Executive Order, OSHA solicited public input in January 2014 on federal policy options for improved chemical safety and security, including whether to expand OSHA’s PSM regulations and EPA’s RMP regulations to cover ammonium nitrate, among other options. This solicitation also sought public input on whether federal agencies should examine the use of third party audits to promote safe storage and handling of ammonium nitrate. The solicitation defined third party audits as inspections conducted by independent auditors, retained by a chemical facility, who make process safety and regulatory compliance recommendations. In an ongoing pilot project in selected states implemented in response to the Executive Order, federal agencies report improved coordination of inspections, such as sharing inspection schedules, cross-training inspectors, and inter-agency referrals of possible regulatory non-compliance. According to foreign officials and government documents, Canada and the three EU countries we contacted—France, Germany, and the United Kingdom—require facilities with specified quantities of ammonium nitrate, including fertilizer grade ammonium nitrate, to assess its risk and develop plans or policies to control the risks and mitigate the consequences of accidents. Like the United States, these countries are members of the OECD, which has published best practices for managing the risks of chemical accidents. The OECD publication includes guidance on preventing and mitigating the consequences of chemical accidents, preparedness planning, and land use planning, among other things.
For example, OECD’s guidance recommends that regulatory authorities ensure that facilities with hazardous substances assess the range of possible accidents and require hazardous facilities to submit reports describing the hazards and the steps taken to prevent accidents. With respect to assessing the risks of ammonium nitrate, according to Canadian officials and Canadian government documents, ammonium nitrate is regulated under the country’s Environmental Emergency Regulations, which include risk management provisions. According to guidance published by Environment Canada, a federal-level regulatory agency, facilities that store 22 tons or more of ammonium nitrate must develop and implement an environmental emergency plan. In developing an emergency plan, facilities are directed to analyze the risks posed during the storage and handling processes for certain chemicals and adopt practices to reduce the risks, taking into consideration the impact a chemical accident would have on the surrounding community. According to information provided by EU officials, facilities in the 28 member countries of the EU with specific quantities of ammonium nitrate fertilizer are subject to the Seveso Directive, the EU legislation for facilities that use or store large quantities of certain toxic, explosive, and flammable substances, among other types of chemicals. EU officials told us that, at a minimum, EU member countries must comply with the Seveso Directive, although they have the option to adopt more stringent requirements. The legislation was adopted after a chemical accident in Seveso, Italy, in 1976 that exposed thousands of people to the toxic chemical known as dioxin.
Under the Seveso Directive, last updated in 2012, member countries are to require facilities with large amounts of ammonium nitrate fertilizer to notify the appropriate authority in their respective country, adopt a major accident prevention policy, and in some cases, develop a detailed safety report (see table 5). Some countries, such as France and the United Kingdom, have other requirements for notifying authorities about the types and quantities of chemicals at facilities, including certain types of ammonium nitrate. In the United Kingdom, officials told us that facilities with 28 tons or more of certain types of ammonium nitrate must notify the Health and Safety Executive or local authority and the fire authorities. French officials said that facilities with more than 276 tons of ammonium nitrate fertilizer must notify local authorities about their holdings. The selected countries we reviewed generally reported having more centralized land use policies that specify where facilities with large quantities of ammonium nitrate should be located. For example, EU officials explained that the Seveso Directive requires member countries to develop and implement land use policies. Through controls on the siting of new Seveso facilities and new developments in the vicinity of such facilities, such as transportation routes and residential areas, they told us, member countries’ policies aim to limit the consequences of chemical accidents for human health and the environment. In the United Kingdom, officials told us that facilities intending to store more than 1,102 tons of relevant ammonium nitrate materials must first receive permission from their local planning authority to do so. They explained that these local planning authorities consider the hazards and risks to people in surrounding areas and consult with the Health and Safety Executive prior to granting permission to such facilities.
Three of the countries we reviewed—France, Germany, and the United Kingdom—restrict the use of wood for storage purposes in certain instances, according to information and documents provided by relevant officials. EU officials told us that the Seveso Directive does not prescribe how chemicals, including ammonium nitrate, should be stored. EU countries have developed their own technical standards or rely on industry standards for storing and handling ammonium nitrate. For example, according to information provided by French officials, after several accidents involving ammonium nitrate fertilizer, the government in France launched a working group to update existing ammonium nitrate regulations, including storage and handling requirements. They described the most recent regulations in France, issued in 2010, which include updated fire resistance provisions for new and existing facilities banning or restricting the use of materials such as wood and asphalt flooring for storing ammonium nitrate. Specifically, according to documents provided by French officials, the regulations direct facilities not to store ammonium nitrate fertilizer in structures with wood walls or sides. According to an official in Germany, strict storage requirements for certain types of ammonium nitrate fertilizer have led many farmers to voluntarily use an alternative type of fertilizer, known as calcium ammonium nitrate. For example, she explained that, in Germany, certain kinds of ammonium nitrate must be divided into quantities of 28 tons prior to storage, and quantities are separated by concrete walls. In addition, certain ammonium nitrate and ammonium nitrate-based preparations must be separated from combustible materials, for example by brick or concrete walls.
Guidance in the United Kingdom also recommends that buildings for storing ammonium nitrate should be constructed of material that does not burn, such as concrete, bricks, or steel, as does the recent advisory in the United States published by OSHA, EPA, and ATF.
Guidance on Safe Practices. In the countries we reviewed, government entities developed materials to help facilities with ammonium nitrate fertilizer comply with safety regulations. For example, in the United Kingdom, the government published guidance on storing and handling ammonium nitrate that illustrates proper storage practices and is written in plain language. The United Kingdom also developed a checklist that facilities can use as a compliance tool to determine whether they are meeting safe storage requirements. In Canada, Environment Canada issued a guidance document in 2011 so that facilities covered by its Environmental Emergency Regulations, including facilities with certain types and amounts of ammonium nitrate, can better understand and comply with regulatory requirements. The EU compiles information about chemical accidents and disseminates publications that include guidance on how facilities can prevent future incidents. Specifically, the EU has a system for reporting major accidents, including accidents involving ammonium nitrate, and tracks the information in a central database. For example, as of January 2014, this database contained information on several incidents involving ammonium nitrate dating back to 1986. EU researchers use this information to develop semi-annual publications in order to facilitate the exchange of lessons learned from accidents for both industry and government regulators. Each publication focuses on a particular theme such as a specific substance, industry, or practice, and summarizes the causes of related accidents and lessons learned to help prevent future accidents.
EU officials told us that the next publication will be issued in the summer of 2014 and will focus on the hazards of ammonium nitrate in part as a result of the explosion that occurred in West, Texas.
Routine Inspections. In the EU, member countries are required to inspect facilities with large quantities of chemicals covered by the Seveso Directive, which includes facilities with ammonium nitrate. According to EU officials and documents, the EU’s Seveso Directive requires covered facilities to be inspected either annually or once every 3 years, depending on the amount of hazardous chemicals a facility has—the greater the amount, the more frequent the inspection. EU officials also explained that member countries are required to report information to the European Commission every 3 years on how they are implementing the Seveso Directive requirements, including the number of facilities that have been inspected in their country. According to a report published by the European Commission in June 2013, member countries reported in December 2011 that they had 10,314 covered facilities. According to the report, of those facilities to be inspected annually, 66 percent were inspected, on average, in 2011, and of those facilities to be inspected once every 3 years, 43 percent were inspected, on average, in 2011.
Voluntary Initiatives and Third Party Audits. In the countries we reviewed, the fertilizer industry has actively promoted voluntary compliance with national safety requirements among facilities with ammonium nitrate fertilizer. For example, Fertilizers Europe, which represents the major fertilizer manufacturers in Europe, published guidance in 2007 for the storage and handling of ammonium nitrate-based fertilizers. This guidance recommends that buildings used to store ammonium nitrate-based fertilizers be constructed of non-readily combustible materials such as brick, concrete, or steel and that wood or other combustible materials be avoided, among other things.
Fertilizers Europe has also developed a compliance program that is a key requirement for membership, which consists of independent third party audits. As part of the program, it developed a self-assessment tool for fertilizer manufacturers to use to identify gaps and possible improvements. In the United Kingdom, the government and the fertilizer industry worked together in 2006 to develop a voluntary compliance program for facilities that manufacture and store fertilizers, including ammonium nitrate-based fertilizers, among other activities. According to a United Kingdom official, the government provided some of the initial funding for this initiative, and the voluntary compliance program is now self-financed. Although the program was initially focused on fertilizer security, it has evolved over the years to also address fertilizer safety in the United Kingdom. As part of the voluntary compliance program, participating facilities carry out risk assessments. These facilities are audited annually by an independent audit team composed of specialists to determine whether they comply with industry and government standards, including standards for safely storing and handling ammonium nitrate fertilizer. Officials we interviewed in the United Kingdom told us that the government encourages and supports this industry initiative and that about 90 percent of facilities with ammonium nitrate in the United Kingdom, including those that have small quantities, are members of the voluntary program. A United Kingdom official said, in his opinion, one would expect facilities participating in this industry initiative to be more likely to be found in compliance by the government when it conducts its own inspections. Furthermore, government officials, industry representatives, and program administrators meet twice a year to discuss how the program is being implemented and monitored.
Large quantities of ammonium nitrate are present in the United States, although the precise number of facilities with ammonium nitrate is not known. While incidents involving ammonium nitrate are rare, this chemical can react in ways that harm significant numbers of people and devastate communities. Facilities may be required, in certain circumstances, to report their chemical holdings to federal, state, and local authorities for security and emergency planning purposes. However, given the various reporting requirements and numerous reporting exemptions, some facilities may be uncertain about what to report to whom. Through the new Executive Order, federal agencies, including DHS, EPA, and OSHA, have the opportunity to work together on data sharing initiatives to help identify facilities with ammonium nitrate fertilizer. Such data sharing could help federal agencies identify facilities that are not complying with their regulations and enable OSHA to target high risk facilities with ammonium nitrate for inspection. Without improved coordination among the various federal and state agencies that collect data on facilities that store potentially hazardous chemicals, identifying facilities with ammonium nitrate for purposes of increasing awareness of the hazards and improving regulatory compliance will remain a challenge. Although OSHA has requirements for storing ammonium nitrate fertilizer in its Explosives and Blasting Agents regulations that could reduce the likelihood of an explosion, OSHA has done little to ensure that the fertilizer industry, which is one of the primary users of ammonium nitrate, understands how to comply with its existing regulations. The August 2013 chemical advisory and OSHA’s February 2014 letter to facilities help clarify how OSHA’s Explosives and Blasting Agents regulations apply to fertilizer facilities.
However, without additional action by OSHA to promote awareness of how to comply with its regulations, fertilizer facilities may not know whether their practices are in compliance with OSHA’s existing ammonium nitrate storage regulations or if changes need to be made. Moreover, unless OSHA takes steps to leverage additional resources to support its enforcement efforts, whether through enhanced targeting, coordination with other agencies or outside parties, or encouragement of voluntary compliance with ammonium nitrate regulations through various industry initiatives, it will not know the extent to which dangerous conditions at some facilities may continue to exist. While much can be achieved under current regulations, OSHA’s and EPA’s regulations contain gaps with respect to ammonium nitrate that may allow unsafe facilities to operate and poor planning to persist. OSHA has not significantly changed its ammonium nitrate storage regulations since they were issued in 1971, which means that fertilizer facilities may be adhering to outdated practices. For example, other countries we reviewed have revisited and updated their ammonium nitrate regulations, and the National Fire Protection Association is considering making changes to its ammonium nitrate storage standards as a result of the explosion in West, Texas. In addition, as a result of incidents involving ammonium nitrate abroad, countries in the European Union and Canada require facilities to assess the risks of working with ammonium nitrate fertilizer, and the European Union requires member countries to routinely inspect facilities that have very large quantities of it. These approaches offer examples of how the risks of ammonium nitrate can be managed.
Although increased regulation may be more burdensome to industry, without some means of ensuring that high risk facilities plan for and manage the risks associated with ammonium nitrate, such facilities may not be prompted to adequately address the risks the chemical creates for workers and neighboring communities.
1. To improve federal oversight of facilities with ammonium nitrate, we recommend that the Secretary of Labor, the Administrator of EPA, and the Secretary of Homeland Security, as part of their efforts as members of the Chemical Facility Safety and Security Working Group established by the Executive Order issued in August 2013, develop and implement methods of improving data sharing among federal agencies and with states.
2. We also recommend that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health to take the following three actions:
Extend OSHA’s outreach to the fertilizer industry. For example, OSHA could work with the fertilizer industry to develop and disseminate informational materials related to storage of ammonium nitrate.
Take steps to identify high risk facilities working with ammonium nitrate and develop options to target them for inspection.
Consider updating regulations for the storage of ammonium nitrate, taking into consideration, as appropriate, other related standards and current practices.
3. To strengthen federal oversight of facilities with ammonium nitrate, we recommend that the Secretary of Labor and the Administrator of EPA direct OSHA and EPA, respectively, to consider revising their related regulations to cover ammonium nitrate and jointly develop a plan to require high risk facilities with ammonium nitrate to assess the risks and implement safeguards to prevent accidents involving this chemical.
We provided a draft of this report to the Administrator of EPA, the Secretary of Homeland Security, and the Secretary of Labor for review and comment.
We received written comments from EPA, DHS, and OSHA, which are reproduced in appendices I, II, and III. EPA, DHS, and OSHA agreed with our recommendation that the agencies improve data sharing and described their current efforts to address this issue as part of their implementation of the Executive Order on Improving Chemical Facility Safety and Security. The agencies stated that a status report by the Executive Order Working Group, which will be submitted to the President by the end of May 2014, will include proposals for enhancing data sharing among federal agencies and with states. OSHA agreed with our recommendation that the agency conduct additional outreach to the fertilizer industry, stating that additional outreach efforts will be identified in the Executive Order status report and that these efforts should help the fertilizer industry understand OSHA’s safety requirements and industry best practices. OSHA also agreed with our recommendation that the agency target high risk facilities for inspection, stating that the agency is evaluating options for targeting high risk fertilizer facilities for inspection. OSHA and EPA agreed with our recommendation that the agencies consider revising their regulations to cover ammonium nitrate. OSHA is currently reviewing public comments submitted in response to a Request for Information on a proposed revision to the agency’s Process Safety Management and Prevention of Major Chemical Accidents regulations and a request for public input on issues associated with Section 6 of the Executive Order, which addresses Policy, Regulation, and Standards Modernization. EPA stated that the agency will be publishing a Request for Information seeking public input on its proposed revision to process safety and risk management issues relevant to its Risk Management Program regulations, including coverage of ammonium nitrate. In addition, EPA, DHS, and OSHA provided technical comments, which we have incorporated as appropriate.
We also provided portions of the draft report related to each of the four countries we reviewed to relevant officials from each country, and incorporated their technical comments, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of EPA, the Secretary of Homeland Security, the Secretary of Labor, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-7215 or moranr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. In addition to the contact named above, Betty Ward-Zukerman, Assistant Director; Catherine Roark, Analyst in Charge; Theodore Alexander, Nancy Cosentino, Joel Marus, and Meredith Moore made significant contributions to all phases of the work. Also contributing to this report were Hiwotte Amare, Jason Bair, James Bennett, Susan Bernstein, Stephen Caldwell, Sarah Cornetto, Charles Johnson, Jr., Kathy Leslie, Ashley McCall, Sheila McCoy, Jean McSween, John Mortin, Vincent Price, Stephen Sanford, Sushil Sharma, Linda Siegel, Maria Stattel, and Kathleen van Gelder.
In April 2013, about 30 tons of ammonium nitrate fertilizer detonated during a fire at a facility in West, Texas, killing at least 14 people and damaging nearby schools, homes, and a nursing home. This incident raised concerns about the risks posed by similar facilities across the country. OSHA and EPA play a central role in protecting workers and communities from chemical accidents, and DHS administers a chemical facility security program. GAO was asked to examine oversight of ammonium nitrate facilities in the United States and other countries. This report addresses (1) how many facilities have ammonium nitrate in the United States, (2) how OSHA and EPA regulate and oversee facilities that have ammonium nitrate, and (3) what approaches selected other countries have adopted for regulating and overseeing facilities with ammonium nitrate. GAO analyzed available federal data and data from selected states with high use of ammonium nitrate; reviewed federal laws and regulations; and interviewed government officials, chemical safety experts, and industry representatives in the United States and selected countries. Federal data provide insight into the number of facilities in the United States with ammonium nitrate but do not provide a complete picture because of reporting exemptions and other data limitations. The Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency (EPA) do not require facilities to report their ammonium nitrate holdings. The Department of Homeland Security (DHS) requires facilities with certain quantities of ammonium nitrate to report their holdings for security purposes. While the total number of facilities in the United States with ammonium nitrate is unknown, as of August 2013, at least 1,300 facilities in 47 states reported to DHS that they had reportable quantities of ammonium nitrate. 
Federal law also requires certain facilities to report their ammonium nitrate holdings to state and local authorities for emergency planning purposes, but these data are not routinely shared with federal agencies. According to EPA, states are not required to report these data to federal agencies, and each state determines how to share its data. As part of an Executive Order on Improving Chemical Facility Safety and Security issued in August 2013, federal agencies are exploring options for improving data sharing, but this work is not yet complete. OSHA and EPA provide limited oversight of facilities that have ammonium nitrate. OSHA's regulations include provisions for the storage of ammonium nitrate, but the agency has done little outreach to increase awareness of these regulations within the fertilizer industry, a primary user. In addition, the regulations have not been significantly revised since 1971 and allow storage of ammonium nitrate in wooden buildings, which could increase the risk of fire and explosion. Other OSHA and EPA chemical safety regulations—which require facilities to complete hazard assessments, use procedures to prevent and respond to accidents, and conduct routine compliance audits—do not apply to ammonium nitrate. Furthermore, although OSHA targets worksites in certain industries for inspection, its inspection programs do not target facilities with ammonium nitrate and, according to OSHA officials, information on these facilities is not available to them for use in targeting. International chemical safety guidance suggests authorities should provide facilities information on how regulatory requirements can be met and periodically inspect them. GAO reviewed approaches to overseeing facilities with ammonium nitrate in Canada, France, Germany, and the United Kingdom, selected in part based on recommendations from chemical safety experts.
According to foreign officials and government documents, these countries require facilities with specified quantities of ammonium nitrate to assess its risk and develop plans or policies to prevent chemical accidents. For example, Canadian officials said facilities with 22 tons or more of ammonium nitrate are required to complete a risk assessment and an emergency plan. Some countries' storage requirements also restrict the use of wood to store ammonium nitrate. For example, officials told GAO that France restricted the use of wood for storing ammonium nitrate fertilizer after several incidents involving ammonium nitrate fertilizer, and German officials told GAO that certain ammonium nitrate and ammonium nitrate-based preparations must be separated from combustible materials by brick or concrete walls. GAO is recommending that federal agencies improve data sharing, OSHA and EPA consider revising their related regulations to cover ammonium nitrate, and OSHA conduct outreach to the fertilizer industry and target high risk facilities for inspection. DHS, EPA, and OSHA agreed with GAO's recommendations and suggested technical changes, which GAO incorporated as appropriate.
In our June 2006 report, we found that DOD and VA had taken actions to facilitate the transition of medical and rehabilitative care for seriously injured servicemembers who were being transferred from MTFs to PRCs. For example, in April 2004, DOD and VA signed a memorandum of agreement that established referral procedures for transferring injured servicemembers from DOD to VA medical facilities. DOD and VA also established joint programs to facilitate the transfer to VA medical facilities, including a program that assigned VA social workers to selected MTFs to coordinate transfers. Despite these coordination efforts, we found that DOD and VA were having problems sharing the medical records VA needed to determine whether servicemembers’ medical conditions allowed participation in VA’s vigorous rehabilitation activities. DOD and VA reported that as of December 2005 two of the four PRCs had real-time access to the electronic medical records maintained at Walter Reed Army Medical Center and only one of the two also had access to the records at the National Naval Medical Center. In cases where medical records could not be accessed electronically, the MTF faxed copies of some medical information, such as the patient’s medical history and progress notes, to the PRC. Because this information did not always provide enough data for the PRC provider to determine if the servicemember was medically stable enough to be admitted to the PRC, VA developed a standardized list of the minimum types of health care information needed about each servicemember transferring to a PRC. Even with this information, PRC providers frequently needed additional information and had to ask for it specifically. 
For example, if the PRC provider notices that the servicemember is on a particular antibiotic therapy, the provider may request the results of the most recent blood and urine cultures to determine if the servicemember is medically stable enough to participate in strenuous rehabilitation activities. According to PRC officials, obtaining additional medical information in this way, rather than electronically, is very time-consuming and often requires multiple phone calls and faxes. VA officials told us that the transfer could be more efficient if PRC medical personnel had real-time access to the servicemembers’ complete DOD electronic medical records from the referring MTFs. However, problems existed even for the two PRCs that had been granted electronic access. During a visit to those PRCs in April 2006, we found that neither facility could access the records at Walter Reed Army Medical Center because of technical difficulties. As discussed in our January 2005 report, the importance of early intervention for returning individuals with disabilities to the workforce is well documented in vocational rehabilitation literature. In 1996, we reported that early intervention significantly facilitates the return to work but that challenges exist in providing services early. For example, determining the best time to approach recently injured servicemembers and gauge their personal receptivity to considering employment in the civilian sector is inherently difficult. The nature of the recovery process is highly individualized and requires professional judgment to determine the appropriate time to begin vocational rehabilitation. Our 2007 High-Risk Series: An Update designates federal disability programs as “high risk” because they lack emphasis on the potential for vocational rehabilitation to return people to work.
In our January 2005 report, we found that servicemembers whose disabilities will definitely or likely result in military separation may not be able to benefit from early intervention because DOD and VA could work at cross purposes. In particular, DOD was concerned about the timing of VA’s outreach to servicemembers whose discharge from military service was not yet certain, fearing that VA’s efforts might conflict with the military’s retention goals. When servicemembers are treated as outpatients at a VA or military hospital, DOD generally begins to assess whether the servicemember will be able to remain in the military. This process can take months. For its part, VA took steps to make seriously injured servicemembers a high priority for all VA assistance. Noting the importance of early intervention, VA instructed its regional offices in 2003 to assign a case manager to each seriously injured servicemember who applies for disability compensation. VA had detailed staff to MTFs to provide information on all veterans’ benefits, including vocational rehabilitation, and reminded staff that they can initiate evaluation and counseling, and, in some cases, authorize training before a servicemember is discharged. While VA tries to prepare servicemembers for a transition to civilian life, VA’s outreach process may overlap with DOD’s process for evaluating servicemembers for a possible return to duty. In our report, we concluded that instead of working at cross purposes to DOD goals, VA’s early intervention efforts could facilitate servicemembers’ return to the same or a different military occupation, or to a civilian occupation if the servicemembers were not able to remain in the military. In this regard, the prospect for early intervention with vocational rehabilitation presents both a challenge and an opportunity for DOD and VA to collaborate to provide better outcomes for seriously injured servicemembers.
In our May 2006 report, we described DOD’s efforts to identify and facilitate care for OEF/OIF servicemembers who may be at risk for PTSD. To identify such servicemembers, DOD uses a questionnaire, the DD 2796, to screen OEF/OIF servicemembers after their deployment outside of the United States has ended. The DD 2796 is used to assess servicemembers’ physical and mental health and includes four questions to identify those who may be at risk for developing PTSD. We reported that according to a clinical practice guideline jointly developed by DOD and VA, servicemembers who responded positively to at least three of the four PTSD screening questions may be at risk for developing PTSD. DOD health care providers review completed questionnaires, conduct face-to-face interviews with servicemembers, and use their clinical judgment in determining which servicemembers need referrals for further mental health evaluations. OEF/OIF servicemembers can obtain the mental health evaluations, as well as any necessary treatment for PTSD, while they are servicemembers—that is, on active duty—or when they transition to veteran status if they are discharged or released from active duty. Despite DOD’s efforts to identify OEF/OIF servicemembers who may need referrals for further mental health evaluations, we reported that DOD cannot provide reasonable assurance that OEF/OIF servicemembers who need the referrals receive them. Using data provided by DOD, we found that 22 percent, or 2,029, of the 9,145 OEF/OIF servicemembers in our review who may have been at risk for developing PTSD were referred by DOD health care providers for further mental health evaluations. 
Across the military service branches, DOD health care providers varied in the frequency with which they issued referrals to OEF/OIF servicemembers with three or more positive responses to the PTSD screening questions: the Army referred 23 percent, the Air Force about 23 percent, the Navy 18 percent, and the Marines about 15 percent. According to DOD officials, not all of the OEF/OIF servicemembers with three or four positive responses on the screening questionnaire need referrals. As directed by DOD’s guidance for using the DD 2796, DOD health care providers are to rely on their clinical judgment to decide which of these servicemembers need further mental health evaluations. However, at the time of our review DOD had not identified the factors its health care providers used to determine which OEF/OIF servicemembers needed referrals. Knowing these factors could explain the variation in referral rates and allow DOD to provide reasonable assurance that such judgments are being exercised appropriately. We recommended that DOD identify the factors that DOD health care providers used in issuing referrals for further mental health evaluations to explain provider variation in issuing referrals. DOD concurred with the recommendation. Although OEF/OIF servicemembers may obtain mental health evaluations or treatment for PTSD through VA when they transition to veteran status, VA may face a challenge in meeting the demand for PTSD services. In September 2004, we reported that VA had intensified its efforts to inform new veterans from the Iraq and Afghanistan conflicts about the health care services—including treatment for PTSD—VA offers to eligible veterans. We observed that these efforts, along with expanded availability of VA health care services for Reserve and National Guard members, could result in an increased percentage of veterans from Iraq and Afghanistan seeking PTSD services through VA.
However, at the time of our review officials at six of seven VA medical facilities we visited explained that while they were able to keep up with the current number of veterans seeking PTSD services, they might not be able to meet an increase in demand for these services. In addition, some of the officials expressed concern because facilities had been directed by VA to give veterans from the Iraq and Afghanistan conflicts priority appointments for health care services, including PTSD services. As a result, VA medical facility officials estimated that follow-up appointments for veterans receiving care for PTSD could be delayed. VA officials estimated the delays to be up to 90 days. As discussed in our April 2006 testimony, problems related to military pay have resulted in overpayments and debt for hundreds of sick and injured servicemembers. These pay problems resulted in significant frustration for the servicemembers and their families. We found that hundreds of battle-injured servicemembers were pursued for repayment of military debts through no fault of their own, including at least 74 servicemembers whose debts had been reported to credit bureaus and private collections agencies. In response to our audit, DOD officials said collection actions on these servicemembers’ debts had been suspended until a determination could be made as to whether these servicemembers’ debts were eligible for relief. Debt collection actions created additional hardships for servicemembers by preventing them from getting loans to buy houses or automobiles or pay off other debt, and by sending several servicemembers into financial crisis. Some battle-injured servicemembers forfeited their final separation pay to cover part of their military debt, and they left the service with no funds to cover immediate expenses while facing collection actions on their remaining debt.
We also found that sick and injured servicemembers sometimes went months without paychecks because debts caused by overpayments of combat pay and other errors were offset against their military pay. Furthermore, the longer it took DOD to stop the overpayments, the greater the amount of debt that accumulated for the servicemember and the greater the financial impact, since more money would eventually be withheld from the servicemember’s pay or sought through debt collection action after the servicemember had separated from the service. In our 2005 testimony about Army National Guard and Reserve servicemembers, we found that poorly defined requirements and processes for extending injured and ill reserve component servicemembers on active duty have caused servicemembers to be inappropriately dropped from active duty. For some, this has led to significant gaps in pay and health insurance, which has created financial hardships for these servicemembers and their families. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact Cynthia A. Bascetta at (202) 512-7101 or bascettac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Michael T. Blair, Jr., Assistant Director; Cynthia Forbes; Krister Friday; Roseanne Price; Cherie’ Starck; and Timothy Walker made key contributions to this statement. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. VA and DOD Health Care: Efforts to Provide Seamless Transition of Care for OEF and OIF Servicemembers and Veterans. GAO-06-794R. Washington, D.C.: June 30, 2006. Post-Traumatic Stress Disorder: DOD Needs to Identify the Factors Its Providers Use to Make Mental Health Evaluation Referrals for Servicemembers. GAO-06-397. Washington, D.C.: May 11, 2006. 
Military Pay: Military Debts Present Significant Hardships to Hundreds of Sick and Injured GWOT Soldiers. GAO-06-657T. Washington, D.C.: April 27, 2006. Military Disability System: Improved Oversight Needed to Ensure Consistent and Timely Outcomes for Reserve and Active Duty Service Members. GAO-06-362. Washington, D.C.: March 31, 2006. Military Pay: Gaps in Pay and Benefits Create Financial Hardships for Injured Army National Guard and Reserve Soldiers. GAO-05-322T. Washington, D.C.: February 17, 2005. Vocational Rehabilitation: More VA and DOD Collaboration Needed to Expedite Services for Seriously Injured Servicemembers. GAO-05-167. Washington, D.C.: January 14, 2005. VA and Defense Health Care: More Information Needed to Determine If VA Can Meet an Increase in Demand for Post-Traumatic Stress Disorder Services. GAO-04-1069. Washington, D.C.: September 20, 2004. SSA Disability: Return-to-Work Strategies from Other Systems May Improve Federal Programs. GAO/HEHS-96-133. Washington, D.C.: July 11, 1996. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As of March 1, 2007, over 24,000 servicemembers have been wounded in action since the onset of Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF), according to the Department of Defense (DOD). GAO work has shown that servicemembers injured in combat face an array of significant medical and financial challenges as they begin their recovery process in the health care systems of DOD and the Department of Veterans Affairs (VA). GAO was asked to discuss concerns regarding DOD and VA efforts to provide medical care and rehabilitative services for servicemembers who have been injured during OEF and OIF. This testimony addresses (1) the transition of care for seriously injured servicemembers who are transferred between DOD and VA medical facilities, (2) DOD's and VA's efforts to provide early intervention for rehabilitation for seriously injured servicemembers, (3) DOD's efforts to screen servicemembers at risk for post-traumatic stress disorder (PTSD) and whether VA can meet the demand for PTSD services, and (4) the impact of problems related to military pay on injured servicemembers and their families. This testimony is based on GAO work issued from 2004 through 2006 on the conditions facing OEF/OIF servicemembers at the time the audit work was completed. Despite coordinated efforts, DOD and VA have had problems sharing medical records for servicemembers transferred from DOD to VA medical facilities. GAO reported in 2006 that two VA facilities lacked real-time access to electronic medical records at DOD facilities. To obtain additional medical information, facilities exchanged information by means of a time-consuming process resulting in multiple faxes and phone calls. In 2005, GAO reported that VA and DOD collaboration is important for providing early intervention for rehabilitation. 
VA has taken steps to initiate early intervention efforts, which could facilitate servicemembers' return to duty or to a civilian occupation if the servicemembers were unable to remain in the military. However, according to DOD, VA's outreach process may overlap with DOD's process for evaluating servicemembers for a possible return to duty. DOD was also concerned that VA's efforts may conflict with the military's retention goals. In this regard, DOD and VA face both a challenge and an opportunity to collaborate to provide better outcomes for seriously injured servicemembers. DOD screens servicemembers for PTSD but, as GAO reported in 2006, it cannot ensure that further mental health evaluations occur. DOD health care providers review questionnaires, interview servicemembers, and use clinical judgment in determining the need for further mental health evaluations. However, GAO found that only 22 percent of the OEF/OIF servicemembers in GAO's review who may have been at risk for developing PTSD were referred by DOD health care providers for further evaluations. According to DOD officials, not all of the servicemembers at risk will need referrals. However, at the time of GAO's review DOD had not identified the factors its health care providers used to determine which OEF/OIF servicemembers needed referrals. Although OEF/OIF servicemembers may obtain mental health evaluations or treatment for PTSD through VA, VA may face a challenge in meeting the demand for PTSD services. VA officials estimated that follow-up appointments for veterans receiving care for PTSD may be delayed up to 90 days. GAO's 2006 testimony pointed out that problems related to military pay have resulted in debt and other hardships for hundreds of sick and injured servicemembers. Some servicemembers were pursued for repayment of military debts through no fault of their own.
As a result, servicemembers have been reported to credit bureaus and private collections agencies, been prevented from getting loans, gone months without paychecks, and sent into financial crisis. In a 2005 testimony GAO reported that poorly defined requirements and processes for extending the active duty of injured and ill reserve component servicemembers have caused them to be inappropriately dropped from active duty, leading to significant gaps in pay and health insurance for some servicemembers and their families.
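The DD 2796 screening threshold and referral figures discussed above can be checked with a short sketch. The helper function `at_risk` and the sample records are hypothetical illustrations; only the three-of-four positive-response threshold and the 2,029-of-9,145 referral count come from the GAO findings described in the text.

```python
# Sketch of the DD 2796 PTSD screening logic described above.
# The function name and sample data are hypothetical; the
# three-of-four threshold and the referral figures come from
# the GAO findings discussed in the text.

def at_risk(responses):
    """Per the DOD/VA clinical practice guideline, a servicemember
    with 3 or 4 positive responses to the four PTSD screening
    questions may be at risk for developing PTSD."""
    assert len(responses) == 4
    return sum(responses) >= 3

# Hypothetical screening records: one list of four yes/no answers each.
screened = [
    [1, 1, 1, 0],  # 3 positives -> flagged as possibly at risk
    [1, 0, 0, 0],  # 1 positive  -> not flagged
    [1, 1, 1, 1],  # 4 positives -> flagged
]
flagged = [r for r in screened if at_risk(r)]
print(len(flagged))  # 2 of the 3 hypothetical records are flagged

# Referral rate GAO reported: 2,029 of the 9,145 servicemembers
# flagged as possibly at risk received referrals, about 22 percent.
referral_rate = 2029 / 9145 * 100
print(round(referral_rate))  # about 22
```

Clinical judgment, not the threshold alone, determined which flagged servicemembers were referred, which is why the reported referral rate sits well below 100 percent.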
The export of domestically produced crude oil has generally been restricted since the 1970s. In particular, the Energy Policy and Conservation Act of 1975 (EPCA) led the Department of Commerce’s Bureau of Industry and Security (BIS) to promulgate regulations that require crude oil exporters to obtain a license. These regulations provide that BIS will issue licenses for the following crude oil exports: exports from Alaska’s Cook Inlet; exports to Canada for consumption or use therein; exports in connection with refining or exchange of Strategic Petroleum Reserve (SPR) crude oil; exports of certain California crude oil up to twenty-five thousand barrels per day; exports consistent with certain international energy supply agreements; exports consistent with findings made by the President under certain statutes; and exports of foreign origin crude oil that has not been commingled with crude oil of U.S. origin. Other than for these exceptions, BIS considers export license applications for exchanges involving crude oil on a case-by-case basis, and BIS can approve them if it determines that the proposed export is consistent with the national interest and purposes of EPCA. In addition to BIS’s export controls, other statutes control the export of domestically produced crude oil, depending on where it was produced and how it is transported. In these cases, BIS can approve exports only if the President makes the necessary findings under applicable laws. Some of the authorized exceptions, outlined above, are the result of such presidential findings. As we previously found, recent increases in U.S. crude oil production have lowered the cost of some domestic crude oils. For example, prices for West Texas Intermediate (WTI) crude oil—a domestic crude oil used as a benchmark for pricing—were historically about the same as prices for Brent, an international benchmark crude oil from the North Sea between Great Britain and the European continent. However, from 2011 through 2014, the price of WTI averaged $12 per barrel lower than Brent (see fig. 1).
In 2014, the difference between these benchmark crude oil prices narrowed as global oil prices declined, and WTI averaged $52 per barrel from January through May 2015, while Brent averaged $57. The development of U.S. crude oil production has created some challenges for crude oil transportation infrastructure because some production has been in areas with limited linkages to refining centers. According to the Energy Information Administration (EIA), these infrastructure constraints have contributed to discounted prices for some domestic crude oils. Much of the crude oil currently produced in the United States has characteristics that differ from historic domestic production. Crude oil is generally classified according to two parameters: density and sulfur content. Less dense crude oils are known as “light,” and denser crude oils are known as “heavy.” Crude oils with relatively low sulfur content are known as “sweet,” and crude oils with higher sulfur content are known as “sour.” As shown in figure 2, according to EIA, most domestic crude oil produced over the last 5 years has tended to be light oil. Specifically, according to EIA estimates, almost all of the 1.8 million barrels per day increase in production from 2011 to 2013 consisted of lighter sweet crude oils. Light crude oil differs from the crude oil that many U.S. refineries are designed to process. Refineries are configured to produce transportation fuels and other products (e.g., gasoline, diesel, jet fuel, and kerosene) from specific types of crude oil. Refineries use a distillation process that separates crude oil into different fractions, or interim products, based on their boiling points, which can then be further processed into final products. Many refineries in the United States are configured to refine heavier crude oils and have therefore been able to take advantage of historically lower prices of heavier crude oils.
For example, in 2013, the average American Petroleum Institute (API) gravity of crude oil used at domestic refineries was 30.8 degrees, while nearly all of the increase in production in recent years has been lighter crude oil with an API gravity of 35 degrees or above. According to EIA, additional production of light crude oil over the past several years has been absorbed into the market through several mechanisms, but the capacity of these mechanisms to absorb further increases in light crude oil production may be limited in the future for the following reasons: Reduced imports of similar grade crude oils: According to EIA, additional production of light oil in the past several years has primarily been absorbed by reducing imports of similar grade crude oils. Light crude oil imports fell from 1.7 million barrels per day in 2011 to 1 million barrels per day in 2013. As a result, there may be dwindling amounts of light crude oil imports that can be reduced in the future, according to EIA. Increased crude oil exports: Crude oil exports have increased recently, from less than 30,000 barrels per day in 2008 to 396,000 barrels per day in June 2014. Continued increases in crude oil exports will depend, in part, on the extent of any relaxation of current export restrictions, according to EIA. Increased use of light crude oils at domestic refineries: Domestic refineries have increased the average gravity of crude oils that they refine. The average API gravity of crude oil used in U.S. refineries increased from 30.2 degrees in 2008 to 30.8 degrees in 2013, according to EIA. Continued shifts to use additional lighter crude oils at domestic refineries can be enabled by investments to relieve constraints associated with refining lighter crude oils at refineries that were optimized to refine heavier crude oils, according to EIA. Increased use of domestic refineries: In recent years, domestic refineries have been run more intensively, allowing the use of more domestic crude oils.
Utilization—a measure of how intensively refineries are used that is calculated by dividing total crude oil and other inputs used at refineries by the amount refineries can process under usual operating conditions—increased from 86 percent in 2011 to 88 percent in 2013. There may be limits to further increases in utilization of refineries that are already running at high rates, according to EIA. In our September 2014 report, we reported that according to the studies we reviewed and the stakeholders we interviewed, removing crude oil export restrictions would likely increase some domestic crude oil prices but could decrease consumer fuel prices, although the extent of consumer fuel price changes is uncertain and may vary by region. As discussed earlier, increasing domestic crude oil production has resulted in lower prices of some domestic crude oils compared with international benchmark crude oils. Three of the studies we reviewed also concluded that, absent changes in crude oil export restrictions, the expected growth in crude oil production may not be fully absorbed by domestic refineries or through exports (where allowed), contributing to even wider differences in prices between some domestic and international crude oils. According to these studies, by removing the export restrictions, these domestic crude oils could be sold at prices closer to international prices, reducing the price differential and aligning the price of domestic crude oil with international benchmarks. Several uncertainties complicate these estimates. Specifically, the Department of Commerce’s definition of crude oil includes condensates, which are light liquid hydrocarbons recovered primarily from natural gas wells and are therefore subject to export restrictions. One stakeholder stated that this may lead to more condensate exports than expected. Within the context of these uncertainties, estimates of potential price effects vary in the four studies we reviewed, as shown in table 1.
Specifically, estimates in these studies of the increase in domestic crude oil prices due to removing crude oil export restrictions ranged from about $2 to $8 per barrel. At the time of our review, the international benchmark crude oil price was $103 per barrel, and these estimates represented 2 to 8 percent of that price. In addition, NERA Economic Consulting found that removing export restrictions would have no measurable effect in a case that assumes a low future international oil price of $70 per barrel in 2015, rising to less than $75 by 2035. According to the NERA Economic Consulting study, current production costs are close to these values, so that removing export restrictions would provide little incentive to produce more light crude oil. Unless otherwise noted, dollar estimates in the rest of this report have been converted to 2014 year dollars. These are average price effects over the study time frames, and some cases in some studies projected larger price effects in the near term that declined over time. ICF International: West Texas Intermediate crude oil prices increase $2.35 to $4.19 per barrel on average from 2015-2035. IHS: Prices increase $7.89 per barrel on average from 2016-2030. NERA Economic Consulting: Prices increase $1.74 per barrel in the reference case and $5.95 per barrel in the high case on average from 2015-2035. Implications refer to the difference between the reference case and its baseline with export restrictions in place, and also the difference between the high oil and gas recovery case and its corresponding baseline. NERA Economic Consulting also found that removing crude oil export restrictions would have no measurable effect in the low world oil price case. Regarding consumer fuel prices, such as gasoline, diesel, and jet fuel, the studies we reviewed and most of the stakeholders we interviewed suggested that consumer fuel prices could decrease as a result of removing crude oil export restrictions.
A decrease in consumer fuel prices could occur because such prices tend to follow international crude oil prices rather than domestic crude oil prices, according to the studies reviewed and most of the stakeholders interviewed. If domestic crude oil exports caused international crude oil prices to decrease, consumer fuel prices could decrease as well. Table 2 shows that the estimates of the price effects on consumer fuels varied in the four studies we reviewed. Price estimates ranged from a decrease of 1.5 to 13 cents per gallon. These estimates represented 0.4 to 3.4 percent of the average U.S. retail gasoline price at the beginning of June 2014. In addition, NERA Economic Consulting found that removing export restrictions would have no measurable effect on consumer fuel prices when assuming a low future world crude oil price. Resources for the Future also estimated a decrease in consumer fuel prices, but this decrease is a result of increased refinery efficiency (even with an estimated slight increase in the international crude oil price). ICF International: Petroleum product prices would decline by 1.5 to 2.4 cents per gallon on average from 2015-2035. IHS: Gasoline prices would decline by 9 to 13 cents per gallon on average from 2016-2030. NERA Economic Consulting: Petroleum product prices would decline by 3 cents per gallon on average from 2015-2035 in the reference case and 11 cents per gallon in the high case. Gasoline prices would decline by 3 cents per gallon in the reference case and 10 cents per gallon in the high case. Fuel prices would not be affected in a low world oil price case. Implications refer to the difference between the reference case and its baseline with export restrictions in place, and the difference between the high oil and gas recovery case and its corresponding baseline. The effect of removing crude oil export restrictions on domestic consumer fuel prices depends on several uncertainties, as we discussed in our September 2014 report.
First, it would depend on the extent to which domestic versus international crude oil prices determine the domestic price of consumer fuels. A 2014 research study examining the relationship between domestic crude oil and gasoline prices concluded that low domestic crude oil prices in the Midwest during 2011 did not result in lower gasoline prices in that region. This research supports the assumption made in the four studies we reviewed that, to some extent, higher prices of some domestic crude oils as a result of removing crude oil export restrictions would not be passed on to consumer fuel prices. However, some stakeholders told us that this may not always be the case and that more recent or detailed data could show that lower prices for some domestic crude oils have influenced consumer fuel prices. (The Merchant Marine Act of 1920, also known as the Jones Act, generally requires that any vessel, including barges, operating between two U.S. ports be U.S.-built, -owned, and -operated.) Some stakeholders also suggested that higher domestic crude oil prices could put some refineries at risk of closure, especially those located in the Northeast. However, according to one stakeholder, domestic refiners still have a significant cost advantage in the form of less expensive natural gas, which is an important energy source for many refineries. For this and other reasons, one stakeholder told us it did not anticipate refinery closures as a result of removing export restrictions. The studies we reviewed for our September 2014 report generally suggested that removing crude oil export restrictions may increase domestic crude oil production and may affect the environment and the economy: Crude oil production. Removing crude oil export restrictions may increase domestic crude oil production. Even with current crude oil export restrictions, given various scenarios, EIA projected that domestic production will continue to increase through 2020.
If export restrictions were removed, according to the four studies we reviewed, the increased prices of domestic crude oil are projected to lead to further increases in crude oil production. Projections of this increase varied in the studies we reviewed—from a low of an additional 130,000 barrels per day on average from 2015 through 2035, according to the ICF International study, to a high of an additional 3.3 million barrels per day on average from 2015 through 2035 in NERA Economic Consulting’s study. This is equivalent to 1.5 percent to almost 40 percent of production in April 2014. Environment. Two of the studies we reviewed stated that the increased crude oil production that could result from removing the restrictions on crude oil exports may affect the environment. Most stakeholders we interviewed echoed this statement. This is consistent with a September 2012 report in which we found that crude oil development may pose certain inherent environmental and public health risks. However, the extent of the risk is unknown, in part, because the severity of adverse effects depends on various location- and process-specific factors, including the location of future shale oil and gas development and the rate at which it occurs. It also depends on geology, climate, business practices, and regulatory and enforcement activities. The stakeholders who raised concerns about the effect of removing the restrictions on crude oil exports on the environment identified risks including those related to the quality and quantity of surface and groundwater sources, increases in greenhouse gas and other air emissions, and increases in the risk of spills from crude oil transportation. The economy. The four studies we reviewed suggested that removing crude oil export restrictions would increase the size of the economy. Three of the studies projected that removing export restrictions would lead to additional investment in crude oil production and increases in employment.
This growth in the oil sector would—in turn—have additional positive effects in the rest of the economy. For example, NERA Economic Consulting’s study projected an average of 230,000 to 380,000 workers would be removed from unemployment through 2020 if export restrictions were eliminated in 2015. These employment benefits would largely disappear if export restrictions were not removed until 2020 because by then the economy would have returned to full employment. Two of the studies we reviewed suggested that removing export restrictions would increase government revenues, although the estimates of the increase vary. One study estimated that total government revenue would increase by a combined $1.4 trillion in additional revenue from 2016 through 2030, and another study estimated that U.S. federal, state, and local tax receipts combined with royalties from drilling on federal lands could increase by an annual average of $3.9 to $5.7 billion from 2015 through 2035. Chairman Conaway, Ranking Member Peterson, and Members of the Committee, this completes my prepared statement. I would be pleased to answer any questions that you may have at this time. If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions include Christine Kehr (Assistant Director), Quindi Franco, Alison O’Neill, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
After decades of generally falling U.S. crude oil production, technological advances in the extraction of crude oil from shale formations have contributed to increases in U.S. production. In response to these and other market developments, some have proposed removing the 4-decade-old restrictions on crude oil exports, underscoring the need to understand how allowing crude oil exports could affect crude oil prices, and the prices of consumer fuels refined from crude oil, such as gasoline and diesel. This testimony discusses what is known about the pricing and other key potential implications of removing crude oil export restrictions. It is based on GAO's September 2014 report (GAO-14-807) and information on crude oil production and prices updated in June 2015. For that report, GAO reviewed four studies issued in 2014 on crude oil exports: two sponsored by industry and conducted by consultants, one sponsored by a research organization and conducted by consultants, and one conducted at a research organization. Market conditions have changed since these studies were conducted, underscoring some uncertainties surrounding estimates of potential implications of removing crude oil export restrictions. For its 2014 report, GAO also summarized the views of a nongeneralizable sample of 17 stakeholders, including representatives of companies and interest groups with a stake in the outcome of decisions regarding crude oil export restrictions, as well as academic, industry, and other experts. In September 2014, GAO reported that, according to studies it reviewed and stakeholders it interviewed, removing crude oil export restrictions would likely increase domestic crude oil prices but could decrease consumer fuel prices, although the extent of price changes is uncertain and may vary by region. The studies identified the following implications for U.S. crude oil and consumer fuel prices: Crude oil prices.
The four studies GAO reviewed estimated that if crude oil export restrictions were removed, U.S. crude oil prices would increase by about $2 to $8 per barrel—bringing them closer to international prices. Prices for some U.S. crude oils have been lower than international prices—for example, one benchmark U.S. crude oil averaged $52 per barrel from January through May 2015, while a comparable international crude oil averaged $57. In addition, one study found that, when assuming low future crude oil prices overall, removing export restrictions would have no measurable effect on U.S. crude oil prices. Consumer fuel prices. The four studies suggested that U.S. prices for gasoline, diesel, and other consumer fuels follow international prices. If domestic crude oil exports caused international crude oil prices to decrease, consumer fuel prices could decrease as well. Estimates of the consumer fuel price implications in the four studies GAO reviewed ranged from a decrease of 1.5 to 13 cents per gallon. In addition, one study found that, when assuming low future crude oil prices, removing export restrictions would have no measurable effect on consumer fuel prices. Some stakeholders cautioned that estimates of the price implications of removing export restrictions are subject to several uncertainties, such as the extent of U.S. crude oil production increases, and how readily U.S. refiners are able to absorb such increases. Some stakeholders further told GAO that there could be important regional differences in the price implications of removing export restrictions. The studies GAO reviewed and the stakeholders it interviewed generally suggested that removing crude oil export restrictions may also have the following implications: Crude oil production. Removing export restrictions may increase domestic production—over 8 million barrels per day in April 2014—because of increasing domestic crude oil prices.
Estimates ranged from an additional 130,000 to 3.3 million barrels per day on average from 2015 through 2035. Environment. Additional crude oil production may pose risks to the quality and quantity of surface and groundwater sources; increase greenhouse gas and other emissions; and increase the risk of spills from crude oil transportation. The economy. Three of the studies projected that removing export restrictions would lead to additional investment in crude oil production and increases in employment. This growth in the oil sector would—in turn—have additional positive effects in the rest of the economy, including for employment and government revenues.
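As a rough check, the projected production increases can be compared with April 2014 output. The sketch below assumes a baseline of about 8.4 million barrels per day; the testimony says only "over 8 million barrels per day," so the exact baseline is our assumption.

```python
# Illustrative arithmetic (not GAO's analysis): relate the studies'
# projected production increases to April 2014 domestic output.
# ASSUMPTION: baseline production of ~8.4 million barrels per day; the
# testimony cites only "over 8 million barrels per day" for April 2014.
BASELINE_BPD = 8.4e6

projections = {
    "low (ICF International)": 130_000,
    "high (NERA Economic Consulting)": 3.3e6,
}
for label, increase_bpd in projections.items():
    share_pct = increase_bpd / BASELINE_BPD * 100
    print(f"{label}: +{increase_bpd:,.0f} bpd, "
          f"about {share_pct:.1f}% of baseline production")
```

With that assumed baseline, the low and high projections come to roughly 1.5 percent and 39 percent, consistent with the 1.5-to-almost-40-percent range cited in the testimony.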
CBP and APHIS have taken four major steps intended to strengthen the AQI program since the transfer of responsibilities following passage of the Homeland Security Act of 2002. To date, we have not done work to assess the implementation and effectiveness of these actions. First, CBP and APHIS expanded the hours of training on agricultural issues for CBP officers, whose primary duty is customs and immigration inspection, and for CBP agriculture specialists, whose primary duty is agricultural inspection. Specifically, newly hired CBP officers receive 16 hours of training on agricultural issues, whereas before the transfer to CBP, customs inspectors received 4 hours of agricultural training, and immigration inspectors received 2 hours. CBP and APHIS also expanded agriculture training for CBP officers at their respective ports of entry to help them make better-informed decisions on agricultural items at high-volume border traffic areas. Additionally, CBP and APHIS have standardized the in-port training program and have developed a national standard for agriculture specialists with a checklist of activities for agriculture specialists to master. These activities are structured into an 8-week module on passenger inspection procedures and a 10-week module on cargo inspection procedures. Based on our survey of agriculture specialists, we estimate that 75 percent of specialists hired by CBP believe that they received sufficient training (on the job and at the Professional Development Center) to enable them to perform their agriculture inspection duties. Second, CBP and APHIS have taken steps designed to better target shipments and passengers that potentially present a high risk to U.S. agriculture.
Specifically, some CBP agriculture specialists received training and were given access to CBP’s Automated Targeting System, a computer system that, among other things, is designed to focus limited inspection resources on higher-risk passengers and cargo and facilitate expedited clearance or entry for low-risk passengers and cargo. This system gives agriculture specialists detailed information from cargo manifests and other documents that shipping companies are required to submit before the ship arrives in a port to help them select high-risk cargo for inspection. CBP and APHIS headquarters personnel also use this information to identify companies that had previously violated U.S. quarantine laws. For example, according to a senior APHIS official, the two agencies used this system to help identify companies that have used seafood containers to smuggle uncooked poultry products from Asia, which are currently banned because of concerns over avian influenza. Third, CBP and APHIS established a formal assessment process intended to ensure that ports of entry carry out agricultural inspections in accordance with the agricultural quarantine inspection program’s regulations, policies, and procedures. The process, called Joint Agency Quality Assurance Reviews, covers topics such as (1) CBP coordination with other federal agencies; (2) agriculture specialist training; (3) specialist access to regulatory manuals; and (4) specialist adherence to processes for handling violations at the port, inspecting passenger baggage and vehicles, and intercepting, seizing, and disposing of confiscated materials. The reviews address best practices and deficiencies at each port and make recommendations for corrective actions to be implemented within 6 weeks. For example, regarding best practices, a review of two ports found that the placement of CBP, APHIS, and Food and Drug Administration staff in the same facility enhanced their coordination. 
This review also lauded their targeting of non-agricultural products that are packed with materials, such as wood, that may harbor pests or diseases that could pose a risk to U.S. agriculture. Regarding deficiencies, this review found that the number of CBP agriculture specialists in each port was insufficient, and that the specialists at one of the ports were conducting superficial inspections of commodities that should have been inspected more intensely. According to CBP, the agency took actions to correct these deficiencies, although we have not evaluated those actions. In September 2007, CBP said that the joint review team had conducted 13 reviews in fiscal years 2004 through 2006, and 7 reviews were completed or underway for fiscal year 2007. Seven additional reviews are planned for fiscal year 2008. Lastly, in May 2005, CBP required each director in its 20 district field offices to appoint an agriculture liaison, with background and experience as an agriculture specialist, to provide CBP field office directors with agriculture-related input for operational decisions and agriculture specialists with senior-level leadership. The agriculture liaisons are to, among other things, advise the director of the field office on agricultural functions; provide oversight for data management, statistical analysis, and risk management; and coordinate agriculture inspection alerts. CBP officials told us that all district field offices had established the liaison position as of January 2006. Since the creation of the position, agriculture liaisons have facilitated the dissemination of urgent alerts from APHIS to CBP. They also provide information back to APHIS. For example, following a large increase in the discovery of plant pests at a port in November 2005, the designated agriculture liaison sent notice to APHIS, which then issued alerts to other ports. 
APHIS and CBP subsequently identified this agriculture liaison as a contact for providing technical advice for inspecting and identifying this type of plant pest. In fiscal year 2006, we surveyed a representative sample of CBP agriculture specialists regarding their experiences and opinions since the transfer of the AQI program from APHIS to CBP. In general, the views expressed by these specialists indicate that they believe that the agricultural inspection mission has been compromised. We note that morale issues are not unexpected in a merger such as the integration of the AQI mission and staff into CBP’s primary anti-terrorism mission. GAO has previously reported on lessons learned from major private and public sector experiences with mergers that DHS could use when combining its various components into a unified department. Among other things, productivity and effectiveness often decline in the period following a merger, in part because employees often worry about their place in the new organization. Nonetheless, based on the survey results, while 86 percent of specialists reported feeling very well or somewhat prepared for their duties as an agriculture specialist, many believed that the agriculture mission had been compromised by the transfer. Specifically, 59 percent of experienced specialists indicated that they are doing either somewhat or many fewer inspections since the transfer, and 60 percent indicated that they are doing somewhat or many fewer interceptions. In addition, 63 percent of agriculture specialists believed their port did not have enough specialists to carry out agriculture-related duties. Agriculture specialists reported that they spent 62 percent of their time on agriculture inspections and 35 percent on non-agricultural functions such as customs and immigration inspections.
In addition, there appear to be morale issues based on the responses to two open-ended questions: (1) What is going well with respect to your work as an agriculture specialist? and (2) What would you like to see changed or improved with respect to your work as an agriculture specialist? Notably, the question about what needs improving generated a total of 185 pages of comments—roughly 4 times more than that generated by the responses to our question on what was going well. Further, “Nothing is going well” was the second-most frequent response to the question on what is going well. We identified common themes in the agriculture specialists’ responses to our first question about what is going well with respect to their work as an agriculture specialist. The five most common themes were: Working relationships. An estimated 18 percent of agriculture specialists cited the working relationship among agriculture specialists and CBP officers and management as positive. These specialists cited increasing respect and interest by non-specialists in the agriculture mission, and the attentiveness of CBP management to agriculture specialists’ concerns. Nothing. An estimated 13 percent of agriculture specialists reported that nothing is going well with their work. For example, some respondents noted that the agriculture inspection mission has been compromised under CBP and that agriculture specialists are no longer important or respected by management. Salary and Benefits. An estimated 10 percent of agriculture specialists expressed positive comments about their salary and benefits, with some citing increased pay under CBP, a flexible work schedule, increased overtime pay, and retirement benefits as reasons for their views. Training. An estimated 8 percent of agriculture specialists identified elements of classroom and on-the-job training as going well. 
Some observed that new hires are well trained and that agriculture-related classroom training at the Professional Development Center in Frederick, Maryland, is adequate for their duties. General job satisfaction. An estimated 6 percent of agriculture specialists were generally satisfied with their jobs, reporting, among other things, that they were satisfied in their working relationships with CBP management and coworkers and that they believed in the importance of their work in protecting U.S. agriculture from foreign pests and diseases. In contrast, agriculture specialists wrote nearly 4 times as much in response to our question about what they would like to see changed or improved with respect to their work as agriculture specialists. In addition, larger proportions of specialists identified each of the top five themes. Declining mission. An estimated 29 percent of agriculture specialists were concerned that the agriculture mission is declining because CBP has not given it adequate priority. Some respondents cited the increase in the number of cargo items and flights that are not inspected because of staff shortages, scheduling decisions by CBP port management, and the release of prohibited or restricted products by CBP officers. Working relationships. An estimated 29 percent of the specialists expressed concern about their working relationships with CBP officers and management. Some wrote that CBP officers at their ports view the agriculture mission as less important than CBP’s other priorities, such as counternarcotics and anti-terrorism activities. Others noted that CBP management is not interested in, and does not support, agriculture inspections. CBP chain of command. An estimated 28 percent of agriculture specialists identified problems with the CBP chain of command that impede timely actions involving high-risk interceptions, such as a lack of managers with an agriculture background and the agency’s rigid chain-of-command structure. 
For example, agriculture specialists wrote that requests for information from USDA pest identification experts must be passed up the CBP chain of command before they can be conveyed to USDA. Training. An estimated 19 percent of agriculture specialists believed that training in the classroom and on the job is inadequate. For example, some respondents expressed concern about a lack of courses on DHS’s targeting and database systems, which some agriculture specialists use to target high-risk shipments and passengers. Also, some agriculture specialists wrote that on-the-job training at their ports is poor, and that CBP officers do not have adequate agriculture training to recognize when to refer items to agriculture specialists for inspection. Lack of equipment. An estimated 17 percent of agriculture specialists were concerned about a lack of equipment and supplies. Some respondents wrote that the process for purchasing items under CBP results in delays in acquiring supplies and that there is a shortage of agriculture-specific supplies, such as vials, gloves, and laboratory equipment. These themes are consistent with responses to relevant multiple-choice questions in the survey. For example, in response to one of these questions, 61 percent of agriculture specialists believed their work was not respected by CBP officers, and 64 percent believed their work was not respected by CBP management. Although CBP and APHIS have taken a number of actions intended to strengthen the AQI program since its transfer to CBP, several management problems remain that may leave U.S. agriculture vulnerable to foreign pests and diseases. Most importantly, CBP has not used available data to evaluate the effectiveness of the program. These data are especially important in light of many agriculture specialists’ views that the agricultural mission has been compromised and can help CBP determine necessary actions to close any performance gaps. 
Moreover, at the time of our May 2006 review, CBP had not developed sufficient performance measures to manage and evaluate the AQI program, and the agency had allowed the agricultural canine program to deteriorate. Furthermore, based on its staffing model, CBP does not have the agriculture specialists needed to perform its AQI responsibilities. CBP has not used available data to monitor changes in the frequency with which prohibited agricultural materials and reportable pests are intercepted during inspection activities. CBP agriculture specialists record monthly data in the Work Accomplishment Data System for each port of entry, including (1) arrivals of passengers and cargo to the United States via airplane, ship, or vehicle; (2) agricultural inspections of arriving passengers and cargo; and (3) inspection outcomes, i.e., seizures or detections of prohibited (quarantined) agricultural materials and reportable pests. As of our May 2006 report, CBP had not used these data to evaluate the effectiveness of the AQI program. For example, our analysis of the data for the 42 months before and 31 months after the transfer of responsibilities from APHIS to CBP shows that average inspection and interception rates have changed significantly in some geographical regions of the United States, with rates increasing in some regions and decreasing in others. (Appendixes I and II provide more information on average inspection and interception rates before and after the transfer from APHIS to CBP.) Specifically, average inspection rates declined significantly in the Baltimore, Boston, Miami, and San Francisco district field offices, and in preclearance locations in Canada, the Caribbean, and Ireland. Inspection rates increased significantly in seven other districts—Buffalo, El Paso, Laredo, San Diego, Seattle, Tampa, and Tucson. 
In addition, the average rate of interceptions decreased significantly at ports in six district field offices—El Paso, New Orleans, New York, San Juan, Tampa, and Tucson—while average interception rates have increased significantly at ports in the Baltimore, Boston, Detroit, Portland, and Seattle districts. Of particular note are three districts that have experienced a significant increase in their rate of inspections and a significant decrease in their interception rates since the transfer. Specifically, since the transfer, the Tampa, El Paso, and Tucson districts appear to be more efficient at inspecting (e.g., inspecting a greater proportion of arriving passengers or cargo) but less effective at interceptions (e.g., intercepting fewer prohibited agricultural items per inspection). Also of concern are three districts—San Juan, New Orleans, and New York—that are inspecting at about the same rate, but intercepting less, since the transfer. When we showed the results of our analysis to senior CBP officials, they were unable to explain these changes or determine whether the current rates were appropriate relative to the risks, staffing levels, and staff expertise associated with individual districts or ports of entry. These officials also noted that CBP has had problems interpreting APHIS data reports because CBP lacked staff with expertise in agriculture and APHIS’s data systems in some district offices. As of our May 2006 report, CBP had not yet completed or implemented its plan to add agriculture-related data to its system for monitoring customs inspections. However, in September 2007, CBP said it had taken steps to use these data to evaluate the program’s effectiveness. For example, CBP publishes a monthly report that includes analysis of inspection efficiency, arrivals, exams, and seizures of prohibited items, including agricultural quarantine material and pest interceptions, for each pathway.
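The inspection and interception rates underlying this analysis reduce to two simple ratios computed from monthly port-level counts. The sketch below uses hypothetical numbers; it illustrates the rate definitions, not GAO's actual computation.

```python
# Illustration of the two rates discussed above, computed from
# Work Accomplishment Data System-style monthly counts for one port.
# All counts below are hypothetical.

def inspection_rate(inspections: int, arrivals: int) -> float:
    """Share of arriving passengers/cargo that were inspected."""
    return inspections / arrivals

def interception_rate(interceptions: int, inspections: int) -> float:
    """Seizures/detections of prohibited material or pests per inspection."""
    return interceptions / inspections

# Hypothetical month: 200,000 arrivals, 30,000 inspections, 450 interceptions.
print(f"inspection rate:   {inspection_rate(30_000, 200_000):.1%}")  # 15.0%
print(f"interception rate: {interception_rate(450, 30_000):.2%}")    # 1.50%
```

A district whose inspection rate rises while its interception rate falls (the pattern noted for the Tampa, El Paso, and Tucson districts) is inspecting a larger share of arrivals but finding less per inspection.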
CBP also conducts a mid-year analysis of APHIS and CBP data to assess agricultural inspection efficiency at ports of entry. While these appear to be positive steps, we have not assessed their adequacy to measure the AQI program’s effectiveness. A second management problem for the AQI program is an incomplete set of performance measures to balance multiple responsibilities and demonstrate results. As of our May 2006 report, CBP had not developed and implemented its own performance measures for the program. Instead, according to CBP officials, CBP carried over two measures that APHIS had used to assess the AQI program before the transfer: the percentages of international air passengers and border vehicle passengers that comply with program regulations. However, these measures addressed only two pathways for agricultural pests, neglecting other pathways such as commercial aircraft, vessels, and truck cargo. Further, these performance measures did not provide information about changes in inspection and interception rates, which could help assess the efficiency and effectiveness of agriculture inspections in different regions of the country or at individual ports of entry. They also did not address the AQI program’s expanded mission—to prevent agro-terrorism while facilitating the flow of legitimate trade and travel. In early 2007, a joint team from CBP and APHIS agreed to implement additional performance measures for AQI activities in all major pathways at ports of entry. Specifically, CBP said that in fiscal year 2007 it implemented measures for the percentages of land border, air, and maritime regulated cargo and shipments in compliance with AQI regulations. Furthermore, the agency plans to add additional performance measures such as percentage of passengers, vehicles, or mail in compliance in fiscal years 2008 and 2009. 
However, we have not evaluated the adequacy of these new performance measures for assessing the AQI program’s effectiveness at intercepting foreign pests and diseases. Third, the number and proficiency of canine teams decreased substantially between the time of the transfer, March 2003, and the time of our review, May 2006. In the past, these dogs have been a key tool for targeting passengers and cargo for detailed inspections. Specifically, APHIS had approximately 140 canine teams nationwide at the time of the transfer, but CBP had only 80 such teams at the time of our review. With regard to proficiency, 60 percent of the 43 agriculture canine teams tested by APHIS in 2005 failed proficiency tests. These tests require the dog to respond correctly in a controlled, simulated work environment and ensure that dogs are working effectively to catch potential prohibited agricultural material. In general, canine specialists we interviewed expressed concern that the proficiency of their dogs was deteriorating due to a lack of working time. That is, the dogs were sidelined while the specialists were assigned to other duties. In addition, based on our survey results, 46 percent of canine specialists said they were directed to perform duties outside their primary canine duties daily or several times a week. Furthermore, 65 percent of canine specialists indicated that they sometimes or never had funding for training supplies. Another major change to the canine program, following the transfer, was CBP’s elimination of all canine management positions. Finally, based on its staffing model, CBP lacks adequate numbers of agriculture specialists to accomplish the agricultural mission. The Homeland Security Act authorized the transfer of up to 3,200 AQI personnel from USDA to DHS. 
In March 2003, APHIS transferred a total of 1,871 agriculture specialist positions, including 317 vacancies, to CBP and distributed those positions across CBP’s 20 district field offices, encompassing 139 ports of entry. Because of the vacancies, CBP lacked adequate numbers of agriculture specialists from the beginning and had little assurance that appropriate numbers of specialists were staffed at each port of entry. Although CBP has made some progress in hiring agriculture specialists since the transfer, we previously reported that CBP lacked a staffing model to ensure that more than 630 newly hired agriculture specialists were assigned to the ports with the greatest need, and to ensure that each port had at least some experienced specialists. Accordingly, in May 2006 we recommended that APHIS and CBP work together to develop a national staffing model to ensure that agriculture staffing levels at each port are sufficient. Subsequently, CBP developed a staffing model for its ports of entry and provided GAO with its results. Specifically, as of mid-August 2007, CBP said it had 2,116 agriculture specialists on staff, compared to 3,154 such specialists needed according to the model. The global marketplace of agricultural trade and international travel has increased the number of pathways for the movement and introduction into the United States of foreign and invasive agricultural pests and diseases such as foot-and-mouth disease and avian influenza. Given the importance of agriculture to the U.S. economy, ensuring the effectiveness of federal programs to prevent accidental or deliberate introduction of potentially destructive organisms is critical. Accordingly, effective management of the AQI program is necessary to ensure that agriculture issues receive appropriate attention. 
Although we have reported that CBP and APHIS have taken steps to strengthen agricultural quarantine inspections, many agriculture specialists believe that the agricultural mission has been compromised. While morale issues, such as the ones we identified, are to be expected in the merger establishing DHS, CBP had not used key data to evaluate the program’s effectiveness and could not explain significant increases and decreases in inspections and interceptions. In addition, CBP had not developed performance measures to demonstrate that it is balancing its multiple mission responsibilities, and it does not have sufficient agriculture specialists based on its staffing model. Until the integration of agriculture issues into CBP’s overall anti-terrorism mission is more fully achieved, U.S. agriculture may be left vulnerable to the threat of foreign pests and diseases. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames at (202) 512-3841 or shamesl@gao.gov. Key contributors to this testimony were James Jones, Jr., Assistant Director, and Terrance Horner, Jr. Josey Ballenger, Kevin Bray, Chad M. Gorman, Lynn Musser, Omari Norman, Alison O’Neill, and Steve C. Rossman also made important contributions. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1240T. Washington, D.C.: September 18, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. Customs Revenue: Customs and Border Protection Needs to Improve Workforce Planning and Accountability. GAO-07-529. Washington, D.C.: April 12, 2007.
Homeland Security: Agriculture Specialists’ Views of Their Work Experiences after Transfer to DHS. GAO-07-209R. Washington, D.C.: November 14, 2006. Invasive Forest Pests: Recent Infestations and Continued Vulnerabilities at Ports of Entry Place U.S. Forests at Risk. GAO-06-871T. Washington, D.C.: June 21, 2006. Homeland Security: Management and Coordination Problems Increase the Vulnerability of U.S. Agriculture to Foreign Pests and Disease. GAO-06-644. Washington, D.C.: May 19, 2006. Homeland Security: Much Is Being Done to Protect Agriculture from a Terrorist Attack, but Important Challenges Remain. GAO-05-214. Washington, D.C.: March 8, 2005. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003. Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies. GAO-03-293SP. Washington, D.C.: November 14, 2002. Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. agriculture generates over $1 trillion in economic activity annually, but concerns exist about its vulnerability to foreign pests and diseases. Under the agricultural quarantine inspection (AQI) program, passengers and cargo are inspected at U.S. ports of entry to intercept prohibited material and pests. The Homeland Security Act of 2002 transferred responsibility for inspections from the U.S. Department of Agriculture's (USDA) Animal and Plant Health Inspection Service (APHIS) to the Department of Homeland Security's (DHS) Customs and Border Protection (CBP). APHIS retained some AQI-related responsibilities, such as policy setting and training. This testimony is based on issued GAO reports and discusses (1) steps DHS and USDA took that were intended to strengthen the AQI program, (2) agriculture specialists' views of their work experiences since the transfer, and (3) management problems. As part of these reports, GAO surveyed a representative sample of agriculture specialists on their work experiences, analyzed inspection and interception data, and interviewed agency officials. CBP and APHIS have taken steps intended to strengthen the AQI program since transfer of inspection responsibilities from USDA to DHS in March 2003. Specifically, CBP and APHIS have expanded the hours of, and developed a national standard for, agriculture training; given agriculture specialists access to a computer system intended to better target inspections at ports; and established a joint review process for assessing compliance with the AQI program on a port-by-port basis. In addition, CBP has created new agricultural liaison positions at the field office level to advise regional port directors on agricultural issues. We have not assessed the implementation and effectiveness of these actions. However, GAO's survey of CBP agriculture specialists found that many believed the agriculture inspection mission had been compromised by the transfer. 
Although 86 percent of agriculture specialists reported feeling very well or somewhat prepared for their duties, 59 and 60 percent of specialists answered that they were conducting fewer inspections and interceptions, respectively, of prohibited agricultural items since the transfer. When asked what is going well with respect to their work, agriculture specialists identified working relationships (18 percent), nothing (13 percent), salary and benefits (10 percent), training (10 percent), and general job satisfaction (6 percent). When asked what areas should be changed or improved, they identified working relationships (29 percent), priority given to the agriculture mission (29 percent), problems with the CBP chain of command (28 percent), training (19 percent), and inadequate equipment and supplies (17 percent). Based on private and public sector experiences with mergers, these morale issues are not unexpected because employees often worry about their place in the new organization. CBP must address several management problems to reduce the vulnerability of U.S. agriculture to foreign pests and diseases. Specifically, as of May 2006, CBP had not used available inspection and interception data to evaluate the effectiveness of the AQI program. CBP also had not developed sufficient performance measures to manage and evaluate the AQI program. CBP's measures focused on only two pathways by which foreign pests and diseases may enter the country and pose a threat to U.S. agriculture. However, in early 2007, CBP initiated new performance measures to track interceptions of pests and quarantine materials at ports of entry. We have not assessed the effectiveness of these measures. In addition, CBP has allowed the agricultural canine program to deteriorate, including reductions in the number of canine teams and their proficiency. Lastly, CBP had not developed a risk-based staffing model for determining where to assign agriculture specialists. 
Without such a model, CBP did not know whether it had an appropriate number of agriculture specialists at each port. Subsequent to our review, CBP developed a model. As of mid-August 2007, CBP had 2,116 agriculture specialists on staff, compared with 3,154 specialists needed, according to the staffing model.
Several measures are commonly used within the health care sector to gauge the price of prescription drugs. These measures vary because drug manufacturers and retail pharmacies charge different purchasers different prices, and drug prices can vary substantially depending on the purchaser. (See fig. 1.) The U&C price, the retail price for a drug, is the price an individual without prescription drug coverage would pay at a retail pharmacy. The U&C price includes the acquisition cost of the drug paid by the retail pharmacy and a markup charged by the pharmacy. AWP is the average of the list, or “sticker,” prices that a manufacturer of a drug suggests wholesalers charge pharmacies. AWP is typically less than the U&C price, which includes the pharmacy’s own markup. AWP is not the actual price that large purchasers normally pay. Nevertheless, AWP is part of the formula used by many state Medicaid programs and private third-party payers to reimburse retail pharmacies. AMP is the average of prices paid to a manufacturer by wholesalers for a drug distributed to the retail pharmacy class of trade, after subtracting cash discounts or other price reductions. CMS uses AMP in determining rebates drug manufacturers must provide, as required by the Omnibus Budget Reconciliation Act of 1990, to state Medicaid programs as a condition for the federal contribution to Medicaid spending for the manufacturers’ outpatient prescription drugs. For brand drugs, the minimum rebate amount is the number of units of the drug multiplied by 15.1 percent of the AMP. From January 2000 through December 2004, the average U&C prices for a typical 30-day supply of 96 prescription drugs frequently used by BCBS FEP Medicare and non-Medicare enrollees increased 24.5 percent. 
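As an illustrative aside, the 15.1-percent rebate floor described above reduces to a one-line calculation. The sketch below uses hypothetical unit counts and AMP values, and shows only this minimum; the full statutory rebate formula has additional components not covered in this report.

```python
# Minimum Medicaid rebate for a brand drug, per the 15.1-percent-of-AMP
# floor described above. All values below are hypothetical illustrations.

MIN_REBATE_RATE = 0.151  # 15.1 percent of AMP for brand drugs

def minimum_brand_rebate(units: int, amp_per_unit: float) -> float:
    """Minimum rebate = units dispensed x 15.1 percent of the per-unit AMP."""
    return units * MIN_REBATE_RATE * amp_per_unit

# Example: 1,000 units at a hypothetical AMP of $2.00 per unit
print(round(minimum_brand_rebate(1_000, 2.00), 2))  # 302.0
```

In practice such calculations are done per quarter from manufacturer-reported AMP data; this sketch only shows the arithmetic.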
The average U&C prices for 75 prescription drugs frequently used by Medicare beneficiaries and for 76 prescription drugs frequently used by non-Medicare enrollees increased at similar rates. The average U&C prices for 50 frequently used brand drugs increased three times faster than the average U&C prices for 46 frequently used generic drugs. From January 2000 through December 2004, the average U&C price collected from retail pharmacies by PACE and EPIC for a 30-day supply of 96 prescription drugs frequently used by BCBS FEP Medicare beneficiaries and non-Medicare enrollees increased 24.5 percent, a 4.6 percent average annual rate of increase. (See fig. 2.) During the same period, using nationwide data from the Bureau of Labor Statistics (BLS), prices for prescription drugs and medical supplies for all urban consumers increased 21.3 percent, a 4.0 percent average annual rate of increase. Additionally, using BLS data, prices for all consumer items for all urban consumers—the Consumer Price Index—increased 12.7 percent, a 2.5 percent average annual rate of increase from January 2000 through December 2004. While U&C prices increased each year from 2000 through 2004, the greatest annual rate of increase—6.1 percent—occurred from January 2002 to January 2003. (See fig. 3.) Since then, annual rates of increase have been smaller: 5.2 percent from January 2003 to January 2004 and 4.2 percent from January 2004 to December 2004. Twenty drugs, representing 33 percent of BCBS FEP prescriptions for the 96 drugs we reviewed, accounted for 64 percent of the total increase in the U&C price index from January 2000 through December 2004. The drug with the largest effect on the price index was Lipitor 10mg, which accounted for 6.6 percent of the total increase. Nineteen of the 20 drugs were brand drugs and 1 was a generic drug, Hydrocodone/Acetaminophen 5/500mg. The twenty drugs accounting for the largest changes in the U&C price index are listed below. 
From January 2000 through December 2004, the average U&C prices collected by PACE and EPIC for 75 prescription drugs frequently used by BCBS FEP Medicare beneficiaries increased at a similar rate as the average U&C prices for 76 prescription drugs frequently used by BCBS FEP non-Medicare enrollees. (See fig. 4.) The prices of 75 Medicare drugs increased 24.0 percent, a 4.5 percent average annual rate of increase. The prices of 76 non-Medicare drugs increased 24.8 percent, a 4.6 percent average annual rate of increase. From January 2000 through December 2004, the average U&C price (based on PACE and EPIC data) for 50 frequently used brand drugs rose three times faster than the average U&C price for 46 frequently used generic drugs. (See fig. 5.) Specifically, the average U&C price for brand drugs increased 28.9 percent, a 5.3 percent average annual rate of increase, whereas U&C prices for generic drugs increased 9.4 percent, a 1.8 percent average annual rate of increase. From the first quarter of 2000 through the fourth quarter of 2004, AMPs and U&C prices for the 50 brand drugs increased at similar rates, but AWPs increased at a faster rate. The quarterly AWPs for 50 brand prescription drugs increased 31.6 percent, a 6.0 percent average annual rate of increase. For these same 50 drugs, the quarterly AMPs increased 28.2 percent, a 5.4 percent average annual rate of increase, while the average quarterly U&C prices increased 27.5 percent, a 5.2 percent average annual rate of increase. Over the entire period, the AWP index increased about 3 to 4 percentage points more than the AMP or U&C price indexes. (See fig. 6.) The difference between the levels of AWP and U&C prices for brand drugs narrowed slightly during the time period we analyzed. Whereas in the first quarter of 2000 AWP was on average about 91 percent of the U&C price for the same drug, by the fourth quarter of 2004 AWP was on average about 94 percent of the U&C price. 
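The cumulative and average annual rates quoted in this section are related by ordinary compound growth. The sketch below reproduces one pairing (the 24.5 percent cumulative increase reported as a 4.6 percent average annual rate); treating January 2000 through December 2004 as a 59-month span is our assumption about how the period was annualized.

```python
# Convert a cumulative price change into a compound average annual rate.
# The 59-month annualization of Jan 2000 - Dec 2004 is an assumption here.

def annual_rate(cumulative_change: float, months: int) -> float:
    """Compound annual rate implied by a cumulative fractional change."""
    return (1 + cumulative_change) ** (12 / months) - 1

# A 24.5 percent cumulative increase over 59 months
print(round(annual_rate(0.245, 59) * 100, 1))  # 4.6
```

The same calculation reproduces the other pairings in this section to within rounding (e.g., a 28.9 percent cumulative increase yields about 5.3 percent per year).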
In contrast, AMP remained a roughly constant share of the U&C price, averaging about 72 percent in both the first quarter of 2000 and the fourth quarter of 2004. Ten brand drugs in each index, representing one-third or more of the prescriptions for the 50 brand drugs, accounted for almost 50 percent of the increase for the quarterly AMP, AWP, and U&C price indexes. Eight of these 10 drugs were the same across all three price indexes. The drug accounting for the largest portion of the change in the AMP and AWP indexes was Celebrex 200mg, accounting for 8.6 percent of the increase for AMP and 7.5 percent for AWP. Lipitor 10mg was the drug accounting for the largest portion of the change in the quarterly U&C price index and accounted for 7.2 percent of the increase for the 50 brand drugs. (See fig. 7.) From 2000 through 2004, retail prices for drugs frequently used by Medicare beneficiaries increased 24.0 percent—an average rate of 4.5 percent per year. In general, higher drug prices mean higher spending by consumers and health insurance sponsors, including employers and federal and state governments. With brand drug prices increasing three times as fast as generic drug prices, public and private health insurance sponsors will likely continue to focus on strategies to encourage increased use of generic drugs when available. Starting in 2006, with the introduction of the Medicare prescription drug benefit, Medicare will be paying claims for a wider array of drugs and, as a result, the federal government will be affected more than previously by rising drug prices. We found that from 2000 through 2004, on average the AWPs for 50 frequently used brand drugs rose 0.8 percent per year faster than the retail prices for these same drugs. A continuation of this difference between AWP and retail price increases could affect many Medicaid programs and private third-party payers that base their reimbursement of drug claims on AWPs. 
We provided a draft of this report to CMS, PACE, EPIC, and BCBS FEP. In commenting on this report, CMS highlighted the discounts and price information tools that will be available under the Medicare drug benefit. CMS also stated that neither the U&C price nor AWP reflects discounts, such as manufacturers’ discount programs, or other price concessions affecting a drug’s price. We noted in the report that U&C represents the retail pharmacy price paid by consumers without insurance. The U&C does not reflect prices available from other sources, such as mail order pharmacies. We also noted that AWP is a list price that is not the actual price paid by large purchasers. We agree that consumers may be able to obtain lower prices than reflected by the U&C and AWP. However, the focus of our analysis was to examine price trends rather than price levels, and U&C and AWP are consistent measures used to assess price trends. Further, increases in the published AWP may increase what many public or private third-party purchasers pay for prescription drugs because AWP is often included in the formula to calculate payments to pharmacies. Additionally, CMS suggested that we examine the effect on prices when generic alternatives are introduced. We agree that the introduction of generic drugs can reduce consumer payments for drugs. Examining changes in consumer spending for drugs, which are also affected by changes in utilization and the introduction of new drug alternatives, would be useful but was beyond the scope of this report, which examines price trends for frequently used brand and generic drugs. PACE and BCBS provided technical comments that we incorporated as appropriate; EPIC stated that it did not have any comments. As agreed with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send copies of this report to the Administrator of CMS and other interested parties. 
We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or kanofm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To examine the change in retail prices for prescription drugs frequently used by Medicare beneficiaries and other individuals with health insurance, we used data from the Blue Cross and Blue Shield (BCBS) Federal Employee Program (FEP) to select the 100 prescription drugs most frequently dispensed through retail pharmacies in 2003 for BCBS FEP Medicare enrollees and the 100 most frequently dispensed for BCBS FEP non-Medicare enrollees. Combined, these two lists included 133 unique drugs. We obtained average monthly usual and customary (U&C) prices reported by retail pharmacies to Pennsylvania’s Pharmaceutical Assistance Contract for the Elderly (PACE) program from January 2000 through December 2004 and New York’s Elderly Pharmaceutical Insurance Coverage (EPIC) program from August 2000 through December 2004. We collected prices based on a specific strength, dosage form, and common number of units (such as pills), typically for a 30-day supply. Based on combined PACE and EPIC data, 96 of the 133 drugs we selected had prices reported for every month from January 2000 through December 2004. We analyzed price trends on a monthly basis from January 2000 through December 2004 for these 96 drugs. Of the 96 drugs, 75 were among those most frequently used by BCBS FEP Medicare enrollees, and 76 were among those most frequently used by BCBS FEP non-Medicare enrollees. Fifty-five of the 96 drugs were frequently used by both BCBS Medicare enrollees and non-Medicare enrollees. 
We first determined the total number of prescriptions in 2003 for the drugs we selected dispensed to BCBS FEP Medicare enrollees and the total number of prescriptions dispensed to BCBS FEP non-Medicare enrollees. Separately for drugs frequently used by Medicare and by non-Medicare enrollees, we calculated the share of the total number of BCBS FEP prescriptions attributed to each drug. The price of each drug was then weighted by its relative share of total Medicare or total non-Medicare prescriptions in 2003 to calculate the average price for frequently used Medicare drugs and the average price for frequently used non-Medicare drugs for each month from January 2000 through December 2004. We standardized these averages to create a Medicare price index and a non-Medicare price index, each with a value of 100 as of January 2000. We also separately analyzed monthly trends in U&C prices for brand and generic drugs frequently used by BCBS FEP enrollees. Of the 96 drugs, 50 were brand drugs and 46 were generic drugs. Similar to our calculation of Medicare and non-Medicare price indexes, we calculated indexes for brand drugs and generic drugs based on each drug’s share of the total number of brand or generic prescriptions dispensed to BCBS FEP enrollees in 2003. To examine the change in retail prices for frequently used drugs compared to other drug price benchmarks, we compared an index based on the U&C prices reported by PACE and EPIC for 50 brand drugs to indexes based on the average manufacturer prices (AMP) and average wholesale prices (AWP) for these 50 drugs on a quarterly basis from the first quarter of 2000 through the fourth quarter of 2004. The Centers for Medicare & Medicaid Services (CMS) requires manufacturers to report AMP within 30 days of the end of each calendar quarter. Manufacturers submit AWPs on a periodic basis to publishers of drug-pricing data, such as First DataBank. 
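The index construction described above (prescription-share weights applied to average monthly prices, standardized to 100 in the base month) can be sketched as follows; the drug names, prices, and prescription counts below are hypothetical.

```python
# Prescription-weighted price index, standardized to 100 in the base month,
# following the weighting approach described above. Data are hypothetical.

def price_index(monthly_prices, prescriptions_2003):
    """monthly_prices: list of {drug: price} dicts, base month first."""
    total_rx = sum(prescriptions_2003.values())
    weights = {d: n / total_rx for d, n in prescriptions_2003.items()}
    # Weighted average price for each month
    averages = [sum(weights[d] * month[d] for d in weights)
                for month in monthly_prices]
    # Standardize so the base month equals 100
    return [100 * avg / averages[0] for avg in averages]

monthly_prices = [
    {"DrugA": 60.00, "DrugB": 12.00},  # base month (e.g., January 2000)
    {"DrugA": 66.00, "DrugB": 12.72},  # a later month
]
prescriptions_2003 = {"DrugA": 3_000, "DrugB": 1_000}

print([round(v, 2) for v in price_index(monthly_prices, prescriptions_2003)])
# [100.0, 109.75]
```

Because the weights are fixed at 2003 prescription shares, this is a fixed-basket index: changes in the index reflect price movement only, not shifts in which drugs are dispensed.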
Using the National Drug Codes (NDC) reported by PACE and EPIC for the U&C prices for the 50 brand drugs, we obtained per unit AMPs from CMS and per unit AWPs from First DataBank associated with each NDC. For each drug, we calculated a quarterly AMP and a quarterly AWP by multiplying the per unit price by the most common number of units for a 30-day supply. We created an AMP and AWP index by weighting the 50 brand drugs by the number of prescriptions in 2003 from BCBS FEP. Similarly, we recalculated the U&C price for the 50 brand drugs on a quarterly basis to make comparisons to AMP and AWP. We also determined how much each drug’s change in price contributed to the overall change in price for the 50 brand drugs for AMPs, AWPs, and U&C prices. We measured the share each drug contributed to the overall index by comparing the ratio of (1) each drug’s price change from January 2000 through December 2004 multiplied by its weight based on BCBS FEP prescriptions, to (2) the sum of all drugs’ price changes multiplied by their associated weights. Our analyses are limited to drugs most frequently used by Medicare beneficiaries and by non-Medicare enrollees in the 2003 BCBS FEP. Additionally, our analyses using U&C prices are limited to prices reported by retail pharmacies in Pennsylvania to the PACE program and by retail pharmacies in New York to the EPIC program. We reviewed the reliability of data from BCBS FEP, CMS, First DataBank, EPIC, and PACE, including screening for outlier prices in the PACE and EPIC data and ensuring that the price trends and frequently used drugs were consistent with other data sources. We determined that these data were sufficiently reliable for our purposes. We performed our work from April 2004 through July 2005 in accordance with generally accepted government auditing standards. Table 1 lists the 96 drugs used in constructing monthly U&C price indexes from January 2000 through December 2004. 
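The contribution-share ratio described above, each drug's weighted price change divided by the weighted sum of all drugs' price changes, can be sketched with hypothetical data:

```python
# Share of the overall index change contributed by each drug: its price
# change times its prescription weight, divided by the weighted sum of
# all drugs' price changes. All data below are hypothetical.

def contribution_shares(price_changes, weights):
    weighted = {d: price_changes[d] * weights[d] for d in price_changes}
    total = sum(weighted.values())
    return {d: w / total for d, w in weighted.items()}

price_changes = {"DrugA": 10.0, "DrugB": 2.0, "DrugC": 4.0}  # dollars
weights = {"DrugA": 0.5, "DrugB": 0.3, "DrugC": 0.2}  # prescription shares

shares = contribution_shares(price_changes, weights)
print({d: round(s, 3) for d, s in shares.items()})
# {'DrugA': 0.781, 'DrugB': 0.094, 'DrugC': 0.125}
```

The shares sum to one by construction, so a drug with a modest price change but a large prescription weight can still dominate the index, as Lipitor 10mg does in the report's results.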
Fifty of the 96 drugs are brand drugs and were also used in examining price changes in AMP, AWP, and U&C on a quarterly basis from first quarter 2000 through fourth quarter 2004. Of the 96 drugs, 75 were frequently used by Medicare beneficiaries and 76 were frequently used by non-Medicare enrollees, with 55 of these drugs frequently used by both Medicare beneficiaries and non-Medicare enrollees. In addition to the contact named above, John E. Dicken, Director; Rashmi Agarwal; Jessica L. Cobert; Martha Kelly; Matthew L. Puglisi; and Daniel S. Ries made key contributions to this report.
Prescription drug spending has been the fastest growing segment of national health expenditures. As the federal government assumes greater financial responsibility for prescription drug expenditures with the introduction of Medicare part D, federal policymakers are increasingly concerned about prescription drug prices. GAO was asked to examine the change in retail prices and other pricing benchmarks for drugs frequently used by Medicare beneficiaries and other individuals with health insurance from 2000 through 2004. To examine the change in retail prices from 2000 through 2004, GAO obtained usual and customary (U&C) prices from two state pharmacy assistance programs for drugs frequently used by Medicare beneficiaries and non-Medicare enrollees in the 2003 Blue Cross and Blue Shield (BCBS) Federal Employee Program (FEP). The U&C price is the price an individual without prescription drug coverage would pay at a retail pharmacy. Additionally, GAO compared the change in U&C prices for brand drugs from 2000 through 2004 to the change in two pricing benchmarks: average manufacturer price (AMP), which is the average of prices paid to manufacturers by wholesalers for drugs distributed to the retail pharmacy class of trade, and average wholesale price (AWP), which represents the average of list prices that a manufacturer suggests wholesalers charge pharmacies. GAO found the average U&C prices at retail pharmacies reported by two state pharmacy assistance programs for a 30-day supply of 96 drugs frequently used by BCBS FEP Medicare and non-Medicare enrollees increased 24.5 percent from January 2000 through December 2004. 
Of the 96 drugs: Twenty drugs accounted for nearly two-thirds of the increase in the U&C price index; the increase in average U&C prices for 75 prescription drugs frequently used by Medicare beneficiaries was similar to the increase for 76 prescription drugs frequently used by non-Medicare enrollees; and the average U&C prices for 50 frequently used brand prescription drugs increased three times as much as the average for 46 frequently used generic prescription drugs. AWPs increased at a faster rate than AMPs and U&C prices for the 50 frequently used brand drugs from first quarter 2000 through fourth quarter 2004. Ten drugs in each index accounted for almost 50 percent of the increase for AMP, AWP, and U&C prices. Eight of these 10 drugs were consistent across the three price indexes. The Centers for Medicare & Medicaid Services (CMS), two state pharmacy assistance programs, and BCBS FEP reviewed a draft of this report. While CMS noted that U&C and AWP do not reflect discounts in a drug's price, this report's focus was to examine price trends rather than price levels. Technical comments were incorporated as appropriate.
The DI, SSI, and VA programs are three separate federal disability programs that differ in their underlying intent, the populations they serve, and the specific approaches used by SSA and VA to assess disability. Yet, each program provides financial assistance to individuals with a reduced capacity to work due to a physical or mental impairment. Program beneficiaries also have a connection to vocational assistance that can help them minimize the economic loss resulting from their disabilities. All three programs have experienced growth in recent years. The amount of cash benefits paid to program beneficiaries has increased over the past 10 years (see fig. 1). In 2001, DI provided $54.2 billion in cash benefits to 5.3 million disabled workers, SSI provided $19.0 billion in federal cash benefits to 3.7 million disabled and blind individuals age 18-64, and VA provided $16.5 billion in disability compensation benefits to about 2.3 million veterans. Since 1991, the cash benefits for these programs increased by 69 percent, 55 percent, and 32 percent, respectively (adjusted for inflation). In addition, since 1991 the number of DI, SSI, and VA beneficiaries grew by 65 percent, 53 percent, and 6 percent, respectively. The size of the programs could grow in the years ahead. In fact, DI and SSI are expected to grow significantly over the next decade. By 2010, SSA expects worker applications for DI to increase by as much as 32 percent over 2000 levels. In 2000, VA predicted that while the number of veterans receiving disability benefits will decrease approximately 18 percent over the next 10 years, the caseload will decline annually by less than 1 percent during this time period. 
VA explained that veterans will likely incur more disabilities than in the past because, for example, veterans of the all-volunteer force are older at time of discharge with longer periods of service, and also because better outreach and access make veterans more aware of benefits to which they are entitled. Moreover, VA’s estimate of the number of veterans assumed the United States would not be engaged in any major global or regional conflict. The recent war on terrorism, however, could affect VA’s future projections on the size of the disabled veterans population. SSA provides disability benefits to people found to be work disabled under the DI or SSI program. Established in 1956, DI is an insurance program that provides benefits to workers who are unable to work because of severe long-term disability. In 2000, the most common impairments among DI’s disabled workers were mental disorders and musculoskeletal conditions (see fig. 2). These two conditions also were the fastest growing conditions since 1986, increasing by 7 and 5 percentage points, respectively. Workers who have worked long enough and recently enough are insured for coverage under the DI program. DI beneficiaries receive cash assistance and, after a 24-month waiting period, Medicare coverage. Once found eligible for benefits, disabled workers continue to receive benefits until they die, return to work and earn more than allowed by program rules, are found to have medically improved to the point of having the ability to work, or reach full retirement age (when disability benefits convert to retirement benefits). To help ensure that only eligible beneficiaries remain on the rolls, SSA is required by law to conduct continuing disability reviews for all DI beneficiaries to determine whether they continue to meet the disability requirements of the law. 
SSI, created in 1972, is an income assistance program that provides cash benefits for disabled, blind, or aged individuals who have low income and limited resources. In 2000, the most common impairments among the group of SSI blind and disabled adults age 18-64 were mental disorders and mental retardation (see fig. 3). Mental disorders was the fastest growing condition among this population since 1986, increasing by 9 percentage points. Unlike the DI program, SSI has no prior work requirement. In most cases, SSI eligibility makes recipients eligible for Medicaid benefits. SSI benefits terminate for the same reasons as DI benefits, although SSI benefits also terminate when a recipient no longer meets SSI income and resource requirements (SSI benefits do not convert to retirement benefits when the individual reaches full retirement age). The law requires that continuing disability reviews be conducted for some SSI recipients for continuing eligibility. The Social Security Act’s definition of disability under DI and SSI is the same: an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or to result in death and (2) prevents the individual from engaging in substantial gainful activity (SGA). Moreover, the definition specifies that for a person to be determined to be disabled, the impairment must be of such severity that the person not only is unable to do his or her previous work but, considering his or her age, education, and work experience, is unable to do any other kind of substantial work that exists in the national economy. (See app. III for a more complete description of SSA’s five-step process to determine DI and SSI eligibility.) While not expressly required by law to update the criteria used in the disability determination process, SSA has stated that it would update them to reflect current medical criteria and terminology. 
Over the years, SSA has periodically ensured that the medical information and the structure of its Listing of Impairments—which describe impairments that are presumed by the agency to be severe enough to prevent a person from doing substantial gainful activity—were both acceptable for program purposes and consistent with current medical thinking. The last general update to the Listing of Impairments (also known as the Medical Listings) occurred in 1985, at which time expiration dates ranging from 3 to 8 years were inserted for individual body systems to ensure that the agency periodically reviews and, if necessary, updates the Medical Listings. The statutes establishing the DI and SSI programs presume that disability, for program eligibility, is long-term and based on an either-or decision. That is, a person is either capable or incapable of engaging in substantial gainful work. However, the Social Security Act allows beneficiaries to use a “ticket” issued by the Commissioner of SSA to obtain free employment services, vocational rehabilitation services, or other services to find employment. Also, Congress has established various work incentives intended to safeguard cash and health benefits while a beneficiary tries to return to work. Despite these provisions, few DI and SSI beneficiaries have left the rolls to return to work, although the ticket program may have an impact on future rates. The either-or process produces a strong incentive for applicants to establish their inability to work to qualify for benefits, and work-related supports and services (including health coverage) are offered only after individuals have completed the eligibility process. Yet our past work found that DI beneficiaries believe that health interventions—such as medical procedures, medications, physical therapy, and psychotherapy—are primary factors in assisting them to work. 
VA’s disability program compensates veterans for the average loss in earning capacity in civilian occupations that results from injuries or conditions incurred or aggravated during military service. In 2000, the most common impairment category among all disabled veterans was illness and injury to bones and joints (see fig. 4). This impairment category also experienced the fastest growth among the disabled veteran population since 1986, increasing by 6 percentage points. VA’s program is similar to the DI and SSI programs in that all three programs provide cash benefits to persons whose physical or mental impairments have been deemed to reduce their ability to earn a living. However, VA relies upon an average reduction in earning capacity across a group of individuals with a similar condition rather than the actual reduction for an individual veteran applying for benefits. As a result, a veteran with a disability is entitled to disability cash benefits whether or not employed and regardless of the amount earned. The cash benefit level is based on the “percentage evaluation,” commonly called the disability rating, that represents the average loss in earning capacity associated with the severity of physical and mental conditions. VA uses its Schedule for Rating Disabilities to determine which disability rating to assign to a veteran’s particular condition. Ratings for individual diagnoses in the schedule range from 0 percent to 100 percent. For example, VA presumes that the loss of a foot as a result of military service results in a 40 percent impairment in earning capacity, on average, among veterans with this injury. All veterans who lose a foot as a result of military service, therefore, are entitled to a 40 percent disability rating. Unlike the DI and SSI programs, the law does not specifically require VA to conduct continuing disability reviews to determine whether veterans continue to meet the disability requirements of the law. 
The Schedule for Rating Disabilities was first developed in 1919 and had its last major revision in 1945. Two major studies have been conducted since the implementation of the 1945 version of the schedule to determine whether the schedule constitutes an adequate basis for compensating veterans with service-connected conditions. One was conducted by a presidential commission in the mid-1950s and a second by VA in the late 1960s. Both concluded, for various reasons, that at least some disability ratings in the schedule did not accurately reflect the average impairment in earning capacity among disabled veterans and needed to be adjusted. The law states that VA shall, from time to time, readjust the schedule based upon experience. Keeping the schedule current is important because cash benefits are based on the schedule. We previously reported, however, that VA’s rating schedule that was being used in the late 1980s had not been adjusted to incorporate the results of many recent medical advances, and as a result, some veterans may be undercompensated and others may be overcompensated for their service-connected disability. Further, we recommended that VA (1) prepare a plan for a comprehensive review of the rating schedule and, based on the results, revise medical criteria accordingly and (2) implement a procedure for systematically reviewing the rating schedule to keep it updated. Veterans with a service-connected disability rated at 20 percent or higher who are found by VA to have an employment handicap can receive rehabilitation services. Eligible veterans can receive vocational counseling, training, job search assistance, and supportive rehabilitation services. In addition, VA offers veterans a medical benefits package that provides a full range of outpatient and inpatient services, including primary and specialty care as well as drugs. 
Recent scientific advances in medicine and assistive technology and changes in the nature of work and the types of jobs in our national economy have generally enhanced the potential for people with disabilities to perform work-related activities. Advances in medicine have afforded the scientific community a deeper understanding of and ability to treat disease and injury. Medical advancements in treatment (such as organ transplantations), therapy, and rehabilitation have reduced the severity of some medical conditions and have allowed individuals to live with greater independence and function in settings such as the workplace. Also, assistive technologies—such as advanced wheelchair design, a new generation of prosthetic devices, and voice recognition systems—afford greater capabilities for some people with disabilities than were available in the past. At the same time, the nature of work has changed in recent decades as the national economy has moved away from manufacturing-based jobs to service- and knowledge-based employment. In the 1960s, earning capacity became more related to a worker’s skills and training than to his or her ability to perform physical labor. Following World War II and the Korean Conflict, advancements in technology, including computers and automated equipment, reduced the need for physical labor. The goods-producing sector’s share of the economy—mining, construction, and manufacturing—declined from about 44 percent in 1945 to about 18 percent in 2000. The service-producing industry’s share, on the other hand—such areas as wholesale and retail trade; transportation and public utilities; federal, state and local government; and finance, insurance, and real estate—increased from about 57 percent in 1945 to about 72 percent in 2000. 
Although certain jobs in the service economy continue to be physically demanding—a cashier in a fast food restaurant might be expected to stand for most of his or her shift—other service- and knowledge-based jobs can allow greater participation for persons with physical limitations. In addition, telecommuting and part-time work provide other options for persons with disabilities. However, some labor market trends—such as an increasing pace of change in office environments and the need for adaptability—can pose particular challenges for some persons, such as those with severe mental illness and learning disabilities. Moreover, other trends—such as downsizing and the growth in contingent workers—can limit job security and benefits, like health insurance, that most persons with disabilities require for participation in the labor force. Whether these changes make it easier or more difficult for a person with a disability to work appears to depend very much on the individual’s impairment and other characteristics, according to experts. Social change has promoted the goals of greater inclusion of and participation by people with disabilities in the mainstream of society, including adults at work. For instance, over the past 2 decades, people with disabilities have sought to remove environmental barriers that impede them from fully participating in their communities. Moreover, the Americans with Disabilities Act supports the full participation of people with disabilities in society and fosters the expectation that people with disabilities can work and have the right to work. The Americans with Disabilities Act prohibits employers from discriminating against qualified individuals with disabilities and requires employers to make reasonable workplace accommodations unless it would impose an undue hardship on the business. 
The disability criteria used by the DI, SSI, and VA disability programs to help determine who is qualified to receive benefits have not been fully updated to reflect scientific advances. Both SSA and VA are currently in the midst of a process that began around the early 1990s to update the medical criteria they use to make eligibility decisions, but progress is slow. The updates include dropping or adding conditions that qualify a person for benefits, modifying the criteria needed to establish the presence and severity of certain medical conditions, and making wording changes to clarify and guide decision making. Agencies report that they made some of these changes due to medical advances in treatment that have reduced the severity and occurrence of some medical conditions. Nevertheless, the statutory and regulatory design of these programs limits the role of treatment in determining who is disabled. Therefore, treatment advances, by definition, have not been folded into the updates. Moreover, because of the statutory design of these programs, the role of assistive technologies is not recognized in making disability decisions. Consequently, the updates have not fully incorporated innovations in this field, such as advanced prosthetics and wheelchair designs. SSA’s current effort to update the disability criteria began in the early 1990s. To conduct the current update, SSA gathers feedback on relevant medical issues from state officials who help the agency make disability decisions. In addition, SSA has in-house expertise to help the agency keep abreast of the medical field and identify aspects of the medical criteria that need to be changed. SSA staff develop the proposed changes and forward them for internal review, including legal and financial review. Next, SSA publishes the proposed changes in the Federal Register and solicits comments from the public for 60 days. 
SSA considers the public comments, makes necessary adjustments, and publishes the final changes in the Federal Register. Between 1991 and 1993, SSA published for public comment the changes it was proposing to make to 7 of the 14 body systems in its Medical Listings. By 1994, the proposed changes to 5 of these 7 body systems were finalized, although SSA told us that the changes to 2 systems were relatively minor. SSA’s efforts to update the Medical Listings were curtailed in the mid-1990s due to staff shortages, competing priorities, and a lack of adequate research on disability issues. Since the mid-1990s, we, SSA’s Office of the Inspector General, and the Social Security Advisory Board have expressed concern that SSA was not updating the Medical Listings regularly but simply extending the expiration dates originally established to ensure that the updates would be conducted. In fact, the Office of the Inspector General recommended that SSA develop a performance measure of its update activities for inclusion in SSA’s annual performance plan. SSA did not agree with the recommendation, responding that revisions to the Medical Listings are subject to some factors not fully in its control (e.g., the progression of scientific advances, input from experts and the public, and shifting congressional priorities), which can affect the timing and prioritization of the effort. In our view, these uncertainties—in addition to the size and costs of the programs—in fact elevate the need for establishing a time frame to ground SSA in its efforts and help keep the agency on track. Moreover, SSA is allowed to revise performance measures in its annual plans. SSA resumed updating the Medical Listings in 1998. Since then, SSA has taken some positive steps in updating portions of the medical criteria it uses to make eligibility decisions, although progress is slow. 
As of early 2002, SSA has published the final updated criteria for 1 of the 9 remaining body systems not updated in the early 1990s (musculoskeletal) and a portion of a second body system (mental disorders). SSA also plans to update again the 5 body systems that were updated in the early 1990s. In addition, SSA has asked the public to comment on proposed changes for several other body systems. During the course of our work, SSA initially indicated to us that the agency planned to publish proposed changes for all body systems by 2002 and submit changes to the Office of Management and Budget for final clearance by 2003. Recently, the new administration at SSA (a new commissioner was confirmed in November 2001) reviewed the schedule and timing for the revisions. The results of this review pushed back the completion date for publishing proposed changes for all remaining body systems to the end of 2003. The revised schedule, as of May 2002, is shown in table 1. SSA’s slow progress in completing the updates could undermine the purpose of incorporating medical advances into its medical criteria. For example, the criteria for musculoskeletal conditions—a common impairment among persons entering DI—were updated in 1985. Then, in 1991, SSA began developing new criteria and published its proposed changes in 1993 but did not finalize the changes until 2002; therefore, changes made to the musculoskeletal criteria in 2002 were essentially based on SSA’s review of the field in the early 1990s. SSA officials told us that in finalizing the criteria, they reviewed the changes identified in the early 1990s and found that little had taken place since then to warrant changes to the proposed criteria. However, given the advancements in medical science since 1991, it may be difficult for SSA to be certain that all applicable medical advancements are in fact included in the most recent update. 
Similarly, we are concerned about the time frames for completing the full update of the criteria for another major impairment category—mental disorders. While SSA finalized in 2000 a portion of the changes for mental disorders first proposed in 1991, the agency deferred action on the remaining portion pending further review. SSA recently announced plans to publish these proposed changes by November 2003. Keeping to a set schedule and making necessary updates could help SSA minimize the use of outmoded criteria in a large number of disability decisions. For example, SSA used the criteria for musculoskeletal conditions that were developed in 1985 until 2001. This means that in the year prior to the update—2000—SSA allowed 222,750 adults to enter the DI or SSI program on the basis of medical criteria that were 15 years old. VA has made more progress than SSA in updating the medical criteria used to evaluate its disability claims, but overall the process is slow. In 1989, VA hired a contractor to bring together practicing physicians to review and develop updated criteria for several of the body systems contained in the Schedule for Rating Disabilities. The practicing physicians, who were organized into teams according to specific body systems, were tasked with proposing changes that were consistent with modern medical practice and stated in a manner that could be easily interpreted by rating personnel. The results of the teams’ efforts were reviewed by VA in-house staff. After making necessary adjustments, the proposed changes were forwarded to various VA offices for review. Proposed changes were published in the Federal Register and opened for a 60-day comment period. As of March 2002, VA had finalized the criteria for 11 of 16 body systems. VA is currently reviewing the remaining body systems. VA has generally taken more than 5 years to complete the update for each body system (see fig. 5). 
VA has not yet completed updating the medical criteria for several important body systems. For example, criteria used for evaluating orthopedic impairments were last updated in 1986. Yet the number of veterans with a disabling orthopedic condition has risen significantly in the past decade, outpacing the number of veterans receiving benefits under any other single disability group. Therefore, veterans with an orthopedic impairment who applied for VA disability benefits since 1996 were evaluated with medical criteria that were at least 10 years old. We found two factors contributing to the length of time needed to update VA’s medical criteria. First, the review given to the proposed changes is lengthy. VA’s legal counsel as well as other entities within VA, such as the Veterans Health Administration, Office of Congressional and Legislative Affairs, and Office of Inspector General, review all proposed changes to the Schedule for Rating Disabilities. The Office of Management and Budget also reviews the changes. This entire review process can take up to 3 years. Second, the limited number of staff assigned to coordinate the updates also lengthens the process. For example, one staff person is assigned less than half time to coordinate the update efforts. VA does not have a well-defined plan to conduct the next round of medical updates. Although VA provided us with a statement acknowledging the need to re-review the medical criteria in the future, it had neither a strategy nor a time frame for completing the task. SSA has made various types of changes to the Medical Listings thus far. As shown in table 2, these changes, including the proposed changes released to the public for comment, add or delete qualifying conditions; modify the criteria for certain physical or mental conditions; and clarify and provide additional guidance in making disability decisions. In addition, SSA has made a number of editorial changes. 
In recognition of medical advances, VA has also made several types of changes to its Schedule for Rating Disabilities during the current update. As shown in table 3, the types of changes have been quite similar to changes made by SSA. Revisions generally consist of (1) adding, deleting, and reorganizing medical conditions in the Schedule for Rating Disabilities; (2) revising the criteria for certain qualifying conditions; and (3) making wording changes to clarify or reflect current medical terminology. VA also has made a number of editorial changes. Program design issues have limited the extent to which advances in medicine and technology have been incorporated into disability decision making in the DI, SSI, and VA programs. SSA has indicated that the updates are being made in recognition of medical advances in treatment and technology, and we found examples in SSA’s publications in the Federal Register of this occurring. Our methodology for this study, however, does not allow us to determine the extent of SSA’s efforts to incorporate medical advances into the Medical Listings. Nevertheless, the design of these programs limits the role of treatment in deciding who is disabled. SSA’s regulations require that in order to receive benefits, claimants must follow treatment prescribed by the individual’s physician if the treatment can restore his or her ability to work. The implication of this regulation is that if an individual is not prescribed treatment, SSA does not consider the possible effects of treatment in the disability decision, even if the treatment could make the difference between being able and not being able to work. Moreover, the programs do not require individuals to receive nonprescribed treatment before or during the time they are assessed for eligibility. Thus, treatments that can help restore functioning to persons with certain impairments may not be factored into the disability decision for some applicants. 
This limited role of treatment means, by definition, that the updates have not fully captured the benefits that treatments can provide to persons with certain impairments. For example, medications to control severe mental illness, arthritis treatments to slow or stop joint damage, total hip replacements for severely injured hips, and drugs and physical therapies that may improve the symptoms associated with multiple sclerosis are not automatically factored into SSA’s decision making for determining the extent to which impairments affect people’s ability to work. Additionally, this limited approach to treatment raises an equity issue: applicants whose treatment allows them to work could be denied benefits, while applicants with the same condition who have not been prescribed treatment could be allowed benefits. While some of VA’s changes to the Schedule for Rating Disabilities reflect advances in medicine, the changes have generally not incorporated the potential benefits of treatment. While treatment can improve an individual’s ability to function in the workplace, the program is not designed to factor in the potential benefits of treatment when evaluating a veteran’s service-connected disability. That is, veterans applying for disability benefits—much like, for example, workers applying for DI benefits—are not required to undergo treatment before or after they are given a disability rating. Moreover, the VA program does not, unlike DI and SSI, factor in the potential effect of prescribed treatment on an applicant’s abilities. As with treatment, the benefits of innovations in assistive technologies—such as advanced prosthetics and wheelchair designs—have not been fully incorporated into DI, SSI, and VA disability criteria because the statutory design of these programs does not recognize these advances in disability decision making. That is, the programs are not designed to assess an applicant’s ability to work under corrected conditions. 
Conceivably, using innovations such as a prosthetic device could reduce the limiting nature of an applicant’s impairment and could also reduce, if programs were designed differently, eligibility for or the amount of cash benefits. And some technologies may not involve sophisticated electronics. For example, a factory worker with a back impairment who works on an assembly line could benefit from an ergonomic stool or chair and matting that would cushion the floor and reduce fatigue. According to VA, technological advances, such as voice recognition devices—which can help people who do not have the use of their hands to interact with a computer—are not considered during the rating process to determine the extent to which technology could improve a veteran’s earning capacity. The disability criteria used by DI, SSI, and VA programs for determining who is disabled have not incorporated labor market changes. In determining the effect that impairments have on individuals’ earning capacity, programs continue to use outdated information about the types and demands of jobs in the economy. Given the nature of today’s economy, which offers varied opportunities for work, agencies’ use of outdated information raises questions about the validity of disability decisions. For an applicant who does not have an impairment that SSA presumes is severe enough ordinarily to prevent an individual from engaging in substantial gainful activity, SSA evaluates whether the individual is able to work despite his or her limitations. Individuals who are unable to perform their previous work and other work in the national economy are awarded benefits. SSA relies upon the Department of Labor’s Dictionary of Occupational Titles (DOT) as its primary database to make this determination; however, Labor has not updated DOT since 1991 and does not plan to do so. Since 1993, Labor has been working on a replacement for the DOT called the Occupational Information Network (O*NET). 
O*NET contains information on about 970 occupational categories, while the DOT had 13,000 occupational titles. Labor and SSA officials recognize that O*NET cannot be used in its current form in the DI and SSI disability determination process. O*NET, for example, does not contain information SSA needs on the amount of lifting or the mental demands associated with particular jobs. The agencies have discussed ways that O*NET might be modified or supplemental information collected to meet SSA’s needs, but no definitive solution has been identified. SSA officials have indicated that an entirely new occupational database could be needed to meet SSA’s needs, but such an effort could take many years to develop, validate, and implement. Meanwhile, as new jobs and job requirements evolve in the national economy, SSA’s reliance upon an outdated database further distances the agency from the current marketplace. The percentage ratings used in VA’s Schedule for Rating Disabilities are still primarily based on physicians’ and lawyers’ estimates made in 1945 about the effects that service-connected impairments have on the average individual’s ability to perform jobs requiring manual or physical labor. Although VA is revising the Schedule for Rating Disabilities’ medical criteria, the estimates of how impairments affect veterans’ earnings have generally not been reexamined. As a result, changes in the nature of work that have occurred in the past 57 years—which potentially affect the extent to which disabilities limit one’s earning capacity—are overlooked by the program’s criteria. For example, in an increasingly knowledge-based economy, one could consider whether earning capacity is still reduced, on average, by 40 percent for loss of a foot. VA recognizes that there have been significant changes in the nature of work, but does not believe that these changes need to be reflected in the disability ratings. 
One official noted that a disability rating is essentially an indication of medical severity: the more severe the medical condition, the higher the rating. Moreover, it was stated, changes in the nature of work are captured in the types of vocational rehabilitation services offered to veterans (e.g., veterans could receive computer skills training). Finally, the official noted that disability compensation should not be adjusted if an individual veteran is able to work despite a disabling condition. In the past, we suggested to Congress that it may wish to consider directing VA to determine whether VA ratings correspond to veterans’ average loss in earning capacity and adjust disability ratings accordingly. VA responded to us that the schedule, as constructed, represents a consensus among Congress, VA, and the veteran community, and that the ratings generally represent an equitable method to determine disability compensation. In conducting the work for our present assignment, VA told us that it believes the consensus remains and that the ratings continue to generally represent an equitable approach. We continue to believe, however, that changes in the nature of work afford some veterans with a disability the opportunity to become more fully employed and that the current estimates of the average reduction in earning capacity should be reviewed. Further, we believe that updating disability criteria is consistent with the law. Incorporating scientific advances and labor market changes into the DI, SSI, and VA programs can occur within the existing program design and at a more fundamental level. Within the context of the programs’ existing statutory and regulatory design, agencies will need to continue updating the criteria they use to determine which applicants have physical and mental conditions that limit their ability to work. 
As we noted above, agencies began this type of update in the early 1990s, although their efforts have focused much more on the medical portion than on labor market issues. In addition to continuing their medical updates, SSA and VA need to vigorously expand their efforts to more closely examine labor market changes. SSA’s results could yield updated information it uses to make decisions about whether or not applicants have the ability to perform their past work or any work that exists in the national economy. VA’s results could yield updates to the average loss in earning capacity resulting from service-connected injuries and conditions. More fundamentally, SSA and VA could consider the impact that scientific advances and labor market changes have on the programs’ basic orientation. Whereas programs currently are grounded in assessing and providing benefits based on incapacities, fully incorporating the scientific and labor market issues we highlight in this report implies that agencies would assess individuals with physical and mental conditions under corrected conditions for employment in an economy increasingly different from that which existed when these programs were first designed. Factoring medical and technological advances more fully into the DI, SSI, and VA programs implies that some if not many applicants would receive up-front assistance—including help in finding and maintaining employment—to help agencies evaluate individuals at their fullest potential to work. In fact, the types of beneficiaries who currently might benefit from such assistance but have not received either timely medical or vocational assistance (for example, DI beneficiaries during the 24-month waiting period for Medicare benefits) could get a package of up-front services under a new approach. 
Moreover, reorienting programs in this direction is consistent with increased expectations of people with disabilities and the integration of people with disabilities into the workplace, as reflected in the Americans with Disabilities Act. However, for people with disabilities who do not have a realistic or practical work option, long-term cash support is likely the best option. In reexamining the fundamental concepts underlying the design of the DI, SSI, and VA programs, approaches used by other disability programs may offer some valuable insights. For example, our prior review of three private disability insurers shows that they have fundamentally reoriented their disability systems toward building the productive capacities of people with disabilities, while not jeopardizing the availability of cash benefits for people who are not able to return to the labor force. These systems have accomplished this reorientation while using a definition of disability that is similar to that used by SSA’s disability programs. However, it is too early to fully measure the effect of these changes. In these private disability systems, the disability eligibility assessment process evaluates a person’s potential to work and assists those with work potential to return to the labor force. This process of identifying and providing services intended to enhance a person’s productive capacity occurs early after disability onset and continues periodically throughout the duration of the claim. In contrast, SSA’s eligibility assessment process encourages applicants to concentrate on their incapacities, and return-to-work assistance occurs, if at all, only after an often lengthy process of determining eligibility for benefits. SSA’s process focuses on deciding who is impaired sufficiently to be eligible for cash payments, rather than on identifying and providing the services and supports necessary for making a transition to work for those who can. 
While cash payments are important to individuals, the advances and changes discussed in this report suggest the option of shifting the disability programs’ priorities to focus more on work. We recognize that re-examining the programs at the broader level raises a number of significant policy issues, including the following: Program design and benefits offered. Agencies would need to consider the impact on program design, including fundamental issues of basic eligibility structure and the benefits and services provided. Would the definition of disability change? To what extent would programs require some beneficiaries to accept assistance to enhance work capacities as a precondition for benefits versus relying upon work incentives, time-limited benefits, or other means to encourage individuals to maximize their capacity to work? Would persons whose work potential is significantly increased due to medical and technological assistance receive the same cash benefits that are currently provided? Would criteria need to be established to identify persons whose conditions are severe enough to presume a basis for permanent cash benefits? Would program recipients with earned income above a certain level still be eligible for no-cost assistance, or would they begin to help pay for the support? To change program design, what can be done through the regulatory process and what requires legislative action? Accessibility. Agencies would need to address the accessibility of medical and technological advances for program beneficiaries. Are new mechanisms needed to provide sufficient access to needed services? In the case of DI and SSI, what is the impact on the ties with the Medicare and Medicaid programs? For VA, accessibility issues may not be as critical because of existing links to health and vocational rehabilitation benefits provided by VA. 
Cost. Agencies would need to address cost implications, including the issue of who will pay for the medical and assistive technologies (will beneficiaries be required to defray costs?). For example, would the cost of providing treatment and assistive technologies in the disability programs be higher than cash expenditures paid over the long term? The cost to provide medical and technological treatment could be quite high for some program recipients, although much less for others. Moreover, net costs would need to be considered, as some expenditures could be offset by cost savings from paying reduced benefits. Integration with other program components. Agencies would need to address how to integrate a new emphasis on medical and technological assistance when making disability determinations with the health care and vocational assistance already available to program beneficiaries. Notably, VA’s program components of cash assistance, vocational rehabilitation, and medical care may uniquely position the agency to develop an integrated model and evaluate the results. During our work, VA officials pointed out that vocational rehabilitation services are already available to veterans to help them return to work and that such services incorporate the advances and changes addressed in this report. Yet the restorative benefits of medical, technological, or vocational interventions are not considered when VA makes an initial assessment of the economic losses that result from a condition or injury. With a limited amount of program funding, integrating these program components may help VA to equitably distribute program funds among veterans with disabilities. Agencies’ research efforts could help address these broader policy issues. In fact, SSA is beginning to conduct a number of studies that recognize that medical advances and social changes require the disability programs to evolve. 
SSA’s 2002 annual performance plan contains a strategic objective to promote policy change based on research, evaluation, and analysis. SSA has funded a project to design a study that would assess the extent to which the Medical Listings are a valid measure of disability, and it has begun work to design a study to identify the most salient job demands in comparison to applicants’ residual functional capacity. Additionally, SSA is sponsoring the National Study of Health and Activity, a project intended to enable SSA to estimate how many adults in the United States meet the definition of disability used by SSA and to better understand the relationships among disability, work, health care, and community. Also, SSA has funded a study to examine the impact and cost of assistive technology on the employment of persons with spinal cord injuries. Finally, SSA had planned to conduct a demonstration project to determine the impact of medicine and therapy on returning to work beneficiaries with mood disorders such as major depressive disorder and bipolar disorder. The project was partly in response to evidence found by SSA that some beneficiaries with mood disorders had not received promising treatment. SSA has placed the project on hold while it reconsiders the project’s purpose. Such research projects could provide important insight into ways that medical and technological advances can help persons with disabilities work and live independently. The research could also begin to provide important information about the costs and outcomes of program changes that bring up-front help to individuals receiving or applying for disability benefits. Nevertheless, individually, these studies do not directly or systematically address many of the implications of factoring medical advances and assistive technologies more fully into the DI and SSI programs.
Given the large size of the DI, SSI, and VA programs, it is incumbent upon the agencies to keep them current with medical advances and with the changes in the demands and opportunities in the world of work. Updating disability criteria within existing program structures is prudent, not only as a means to best ensure program integrity but also as a way for agencies to meet their fiduciary responsibilities for public funds. We recognize the challenge of updating disability criteria. Yet we are concerned that, while the agencies are making some progress, their commitment to this effort appears inconsistent with the stakes involved: medical updates have been slow, and there are few written strategies for performing timely updates in the years ahead. Moreover, these agencies have done little to better take into consideration the implications of labor force changes for the ability of persons with disabilities to earn a living. To the extent that SSA and VA do not update the criteria used to reach disability decisions, they cannot ensure that those decisions are valid. Updating the disability criteria within the context of current program design will not fully capture the work-enhancing opportunities afforded by recent scientific advances and labor market changes. That is, current program design does not assess individuals under corrected conditions. To fully capture these advances and changes, policymakers would need to comprehensively re-examine some fundamental aspects of the DI, SSI, and VA programs, including the type, timing, and conditions of providing assistance to persons with physical and mental conditions. Such an examination is a complex but increasingly important undertaking. Indeed, Congress’ approach to these issues could be quite different given the unique characteristics of each program. Nevertheless, without a comprehensive analysis of alternatives and their impacts, it is likely that little progress will be made.
To further advance the discussion of issues raised in this report, we recommend that the Commissioner of Social Security take the following actions:

Use SSA’s annual performance plan to delineate strategies for and progress in periodically updating the Medical Listings and labor market data used in its disability determination process.

Study and report to Congress the effect that a comprehensive consideration of medical treatment and assistive technologies would have on the DI and SSI programs’ eligibility criteria and benefit package. The analysis should estimate the effects on the size, cost, and management of these and other relevant programs and identify the legislative action, if any, necessary to initiate and fund such change.

To further advance the discussion of issues raised in this report, we recommend that the Secretary of Veterans Affairs take the following actions:

Use VA’s annual performance plan to delineate strategies for and progress in periodically updating the Schedule for Rating Disabilities and labor market data used in its disability determination process.

Study and report to Congress the effect that a comprehensive consideration of medical treatment and assistive technologies would have on the VA disability programs’ eligibility criteria and benefit package. The analysis should estimate the effects on the size, cost, and management of the program and other relevant VA programs and identify the legislative action, if any, necessary to initiate and fund such change.

We sent a draft of this report to SSA, VA, and the Department of Labor for comments. SSA and VA submitted comments to us, which are reproduced, respectively, in appendixes I and II. Our responses to their comments appear below. In addition, technical comments and clarifications from these two agencies were incorporated as appropriate.
SSA concurred with our recommendation to use its annual performance plan to delineate strategies for, and progress in, periodically updating the Medical Listings and labor market data used in its disability determination process, and it cited the strategic objective in its 2003 performance plan to promote policy changes that take account of changing needs based on medical, technological, demographic, job market, and societal trends. However, the performance goals associated with this objective do not refer specifically to updating either the Listings or labor market data. We believe such specific, measurable goals are needed in light of the many years that have passed since DI and SSI disability criteria have been fully updated. In addition, SSA provided several other comments on our findings concerning the agency’s efforts to update the disability criteria. First, SSA mentioned it is unable to determine why our report concludes that the DI and SSI updates do not reflect medical advances, citing its published commitment to do so and our recognition in the report of the agency’s efforts to incorporate some medical updates into the Listings. We do not dispute SSA’s contention, which is similar to a point also made by VA, that the agency considers the effects of treatment, medication, and assistive technologies in some if not many updates to the Listings. However, the issues we raise are at a more fundamental level. Our report specifically states that, under the statutory and regulatory design of these programs, SSA does not automatically evaluate individuals applying for benefits under corrected conditions. Thus, it is our belief that the programs themselves have not been fully updated to reflect scientific advances, because interventions that could enhance individuals’ productive capacities are not, by design, factored into the disability decision-making process.
Second, SSA commented that the DOT, even though it has not been revised since 1991, remains the most complete and up-to-date source of comprehensive occupational information. While characterizing the database in this manner may be technically accurate, the database was generally recognized as outdated by SSA and Labor officials we interviewed, and we note that Labor does not plan to update the database. Similarly, SSA commented that creating a new database on jobs in today’s economy for DI and SSI decision making is only one alternative (and, as SSA notes, an unlikely and undesirable one). In our view, absent a significant change in the decision-making process, SSA has only a few options: it will need to either modify the database that Labor developed to replace the DOT, modify the DOT, or develop a new database. Each option could require substantial effort, and regardless of which approach the agency selects, it will need to update the job-related information it uses. Regarding our recommendation that SSA study and report to Congress the effect that a comprehensive consideration of medical treatment and assistive technologies would have on DI and SSI’s eligibility criteria and benefit package, SSA again states that it already considers in its Listings the effect that new medical treatment and assistive technologies would have on these two disability programs. Moreover, it states, the agency is not reluctant to promulgate regulatory changes or to suggest any legislative changes it considers appropriate as the need for change arises. We do not agree that SSA currently meets our recommendation. Our recommendation underscores the need to move beyond updating the disability decision-making process within the existing program design. 
Instead, SSA needs to make a more systematic study of options that would maximize an individual’s work potential by focusing on early and appropriate supports and interventions that take advantage of the advances and changes we identify in this report. As we note in the report, SSA has several research studies that could provide useful information for considering the larger design issues. Yet these studies do not directly or systematically address many of the implications of factoring medical advances and assistive technologies more fully into the DI and SSI programs. The agency needs to lay out a master plan to systematically explore these larger policy and design issues. VA did not concur with our recommendation to use its annual performance plan to delineate strategies for and progress in periodically updating the Schedule for Rating Disabilities and labor market data used in its disability determination process. VA stated that developing timetables for future updates to the Schedule for Rating Disabilities is inappropriate while its initial review is ongoing. We continue to believe that VA needs to include measurable goals about how and when it will complete the current round of medically focused updates as well as future updates. VA should incorporate this information into its plan because portions of the Schedule for Rating Disabilities still remain to be updated and the agency has taken years to update individual body systems. In addition, VA should now begin to develop strategies for the next round of updates because portions of the Schedule for Rating Disabilities updated during the current round were completed about 8 years ago and were based on expert input collected about 12 years ago. As such, it is important to begin planning for the next cycle of review. VA’s annual performance plan can help the agency hold itself accountable for ensuring that disability ratings are based on current information.
VA also did not concur with our recommendation to use its annual performance plan to discuss strategies and progress on updating the Schedule for Rating Disabilities because the agency does not plan to initiate an economic validation study or a revision of the Schedule for Rating Disabilities based on economic factors. The agency stated that prior attempts to change the Schedule for Rating Disabilities by conducting an economic validation were met with dissatisfaction among Congress, the veteran community, and VA. Moreover, VA noted that it believes the Schedule for Rating Disabilities is medically based; represents a consensus among Congress, VA, and the veteran community; and has been a valid basis for equitably compensating America’s veterans for many years. We do not disagree that validating the Schedule for Rating Disabilities could lead to significant, and possibly controversial, changes, and the Schedule for Rating Disabilities does have a medical component and has been used as a basis for disability compensation for years. However, our analysis of the extent to which the VA—as well as DI and SSI—disability criteria were updated was grounded in the current law that authorizes this program. The law states that veterans are entitled to compensation for the average reduction in earning capacity for injuries incurred or aggravated while in service. Because earning capacity is clearly linked to the types and demands of jobs in the economy, and given that the economy has changed over time, updating the Schedule for Rating Disabilities based on labor market changes is sound administrative policy. Moreover, the concept of disability has changed significantly since the economic data assumptions in the Schedule for Rating Disabilities were last updated in 1945, further supporting the need to keep current with workforce requirements and opportunities.
In addition, VA did not agree with our finding that VA disability criteria have not been fully updated based on medical advances, noting that disabilities are commonly evaluated based on disabling effects while on treatment. We do not dispute VA’s contention that it recognizes the effects of treatment, medication, and assistive technologies that have been received by veterans in some, if not many, of its disability ratings. Much like our response to a similar comment made by SSA, our conclusion is based on the overall design of the program rather than on whether specific ratings have been updated to reflect treatment options. VA does not automatically evaluate a veteran’s average reduction in earning capacity under corrected conditions when making a decision about benefit eligibility and as such, a veteran not receiving a medical intervention or assistive technology that could increase work capacity is not evaluated according to his or her potential or actual capacity to work. Again, although VA’s current approach is consistent with program design, it also downplays the role that medical and technological advances can play in helping enhance work capacity. Consequently, we conclude that the program is not fully aligned with medical and technological advances. Finally, VA did not concur with our recommendation that it study and report to Congress the effect that such a comprehensive consideration of medical treatment and assistive technologies would have on the program. VA believes moving in this direction would present a radical change from the current program, and the agency raised questions about whether Congress and the veteran community would support the idea. We believe that our society is very different from the times when VA and SSA disability programs were first designed. In addition to scientific advances and economic changes, expectations for people with disabilities are different. 
We believe more information is needed about the effects of a fuller consideration of these advances and changes on the program. VA should systematically study the implications of such changes and provide the results to Congress to facilitate future decision making. Copies of this report are being sent to appropriate congressional committees and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9889. Other contacts and staff acknowledgments are listed in appendix IV.

1. VA cites the 1955 President’s Commission on Veterans’ Pensions (commonly called the Bradley Commission) as support that VA’s disability ratings represent noneconomic factors, such as pain and suffering, in addition to average loss of earnings. However, as we reported in 1997, “the Commission’s overall recommendation with regard to the Schedule was that it should be revised thoroughly on the basis of factual data to ensure that it reflects veterans’ average reduction in earning capacity, as required by law. The Commission stated that the basic purpose of the program is economic maintenance and, therefore, it is appropriate to compare periodically the average earnings of the working population and the earnings of disabled veterans…” Even if the ratings are intended to reflect noneconomic factors, this does not negate the need for updating the schedule due to changes in the labor market. The extent to which, if at all, disability compensation reflects noneconomic factors is a policy issue which lies beyond the scope of this report.

2. We recognize that veterans who are paid disability benefits can also be receiving various types of treatment and assistance.
Our recommendation reflects the need for more information on the implications of integrating the effects of treatment and assistance into the disability determination process, including the process to determine (1) the impact of physical and mental conditions on earnings and (2) the appropriate type and timing of benefits—such as cash, medical, and vocational assistance—to minimize the reduction of earnings associated with the disabilities.

3. We recognize that the link between medical impairments and the ability to work is complex and difficult to measure and can be affected by other factors, such as social support and individual motivation. Yet the VA program, by legislative design, compensates for loss in earning capacity that results from injuries or medical conditions. Thus, we believe, it is important to maintain good data on the skills and demands in the labor market to provide the best estimate of loss in earning capacity that is reasonably associated with particular injuries and conditions. In our 1997 report, we lay out options for the design and methodology of estimating loss in earnings among veterans with disabilities. But VA’s comment underscores the larger point we are making: Past assumptions that underlie these programs are increasingly outmoded as the confluence of scientific, economic, and social forces is redefining the relationship between impairments and abilities. Additional information on how programs can take advantage of this change will help Congress make better-informed decisions on disability policy.

4. We recognize that veterans can work and still receive disability compensation benefits. In fact, at the beginning of fiscal year 2002, two-thirds of veterans had a rating at 30 percent or less, implying that many veterans receiving disability compensation are working.
Moreover, we recognize that VA’s use of an “average” reduction in earning capacity implies that some veterans rated at 100 percent are employed, including those without an actual reduction in earnings. See comment 1 for our response to VA’s point that benefits may be partially compensated on noneconomic factors.

5. See the third paragraph of our response to VA comments in the body of the letter (p. 33).

6. As we reported in 1997, VA conducted the Economic Validation of the Rating Schedule (ECVARS) in the 1960s in response to the Bradley Commission recommendations and recurring criticisms that ratings in the schedule were not accurate. This study was designed to estimate the average loss in earning capacity among disabled veterans by calculating the difference between the earnings of disabled veterans, by condition, and the earnings of nondisabled veterans, controlling for age, education, and region of residence. On the basis of the results, VA concluded that of the approximately 700 diagnostic codes reviewed, the ratings for 330 overestimated veterans’ average loss in earnings due to their conditions, and about 75 underestimated the average loss among veterans.

To determine whether an applicant qualifies for DI or SSI disability benefits, SSA uses a five-step sequential evaluation process. In the first step, an SSA field office determines if an applicant is working at the level of substantial gainful activity and whether he or she meets the applicable nonmedical eligibility requirements (for example, residency, citizenship, Social Security insured status for DI, and income and resources for SSI). An applicant who is found to be not working, or working but earning less than the substantial gainful activity level (minus allowable exclusions), and who meets the nonmedical eligibility requirements, has his or her case forwarded to a state Disability Determination Service (DDS) office.
Applicants who do not meet these requirements, regardless of medical condition, are denied benefits. DDS offices gather medical, vocational, and other necessary evidence to determine if applicants are disabled under the Social Security law. In step two, the DDS office determines if the applicant has an impairment or combination of impairments that is severe and could be expected to last at least 12 months. According to SSA standards, a severe impairment is one that significantly limits an applicant’s ability to do “basic work activities,” such as standing, walking, speaking, understanding, and carrying out simple instructions; using judgment; responding appropriately to supervision; and dealing with change. The DDS office collects all necessary medical evidence, either from those who have treated the applicant or, if that information is insufficient, from an examination conducted by an independent source. Applicants with severe impairments that are expected to last at least 12 months proceed to the third step in the disability determination process; applicants without such impairments are denied benefits. At step three, the DDS office compares the applicant’s condition with the Listing of Impairments (the Medical Listings) developed by SSA. The Medical Listings describe medical conditions that, according to SSA, are ordinarily severe enough to prevent an individual from engaging in substantial gainful activity. An applicant who is not engaging in substantial gainful activity and whose impairment is cited in the Medical Listings, or is as severe as or more severe than those in the Medical Listings, is found to be disabled and awarded benefits. An applicant whose impairment is not cited in the Medical Listings, or is less severe than those cited there, is evaluated further to determine whether he or she has vocational limitations that, when combined with the medical impairment(s), prevent work.
In step four, the DDS office uses its physician’s assessment of the applicant’s residual functional capacity to determine whether the applicant can still perform work he or she has done in the past. For physical impairments, residual functional capacity is expressed in certain demands of work activity (for example, ability to walk, lift, carry, push, pull, and so forth); for mental impairments, residual functional capacity is expressed in psychological terms (for example, whether a person can follow instructions and handle stress). If the DDS office finds that a claimant can perform work done in the past, benefits are denied. In the fifth and last step, the DDS office determines if an applicant who cannot perform work done in the past can do other work that exists in the national economy. Using SSA guidelines, the DDS considers the applicant’s age, education, vocational skills, and residual functional capacity to determine what other work, if any, the applicant can perform. Unless the DDS office concludes that the applicant can perform work that exists in the national economy, benefits are allowed. At any point in the sequential evaluation process, an examiner can deny benefits for reasons relating to insufficient documentation or lack of cooperation by the applicant. Such reasons can include an applicant’s failure to (1) provide medical or vocational evidence deemed necessary for a determination by the examiner, (2) submit to a consultative examination that the examiner believes is necessary to provide evidence, or (3) follow a prescribed treatment for an impairment. Benefits are also denied if the applicant asks the DDS to discontinue processing the case.

The following people also made important contributions to this report: William A. McKelligott, Barbara W. Alsip, and Daniel A. Schwimer.
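The five-step sequential evaluation described above is, at its core, an ordered series of allow/deny decisions. The sketch below restates that ordering as a simple decision function; the field names and yes/no simplifications are our own illustrative assumptions, not SSA’s actual criteria or systems.

```python
# Illustrative sketch of the five-step sequential evaluation described above.
# Each "step" in practice involves extensive evidence gathering and judgment;
# the booleans below are simplifying assumptions for illustration only.

def evaluate_claim(applicant):
    # Step 1 (SSA field office): working at the substantial gainful activity
    # level, or failing nonmedical requirements, means denial.
    if applicant["earnings_above_sga"] or not applicant["meets_nonmedical_reqs"]:
        return "denied"
    # Step 2 (DDS office): the impairment must be severe and expected to last
    # at least 12 months.
    if not (applicant["impairment_severe"]
            and applicant["expected_duration_months"] >= 12):
        return "denied"
    # Step 3: an impairment that meets or equals the Medical Listings leads
    # to an award of benefits.
    if applicant["meets_or_equals_listings"]:
        return "allowed"
    # Step 4: residual functional capacity still permits past work -> denial.
    if applicant["can_perform_past_work"]:
        return "denied"
    # Step 5: considering age, education, vocational skills, and residual
    # functional capacity, other work in the national economy -> denial;
    # otherwise benefits are allowed.
    if applicant["can_perform_other_work"]:
        return "denied"
    return "allowed"
```

The sketch captures only the sequencing of the steps and the outcomes described in the text, including the fact that a Listings-level impairment short-circuits the vocational steps four and five.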
The three largest disability programs collectively provided $89.7 billion in cash benefits to 10.2 million adults in 2001. However, the disability criteria used by the Disability Insurance (DI) program, the Supplemental Security Income (SSI) program, and VA reflect neither medical and technological advances nor the labor market changes that affect the skills needed to perform work and work settings. If these federal disability programs do not stay current with scientific and labor market information, they risk overestimating the limiting nature of some disabilities while underestimating that of others. Twelve years ago, both the Social Security Administration (SSA) and the Department of Veterans Affairs (VA) began reviewing relevant medical advances and updating the criteria they use to evaluate claims. However, the time the agencies are taking to revise the medical criteria could undermine the very purpose of the update. Moreover, because of the limited role of treatment in the statutory and regulatory design of these programs, the updates have not fully captured the benefits afforded by advances in treatment. Also, the disability criteria used by the DI, SSI, and VA programs have not incorporated labor market changes. These programs continue to use outdated information about the types and demands of jobs to determine the impact that impairments have on individuals’ earning capacity. Some steps to incorporate scientific advances and labor market changes into the DI, SSI, and VA programs can be taken within the existing program design, but others would require more fundamental change. Agencies need to continue their medical updates and vigorously expand their efforts to more closely examine labor market changes. At a more fundamental level, SSA and VA could consider changes to the disability criteria that would revisit the programs’ basic orientation.
Companies that develop and produce oil and gas resources do so under leases obtained from and administered by the Department of the Interior. Interior’s Bureau of Land Management (BLM) manages onshore leases, and Interior’s MMS manages offshore leases. MMS is responsible for collecting the royalties on all federal and many Indian oil and gas leases. Royalties on producing leases are a percentage of the value of the production sold less deductions known as allowances. Together, BLM and MMS are responsible for ensuring that oil and gas companies comply with applicable laws, regulations, and policies for more than 29,000 producing federal and Indian leases, which account for about 23 percent of domestically produced gas and 26 percent of domestically produced oil. In some cases, several companies form partnerships to explore and develop oil and gas leases, thereby sharing the risk, the costs, and the benefits. These companies often elect from among themselves a single company, called the operator, to manage the physical drilling of wells and the installation of production equipment. Operators report monthly to MMS on the Oil and Gas Operations Report (OGOR) the amount of oil and gas produced from each well on each lease. In addition, all the companies that share the proceeds from the sale of oil and gas from federal lands and waters are required each month to report to MMS on the Form MMS-2014 data about the oil and gas they sold. MMS refers to these companies, including the operator, as royalty payors. The data on each Form MMS-2014 are then stored in MMS’s system as a number of records, each of which consists of many variables, such as the name of the payor, the lease number, the amount of oil and gas sold (sales volume), the value of this oil and gas (sales value), allowable deductions for transportation and processing, and the amount of royalties owed (royalty value).
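The royalty calculation described above, a percentage of the value of production sold less allowances, can be written as a one-line formula. The 12.5 percent default rate below is a commonly cited federal onshore rate used purely for illustration; actual rates vary by lease.

```python
# Minimal sketch of the royalty computation described above:
# royalty owed = royalty rate x (sales value - allowable deductions).
# The 0.125 default rate is an illustrative assumption, not a statement of
# any particular lease's terms.

def royalty_owed(sales_value, allowances, royalty_rate=0.125):
    """Return the royalty value for one record, in the same currency units."""
    return royalty_rate * (sales_value - allowances)
```

For a record with a $10,000 sales value and $2,000 in transportation and processing allowances, a 12.5 percent lease would owe royalties on the $8,000 net value.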
Payors can legally adjust the data they report for up to 6 years if, for example, they learn that the data they submitted were incorrect. Almost all payors submit these data electronically. Within its 5-year business plan for fiscal years 2008 to 2012, MMS has set an objective of ensuring timely and more accurate mineral revenue reporting and payment. According to Interior’s 2009 Budget Justification, MMS set goals in fiscal years 2008 and 2009 of ensuring that companies report 98 percent of their data accurately the first time, up from actual percentages of 97.4 in fiscal year 2006 and 97.3 in fiscal year 2007, and compared to an actual percentage of 98.3 as reported by MMS for fiscal year 2008. While we could not find a business entity that performed services identical to those of MMS for comparing its accuracy of electronic transactions, we chose the Internal Revenue Service (IRS) for comparison because it faces similar difficulties in interpreting complex regulations, determining allowable deductions, and calculating amounts owed. To this end, IRS reported in January 2008 that its electronic tax filers have a 99 percent accuracy rate—only slightly higher than the rates reported by MMS. To help improve data accuracy, MMS subjects payor-reported royalty data to over 140 edit checks. Specifically, MMS has incorporated certain up-front edit checks in its data acceptance tools that help detect and reject erroneous payor-reported royalty data before MMS’s data systems will accept them. MMS also applies a second level of edit checks that review payor-reported data for additional errors after the data are accepted. Edit checks must comply with GAO standards for internal control in the federal government, as required by 31 U.S.C. § 3512(c) and (d), commonly referred to as the Federal Managers’ Financial Integrity Act of 1982. These standards help agencies identify and address major performance challenges and areas at greatest risk for fraud, waste, abuse, and mismanagement.
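An up-front edit check of the kind described above can be sketched as a validation function that rejects a royalty record before the data system accepts it. The specific rules and field names below are invented for illustration; the report does not enumerate MMS’s actual 140-plus edit checks.

```python
# Illustrative sketch of an "up-front" edit check: reject a payor-reported
# royalty record before acceptance if required fields are missing or
# internally inconsistent. Rules and field names are assumptions for
# illustration, not MMS's actual checks.

REQUIRED_FIELDS = ("payor", "lease", "sales_volume", "sales_value", "royalty_value")

def upfront_edit_check(record):
    """Return a list of error strings; an empty list means the record passes."""
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in record:
            errors.append(f"missing field: {field}")
    if not errors:
        if record["sales_volume"] < 0:
            errors.append("sales volume cannot be negative")
        # Internal consistency: royalty owed should not exceed sales value.
        if record["royalty_value"] > record["sales_value"]:
            errors.append("royalty value exceeds sales value")
    return errors
```

A record that fails any rule would be rejected and returned to the payor for correction, which is the "detect and reject … before acceptance" behavior the report attributes to MMS’s data acceptance tools.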
Furthermore, the standards state that automated edits and checks should help control the accuracy and completeness of transaction processing. Given the large amount of royalty revenues at stake and problems with royalty management identified by past GAO, Interior Inspector General, and other reports, MMS’s processes for ensuring the accurate collection of royalties have been the subject of continuing scrutiny. For example, in 2003, while examining MMS’s Royalty-in-Kind program, we found that from 1.9 percent to 3.3 percent of the data that we examined for oil leases in Wyoming and the Gulf of Mexico were erroneous or missing, and that 6 percent of the data that we examined for gas leases in the Gulf of Mexico were anomalous, meaning that data values fell outside of expected ranges. Similarly, in 2004, we found that 40 percent of the royalty data that we examined for 10 geothermal projects were either missing or erroneous. In 2006, we examined the relationship between the increases in oil and gas prices from 2000 to 2005 and the amount of royalties collected during that time and found that 8.5 percent of the data appeared anomalous. In 2008, we reported that MMS’s royalty management system lacked several capabilities that would provide greater assurance that royalties are collected accurately. These capabilities include readily identifying changes that companies make to previously entered data, detecting the absence of royalty reports, and implementing a process for collecting the proper amount of royalties when MMS identifies that oil and gas volumes have been incorrectly reported. Among other things, we recommended MMS identify when royalty reports have not been filed as required and when companies make changes to data provided to MMS after the statutory limitation on such changes. We also reported that MMS was taking steps to address these deficiencies.
In addition to GAO’s work, Interior’s Inspector General (IG) analyzed MMS’s auditing and compliance process and made several recommendations in 2007 to improve these functions and the systems that track them. Also, the Royalty Policy Committee (RPC)—a group empaneled by the Secretary of the Interior and charged with providing advice on managing federal and Indian leases and revenues—has identified numerous deficiencies. In December 2007, the RPC issued a report that included more than 100 recommendations to strengthen Interior’s royalty collections by improving BLM’s and MMS’s verification of production volumes, improving many areas of MMS’s audit and compliance efforts by establishing a compliance strategy council, improving coordination between MMS and BLM, and improving MMS’s computer system. MMS has three major efforts underway to improve the accuracy of payor-reported royalty data used to collect and verify royalties, but it is too early to evaluate the effectiveness of these efforts. First, MMS is beginning to address GAO’s recommendations concerning the identification of missing royalty reports and the monitoring of adjustments that companies make to their royalty data. Second, MMS is implementing RPC recommendations concerning edit checks, valuation regulations for natural gas, and coordination with BLM. Third, MMS is continuing to develop processes to increase the accuracy of royalty reporting data by improving edit checks on oil and gas sales prices and using the CPT to identify errors in the amount of oil and gas reportedly sold by payors. To address a past GAO recommendation, MMS is developing a process to automatically detect within 6 months those cases in which a company has not filed a royalty report when it has filed a production report. MMS officials explained that 6 months is a reasonable time frame and that companies make most corrections to missing or incorrect royalty data within this time frame.
Under the current royalty reporting system, cases in which a company has not filed a royalty report may not be detected until more than 2 years after the initial reporting date, when MMS personnel in their compliance group begin to target leases for a review or audit. According to MMS officials, personnel in the financial management group are beginning to identify missing royalty reports by identifying instances in which the royalty report—the Form MMS-2014—is absent when a production report—the OGOR—was filed by the operator. With few exceptions, MMS should receive corresponding royalty reports for each production report it receives. MMS has additional checks in place through its CPT for determining when both the OGOR and the Form MMS-2014 are missing. Also in response to a GAO recommendation, MMS is developing an automated process to identify changes that royalty payors make to their previously entered royalty data that exceed the 6-year statutory limit on such adjustments or that occur after compliance work, including audits, has been completed. Although these adjustments may change payors’ royalty payments, prior to this effort MMS’s royalty reporting system could not monitor them, and payors could continue to adjust their previously reported royalty data without prior MMS approval or review. In addition, companies could change royalty data after an audit has been completed, and MMS needs to be able to identify when this occurs, as we have suggested in our previous work. While adjustments may occur for legitimate reasons, and identifying them will not prevent them from occurring, it could facilitate later scrutiny and follow-up with company officials. However, it is too early to evaluate the effectiveness of these actions. MMS is implementing action plans to address royalty reporting issues raised by the 2007 RPC report.
The following actions directly relate to four recommendations for improving the accuracy of the royalty reporting process out of over 100 recommendations identified by the RPC. First, MMS is in the process of using its existing edit checks and adding additional edit checks to examine more data before the data are entered into its database, instead of examining data that have already been accepted and stored. Specifically, this change will affect royalty data that payors submit through the electronic reporting interface—a Web site-based portal through which MMS accepts almost 30 percent of its data. According to MMS officials, the other 70 percent of royalty records are accepted through the Electronic Data Interchange (EDI)—a standardized method of transferring data electronically between computer systems, such as a payor’s system and MMS’s system. Currently, there are some edit checks built into the EDI software, but MMS’s goal, as outlined in its strategic business plan for 2008-2012, is to require EDI reporters to implement most edits on their individual computer systems before they submit the data through EDI. If they do not, then payors must use MMS’s other system for submitting data—the electronic reporting interface—which accepts fewer royalty records at a time but already has these up-front edit checks built into its system. As GAO has noted in prior reports, edit checks that prevent potentially erroneous data from entering the databases offer advantages over efforts to continually clean up erroneous data allowed into the system. However, it is too early to tell how useful these specific efforts will be. MMS’s processes for checking data are outlined in figure 1. Second, MMS is working on a problem identified by the RPC concerning the accuracy of reporting natural gas royalties.
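For illustration, the two-level approach described above—up-front edit checks that reject records before acceptance, followed by second-level checks that review already-accepted records—can be sketched in Python. The field names and the specific rules below are hypothetical assumptions, not MMS’s actual edits.

```python
# Illustrative sketch of a two-level edit-check design; the field names
# and rules are hypothetical, not MMS's actual edits.

def upfront_edit_check(record):
    """Reject obviously malformed records before the system accepts them."""
    errors = []
    if record.get("lease_number") is None:
        errors.append("missing lease number")
    if record.get("sales_volume", 0) < 0:
        errors.append("negative sales volume")
    return errors  # an empty list means the record passes up-front edits

def second_level_check(accepted_records):
    """Review already-accepted records and flag questionable ones."""
    flagged = []
    for rec in accepted_records:
        # Example rule: a positive sales value with zero volume is suspect.
        if rec["sales_value"] > 0 and rec["sales_volume"] == 0:
            flagged.append(rec)
    return flagged
```

The design choice the report highlights is that records failing `upfront_edit_check` never enter the database, whereas `second_level_check` can only flag stored data for later cleanup.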
The RPC recommended that MMS add a data field on the Form MMS-2014 that identifies the heat content per cubic foot of natural gas, which is important in determining the amount of royalties owed. State and tribal royalty auditors with whom we spoke also identified the need to check the heat content of natural gas. In response to the RPC recommendation, MMS officials said that they developed and recently implemented an alternate plan for evaluating the information identified by the RPC using data already collected on the Form MMS-2014 and maintained in its databases. In particular, payors report to MMS the quantity of natural gas sold (in thousands of cubic feet) as well as the total heating value of all the gas sold (in millions of Btus, an industry standard for selling natural gas). MMS officials told us they plan to calculate the heating value per cubic foot from these existing data fields, by dividing the total heating value by the quantity sold, and to implement an edit check on the reasonableness of the results of this calculation. MMS officials added that it was too costly to change the structure of its database to accommodate a new data field and modify how data are collected. We believe that MMS’s alternative is a reasonable approach and that it is likely to identify errors in reported gas volumes. Third, MMS is planning to publish proposed revisions to its gas valuation regulations and guidelines that it believes will address several problems. For example, MMS regulations provide a series of benchmarks for companies to use in establishing the price of natural gas when they sell it to their affiliates. However, according to the RPC and state auditors, these benchmarks are difficult to apply and do not reflect how gas is currently sold, so the RPC recommended that MMS replace these benchmarks with widely published market indexes.
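The calculation MMS plans, as described above, is a division of the two reported fields; a minimal Python sketch follows. The plausible band of roughly 0.8 to 1.3 MMBtu per Mcf used here is our illustrative assumption, not MMS’s actual tolerance.

```python
def heating_value_check(total_mmbtu, volume_mcf, low=0.8, high=1.3):
    """Divide the reported total heating value (MMBtu) by the reported
    quantity sold (Mcf) and flag results outside a plausible band.
    The 0.8-1.3 MMBtu-per-Mcf band is an illustrative assumption."""
    if volume_mcf <= 0:
        return None, False  # cannot evaluate without a positive volume
    per_mcf = total_mmbtu / volume_mcf
    return per_mcf, low <= per_mcf <= high
```

For example, 1,050 MMBtu reported against 1,000 Mcf yields 1.05 MMBtu per Mcf, a plausible value, while 3,000 MMBtu against the same volume would be flagged for review.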
Another problem that MMS intends to address with its new gas valuation regulations relates to how companies can take deductions from gas revenues. According to MMS regulations, the costs for transportation and processing must be properly allocated among the individual products that result from the processing of gas. However, gas purchasers can “bundle” all of these charges together, making it difficult for the payor to determine how to allocate these deductions and then to calculate what is actually owed in royalties. While MMS has plans to address these and other issues with its new regulations, MMS officials were unable to give us sufficient details about how this would be done for us to evaluate the effectiveness of the new regulations. MMS has a target date of December 2009 for completing the new proposed regulations. Fourth, in response to RPC recommendations that MMS improve its interagency coordination with BLM, MMS has taken a first step to improve coordination. Specifically, the RPC recommended that the Department of the Interior establish a Production Coordination Committee (PCC) that is charged with, among other things, defining and coordinating common processes, defining common data standards, and addressing technical issues for information sharing between the two agencies. To begin this process, MMS, BLM, and the Bureau of Indian Affairs held a 3-day PCC meeting in September 2008, during which a number of key issues regarding the accuracy of royalty data were discussed, including (1) placing more responsibility on industry to provide clean data to MMS; (2) resolving invalid lease numbers; (3) sharing information on rents, agreements, and Indian leases in a more timely manner; and (4) providing notices to MMS when wells first start to produce. This meeting was a first step in improving interagency coordination, but it is too early to judge the effectiveness of the committee. MMS officials said that additional meetings are planned on a recurring basis.
MMS officials told us they are evaluating a process to incorporate more detailed market prices into its system to compare sales prices that MMS calculates from payor-reported royalty data to relevant market prices. MMS does not require payors to report their sales prices but can calculate an implicit sales price by dividing the total value of the oil or gas that payors report (sales value) by the volume that payors report as having sold (sales volume). Currently, MMS uses for comparison only a few oil and gas prices spanning a wide range of values, applied to all leases regardless of where the lease is located or the quality of the oil produced. MMS officials told us that they intend to incorporate a more detailed price table into the royalty reporting system by 2010 that will include more specific sales prices related to geographic areas and specific sales months. We believe that this could be a significant improvement, but it remains too early to assess MMS’s efforts. In addition, during the course of our work, MMS officials told us they plan to expand the implementation of two edit checks. First, MMS plans to expand the use of an edit check that will calculate the royalty rate from payor-reported data and compare this with the royalty rate specified in each lease. As with sales prices, MMS does not require payors to report royalty rates but can calculate implicit royalty rates from payor-reported data. MMS can calculate implicit royalty rates by dividing the amount of royalties that payors report (royalty value) by the total value of the oil or gas that payors report (sales value). While MMS has checked royalty rates on Indian leases and prevented erroneous data on these leases from entering its system since prior to 2001, MMS’s checking of royalty rates has not prevented erroneous data on federal leases from entering its system. However, MMS plans to resolve this issue on federal leases by the end of fiscal year 2009.
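The two implicit calculations described above are simple divisions of payor-reported fields, and can be sketched in Python (the function and field names are ours, for illustration only):

```python
def implicit_sales_price(sales_value, sales_volume):
    """Implicit price per unit: reported sales value / reported sales volume."""
    return sales_value / sales_volume if sales_volume else None

def implicit_royalty_rate(royalty_value, sales_value):
    """Implicit royalty rate: reported royalty value / reported sales value."""
    return royalty_value / sales_value if sales_value else None
```

For example, a reported sales value of $80,000 on 1,000 barrels implies a price of $80 per barrel, and a reported royalty value of $10,000 on that sales value implies a 12.5 percent royalty rate.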
Second, MMS recently began using an edit check that ensures payors take processing allowances only on gas that is processed. MMS reported that in April 2009 it implemented such an edit check in its electronic reporting interface. This action will affect about 30 percent of data entering MMS’s system but will not affect potentially erroneous data that companies submit through the EDI. We believe that expanding the use of both of these edit checks can improve MMS’s ability to evaluate self-reported royalty data, but we will be unable to evaluate the effectiveness of these new processes until they are fully implemented. In 2008, MMS auditors in its compliance group began to use the CPT to identify discrepancies—based on certain thresholds—between the volumes of oil and gas produced that lease operators reported on the OGOR and the total volumes sold that payors reported on the Form MMS-2014. When conducting this process, MMS also is able to identify instances when a royalty payor fails to submit the required Form MMS-2014. However, until recently, these comparisons were not done until more than 2 years after royalty data had been submitted, when MMS begins to select leases for audit. While this volumetric comparison had been done much sooner and routinely for all leases in the past, the process was dropped when MMS implemented its current information system in 2001 because the new module that was to perform this function was not yet ready for implementation and because MMS wanted to expand the comparison to include an examination of the amount of royalties paid and the value of the oil and gas sold. MMS officials explained that under the old system, potential mismatches between OGOR and 2014 volumes often involved errors in the royalties paid and/or the value of the oil and gas sold, and it was important to look at all three of these components at once.
They further explained that the new module was never implemented but instead was replaced with an expanded use of the CPT, albeit at a much later date than initially anticipated. MMS reported that in January 2009, it began using the CPT to compare volumes and examine the amount of royalties paid and the value of the oil and gas sold within 6 to 9 months after payors submit data. Moreover, in 1992, when we last examined the comparison of volumes on the OGOR with volumes on the Form MMS-2014, we determined that it was cost-effective to follow up on at least the largest of the discrepancies, and we support MMS doing this within an earlier time frame, such as 6 months after receiving royalty data. While much of the royalty data we examined from fiscal years 2006 and 2007 appears reasonable, we found several instances where key data were missing or appear to be erroneous. For example, our close examination of producing gas leases in the Gulf of Mexico indicated that up to 5.5 percent of the time, royalty reports were missing for these leases. We also found that from about 2 to 7.4 percent of the time, depending on the group of leases we examined, the amount of royalties that payors report due (royalty value), the total value of the oil and gas that payors report (sales value), or both appeared erroneous. In addition, 3.9 percent of sales values and/or the volume that payors report as having sold (sales volume) from offshore oil leases in the Gulf of Mexico appeared erroneous, while about 6.6 percent of one or both of these data elements appeared erroneous for offshore gas leases in the Gulf of Mexico. Our detailed examination of producing gas leases in the Gulf of Mexico indicated that 5.5 percent of royalty reports were missing. Using production reports filed by lease operators, we identified all leases producing gas in the Gulf from January 2006 through September 2007.
For each month in which operators reported gas production, we checked MMS’s monthly royalty reports to ensure that payors reported sales of gas. We found that about 5.5 percent of the time that operators reported monthly gas production from leases, payors did not submit the corresponding monthly royalty report. The missing royalty reports for this production represent potentially about $117 million in royalties that may not have been collected. However, it is possible that instead of reporting royalties on the appropriate reports, payors may have misreported these royalties on reports for other leases, and as such, additional royalties would not be due. We also observed instances in which the total gas production on the royalty reports was substantially less than that on the production reports, possibly indicating that one of multiple payors on that lease may not have submitted a royalty report for that month. While a significant number of the almost 1,500 leases in our sample had royalty reports but no production reports, missing production reports were more prevalent for the last 3 months of fiscal year 2007, possibly indicating that these reports had not yet been received or accepted by MMS’s system. Missing royalty reports are illustrated in figure 2. We evaluated all royalty data for fiscal years 2006 and 2007—excluding royalty-in-kind leases—for obvious errors in key reported royalty variables, including volumes of oil and gas sold, the value of this oil and gas, and royalties paid, and found that the error rate for these variables ranged from 0 percent to about 2.3 percent, with the highest levels of errors being found in transportation and processing allowances. This analysis is summarized in table 1, along with subsequent analyses discussed below. We used a different method than MMS’s edit checks to evaluate the reasonableness of royalty data. 
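Our check described above—matching each month of reported production against a corresponding royalty report—reduces to a set difference over (lease, month) pairs. The following Python sketch uses a simplified record layout of our own; the actual OGOR and Form MMS-2014 layouts are more complex.

```python
def find_missing_royalty_reports(production_reports, royalty_reports):
    """Return (lease, month) pairs that appear in operators' production
    reports (OGOR) but have no matching royalty report (Form MMS-2014).
    The simple (lease, month) record layout is an assumption."""
    produced = {(r["lease"], r["month"]) for r in production_reports}
    reported = {(r["lease"], r["month"]) for r in royalty_reports}
    return sorted(produced - reported)
```

Run over all producing leases, each pair returned represents a month of reported production for which no royalty report was filed—the condition that accounted for about 5.5 percent of lease-months in our sample.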
For example, MMS’s edit checks generally evaluate each royalty record individually, and a royalty payor may submit multiple records for a given lease each month, including the original royalty report and oftentimes multiple corrections to the volumes sold or the royalties paid. However, we combined all royalty records associated with a given payor for each month, product type, and lease. Unlike MMS’s edit checks of individual royalty records, our methodology is able to detect whether adjustments exceed the amount of the original entries. For example, in checking the sum of the sales values, sum of sales volumes, and sum of royalty values that payors submitted for a given month, product type, and lease, we found that over 99.8 percent of the time these sums were positive, as one would expect when payors owe royalties. However, payors submit one payment per month for all their federal leases; therefore, a negative royalty value for an individual lease may go undetected if it is small in comparison to the sum of the royalty values for all their other leases. Although the 0.2 percent of royalty values that we found to be negative is a small percentage, collectively this represented about $41 million in royalties that may not be collected if these instances are not detected in future compliance work or audits. Further, a check for positive royalty values is not a precise measure of accuracy. Rather, it is a gross check of reasonableness, and some positive royalty rates, which we did not evaluate, could have been lower than they were supposed to be. We found that transportation allowances and processing allowances, which should always be negative values in the database, were positive 1.73 percent and 0.77 percent of the time, respectively. We also found that about 2.3 percent of claimed processing allowances were incorrect.
These processing allowances were associated with either unprocessed gas, which by definition is not entitled to a processing allowance, or coalbed methane, which is never processed and therefore should not receive an allowance. Claiming processing allowances for gas that was not processed could result in MMS collecting about $2 million less in royalties than are due for the fiscal year 2006 and 2007 leases that we examined. However, the gas reported as unprocessed gas could be processed gas that was improperly reported as unprocessed gas by the payors, and hence, no additional royalties would be due. Either way, there are reporting errors that raise questions about the accuracy of royalty collections. In addition, we checked that transportation and processing allowances did not exceed regulatory limits and found that they were within limits nearly 100 percent of the time. Lastly, we verified that payors did not report sales volumes when reporting transportation and processing allowances separately from royalty amounts. This is not permitted because the reporting of sales volumes in this situation would lead to reporting the volumes sold twice. Table 1 summarizes the types of errors for which we checked and the percent of times they occurred. We found that, of the key royalty variables self-reported by royalty payors, either the royalties owed, the value of the oil or gas sold, or both appeared erroneous from 2 to 7.4 percent of the time, depending on the group of leases that we examined. MMS’s royalty system does not require payors to report royalty rates but rather the amount of their royalty payment—royalty value—and the total amount they received for the sale of oil or gas from each federal lease—sales value. We calculated an implicit royalty rate by dividing royalty value by sales value and compared this number to royalty rates generally specified in federal leases.
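The aggregation and sign checks we applied can be sketched as follows in Python. The field and product names are illustrative assumptions (the report elsewhere notes that product code 39 denotes coalbed methane; we use plain-language product labels here instead of actual codes).

```python
from collections import defaultdict

# Products that should never carry a processing allowance; illustrative labels.
UNPROCESSED_PRODUCTS = {"unprocessed gas", "coalbed methane"}

def check_lease_month(records):
    """Aggregate all records for one payor/month/product/lease and apply
    the sign checks described above. Field names are illustrative."""
    totals = defaultdict(float)
    problems = []
    for rec in records:
        for field in ("sales_value", "sales_volume", "royalty_value",
                      "processing_allowance", "transport_allowance"):
            totals[field] += rec.get(field, 0.0)
        if (rec.get("product") in UNPROCESSED_PRODUCTS
                and rec.get("processing_allowance", 0.0) != 0.0):
            problems.append("processing allowance on unprocessed product")
    # Summed sales value, volume, and royalties should be positive ...
    for field in ("sales_value", "sales_volume", "royalty_value"):
        if totals[field] <= 0:
            problems.append("non-positive " + field)
    # ... while summed allowances should never be positive.
    for field in ("processing_allowance", "transport_allowance"):
        if totals[field] > 0:
            problems.append("positive " + field)
    return problems
```

Because the totals are summed across the original report and all corrections, an adjustment that exceeds the original entry drives a total negative and is flagged, which per-record edit checks would miss.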
Because payors are not required to report the royalty rate that applies to each individual lease and these data were not readily available to us, it was prohibitively time-consuming to individually compare each calculation to the royalty rate specified in the lease. Instead, we compared the calculated rates to general lease terms, allowing for significant but common departures from these terms. We found that either royalty values or sales values, or both, were erroneous about 2.2 percent of the time for offshore oil leases and about 2 percent of the time for offshore gas leases when we calculated implicit royalty rates with fiscal year 2006 and 2007 data. We compared our implicit royalty rates with standard offshore lease terms of either 12.5 percent or 16.67 percent, allowing for some rounding error in these rates. Our analysis did not identify as erroneous those instances when the calculated royalty rate was 12.5 percent, but the lease royalty rate was actually 16.67 percent, or vice versa. We also compared leases for which the calculated implicit royalty rates were other than 12.5 or 16.67 percent to actual royalty rates as specified in the federal lease and adjusted our analysis for those few times when these calculated but apparently erroneous royalty rates were legitimate. As such, a royalty rate that is different from general lease terms means that either the payor-reported royalty value or the sales value is erroneous. MMS acknowledged that erroneous royalty rates could result from payors misreporting the sales value or the royalty value owed to the federal government. We found that either royalty values, sales values, or both appeared erroneous about 7.4 percent of the time for onshore oil leases and about 4.8 percent of the time for onshore gas leases when we calculated implicit royalty rates with fiscal year 2006 and 2007 data.
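The offshore screen described above—an implicit rate compared against the standard 12.5 and 16.67 percent lease terms, allowing for rounding—can be sketched as a tolerance check. The tolerance value below is our illustrative assumption, not the exact tolerance used in our analysis.

```python
STANDARD_OFFSHORE_RATES = (0.125, 0.1667)  # 12.5 and 16.67 percent

def rate_appears_erroneous(royalty_value, sales_value, tolerance=0.005):
    """Flag an implicit royalty rate that is not close to any standard
    offshore lease rate. The rounding tolerance is an assumption."""
    if sales_value <= 0:
        return True
    rate = royalty_value / sales_value
    return all(abs(rate - std) > tolerance for std in STANDARD_OFFSHORE_RATES)
```

Note that, as the report states, this screen cannot catch a payment made at 12.5 percent on a lease whose actual term is 16.67 percent, or vice versa; it only flags rates that match no standard term.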
We compared our implicit royalty rates with standard onshore oil and gas lease terms of either 12.5 percent or a variable royalty rate schedule that depended on production volumes for certain leases issued before 1988. These variable rates ranged from 12.5 percent to 25 percent for oil production and were either 12.5 percent or 16.67 percent for gas production. We also assumed royalty rates of 5 and 10 percent as being correct because MMS indicated that these were common royalty rates on certain older leases, and we verified this by examining a sample of leases. We excluded all oil leases prior to February 2006 because royalty rates below 12.5 percent were in effect during that time for low volume or heavy oil production. Our analysis did not identify as erroneous those instances when the implicit royalty rate matched standard royalty rates but was nevertheless incorrect. In addition to misreporting royalty values or sales values, MMS said that the higher percentage of apparently erroneous royalty data for onshore oil leases may be due to royalty payors continuing to incorrectly pay royalties under expired provisions for low volume or heavy oil. Erroneous royalty rates are summarized in table 2. We found that either sales values or sales volumes appeared erroneous about 3.9 to 6.6 percent of the time when we used fiscal year 2006 and 2007 royalty data to calculate implicit sales prices in the offshore Gulf of Mexico. MMS does not require payors to report oil and gas sales prices (prices per unit sold) but instead requires payors to report the total amount they received for the sale of oil or gas from a federal lease—sales value—and the total volume of oil or gas that they sold—sales volume. We calculated an implicit sales price per unit by dividing sales value by sales volume and compared this number to prevailing market prices at the time.
For offshore oil in the Gulf of Mexico, we found that our implicit sales prices fell outside of a wide range of prevailing market prices 3.9 percent of the time during fiscal years 2006 and 2007. We used a range of market prices each month for comparison, the low price being the lowest daily spot price that month for Mars oil—a low quality, low value oil produced in the offshore Gulf—and the high price being the highest daily spot price for light Louisiana sweet (LLS)—a high quality, high value oil. The average difference between these prices was about $16 per barrel of oil during the October 2005 through September 2007 period we evaluated. We believe that this is a conservative approach because the two prices are among the lowest and highest prices that we found in the Gulf of Mexico. Therefore, while there may be cases in which prices fall outside of this range for legitimate reasons, we would expect this to be a rare occurrence. Conversely, prices that fall within this range are reasonable but not necessarily correct. This price range is illustrated in figure 3. In addition to possible errors in reported sales values or sales volumes, MMS officials said that low oil prices may reflect poor marketing, sales of low quantities of poor quality oil that settle in storage tanks, or sales of oil at offshore platforms where the sales price may be discounted for transportation. MMS officials also said that royalty payors may also be netting the cost of transportation from their sales value, which is against MMS regulations. On the other hand, high oil prices may reflect good marketing. Figure 4 depicts the percentage of our calculated oil prices that appeared erroneous and distinguishes between when the prices fell below or above the expected range. For gas produced offshore in the Gulf of Mexico, we found that our calculated implicit sales prices fell outside of the range of prevailing market prices 6.6 percent of the time. 
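Our price screen described above—flagging implicit prices that fall outside the month’s band from the lowest Mars spot price to the highest LLS spot price—reduces to a bounds check. The band values in the usage example below are illustrative, not actual 2006–2007 spot prices.

```python
def price_outside_band(sales_value, sales_volume, low_price, high_price):
    """True when the implicit price (sales value / sales volume) falls
    outside the month's [low, high] market band, e.g., the lowest Mars
    spot to the highest LLS spot. Band prices here are illustrative."""
    if sales_volume <= 0:
        return True
    price = sales_value / sales_volume
    return not (low_price <= price <= high_price)
```

For instance, with an assumed monthly band of $55 to $75 per barrel, a reported $65,000 sales value on 1,000 barrels (an implicit $65 per barrel) passes, while $40,000 on the same volume (an implicit $40 per barrel) is flagged for review.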
We used a range of market prices at the Henry Hub—a major gas trading center in the Gulf of Mexico—each month for comparison. To establish a low and a high price, we examined three specific prices each month and chose the highest and the lowest price from among the three. These three prices are the maximum mid-day spot price during that month, the minimum mid-day spot price during that month, and the First of the Month price. All three prices are common prices upon which producers sell their gas in the Gulf of Mexico, according to MMS, and we believe this is a conservative approach. The average difference between the highest and the lowest prices was about $3 per MMBtu during the period October 2005 through September 2007. These prices are illustrated in figure 5. As with oil prices, being outside of the range does not necessarily mean that the price is erroneous, but we would not expect this to be a common occurrence. Conversely, being within this range means that the sales price is reasonable but not necessarily correct. In addition to possible errors in reported sales values or sales volumes, MMS officials said that low or high prices can reflect marketing efforts. Quality does not affect calculated prices because gas quality is standardized by reporting sales prices per MMBtu. The percentage that our calculated gas prices appeared erroneous is depicted in figure 6, distinguishing between implicit prices that fell below and above the expected range. Oil and gas company representatives reported that several factors can affect their ability to accurately report royalty data, including complex land ownership patterns, unit agreements, ambiguity in federal regulations, short time frames for filing royalty reports, and inaccuracies in MMS’s internal databases. The complexity of unit agreements (units) can impact the accuracy of royalty data.
Upon the request of companies, BLM and MMS can administratively combine contiguous leases into units to more efficiently explore and develop an oil or gas reservoir and to lessen the surface disruption caused by the building of roads and the installation of pipelines and production equipment. MMS requires payors to report royalties for each producing lease and, if a lease is assigned to a unit, to provide information identifying the unit in the agreement data field. If a lease does not belong to a unit, the agreement data field should be left blank. However, companies can fail to complete the agreement data field when a lease belongs to a unit, which raises questions about whether the royalties paid were for production belonging to a unit or for production outside of a unit. This complicates the auditing of the royalty data. Figure 7 shows how federal leases can be combined into a federal unit to explore for oil and gas, and figure 8 illustrates the complexity of auditing these leases when a payor fails to complete the agreement field. Complex ownership patterns of federal leases, particularly those issued by BLM for onshore lands, may further impact the accuracy of royalty data, according to several oil and gas company representatives. For example, when there are intermingled federal, state, and private leases, royalty reporting can be challenging because companies said that they may need to rely on multiple operators to provide royalty information, which is not always consistent and clear, and because different regulations and rules apply to federal, state, and private leases. Confusion can sometimes cause the first royalty payment to MMS to be delayed. Representatives from four companies reported that the ambiguity in extensive federal regulations that establish prices for oil and gas leads to difficulty in interpretation and hence in calculating the correct royalty payment.
Nine of the 11 state and tribal auditors that we interviewed told us that the gas valuation regulations published in 1988 are out of date and that the series of benchmarks within these regulations that prescribe prices for gas are impractical to apply. Concerning the gas regulations, the RPC report noted the difficulty of applying these benchmarks and recommended that MMS consider using market indices to establish gas prices when companies sell to their affiliates in lieu of the 1988 benchmarks. RPC also recommended that MMS more clearly define allowable transportation and processing deductions for natural gas in their regulations. In addition, three companies reported difficulty in paying royalties on gas production in a timely manner because they do not receive data from their gas purchasers in time to meet MMS’s deadline for filing royalty reports and must submit estimates and later correct them. For example, a purchaser of oil and gas may report an adjustment to the volume of the gas purchased or the quality of the oil purchased after the payors are required to report, resulting in the payor having to make a correction to the original data. Reporting on gas is especially challenging, because gas transportation and processing are usually not reconciled within 30 days. However, payors are required to report royalties to MMS on or before the last day of the month following the month the product was sold or removed from the lease. Therefore, to stay in compliance with reporting requirements and avoid penalties, some company representatives reported that they file estimated gas royalty reports and keep funds deposited with MMS to cover variances in royalties due. This is not problematic as long as companies correct their original data as necessary and pay the correct amount of royalties. 
Oil and gas company representatives stated that BLM data on new leases and units are not always incorporated into MMS’s system in a timely manner, resulting in edit checks rejecting correct payor data. Two of these representatives reported that BLM’s delays in revisions to data on participating areas––the part of a unit for which participating companies have agreed to a manner for allocating production––can cause them to go back and adjust MMS royalty data that are over a year old. This lack of coordination between BLM and MMS was also addressed in the December 2007 RPC report, which found that incorrect data lead to errors in royalty receipts and revenue distribution, requiring MMS staff to correct the information and redistribute the revenue. The RPC report recommended that BLM and MMS improve data exchanges by establishing a coordinating committee with representatives from senior management levels, which would be charged with defining common data standards and developing solutions for technical issues of coordination and information sharing at MMS and BLM. MMS is addressing this issue. While oil and gas company representatives with whom we spoke reported that they generally have little difficulty understanding key data required to complete the Form MMS-2014, most state auditors with whom we spoke identified some problems with company-submitted data. All 10 of the representatives we contacted explained that the major data fields, such as the sales value, sales volume, and royalty value, are easy to understand and complete. Eight of the representatives added that major royalty reporting codes, such as those that define product types and that provide more information on the nature of the sale of oil and gas, are also easy to understand. Only two representatives reported some difficulty with using certain codes.
However, 8 of the 11 state and tribal royalty auditors that we contacted identified a specific product code that creates difficulty for oil and gas companies in reporting royalties. Specifically, state auditors told us that product code 39 for coalbed methane is inconsistently used by payors reporting royalties, creating difficulty in auditing leases. During our analysis of MMS’s royalty data, we also noted that some companies claim a processing allowance for coalbed methane, which is not processed, possibly indicating confusion on use of this code. Additionally, these auditors told us that a certain code used to explain adjustments, known as adjustment reason code 10, is commonly used by royalty payors for all types of adjustments. They said that not having specific adjustment reason codes for volume adjustments, price changes, royalty adjustments, processing allowance adjustments, and transportation allowance adjustments makes it difficult for auditors to clearly determine why a royalty payment was adjusted. Royalties paid to the federal government for the extraction of oil and natural gas from federal lands and waters remain both a large source of revenue to the federal government and a key element in the discussion on how to balance the use of these lands. Our past work has consistently raised questions about how MMS oversees the collection of these royalties and ensures that the country receives fair value for the resources removed. MMS has ongoing efforts to improve the reasonableness and accuracy of its royalty data. However, the agency still has more to do to ensure that key data used to report, pay, and audit federal royalties are accurate.
In our view, MMS still lacks some effective controls to (1) prevent erroneous data on allowances from being accepted into the system, (2) detect errors in data once they are accepted into the system, and (3) ensure that key data needed for complex oil and gas units are consistently provided, and this can make the auditing and other compliance work done by MMS staff more difficult and could result in the federal government not receiving all the royalties it is due. In particular, our detailed examination of a portion of key fiscal year 2006 and 2007 data has identified missing data, significant errors, and questionable data, raising doubts about the 97 percent accuracy level that MMS reports. In light of our findings, it seems unlikely that MMS could sustain its goal of 98 percent data accuracy without taking additional steps. To improve the accuracy of royalty data and to help provide a greater assurance that federal oil and gas royalties are being accurately reported, to improve the efficiency of audit and compliance activities, and to increase the likelihood of collecting additional royalties in a timely manner, we are recommending that the Secretary of the Interior direct MMS to take five actions. To better prevent the submission of erroneous data into MMS’s database, we are recommending that MMS share with payors that submit their data through the Electronic Data Interchange (EDI) its recent edit check that prevents payors from submitting data claiming processing allowances for gas that is not processed, including coalbed methane.
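The recommended edit check is simple to state in code. The sketch below is illustrative only: the record layout and field names are assumptions, while product code 39 (coalbed methane) and transaction code 15 (processing allowance) are the codes cited in this report.

```python
# Illustrative sketch of a submission-time edit check that rejects royalty
# lines claiming a processing allowance for a product that is not processed.
# The record layout and field names are assumed; the codes follow this report.

UNPROCESSED_PRODUCT_CODES = {"39"}  # coalbed methane; codes for oil, condensate,
                                    # and unprocessed gas would be added here
PROCESSING_ALLOWANCE = "15"         # transaction code for processing allowances

def should_reject(line):
    """True if the line claims a processing allowance for an unprocessed product."""
    return (line["transaction_code"] == PROCESSING_ALLOWANCE
            and line["product_code"] in UNPROCESSED_PRODUCT_CODES)

lines = [
    {"lease": "A100", "product_code": "39", "transaction_code": "15"},  # rejected
    {"lease": "A100", "product_code": "39", "transaction_code": "01"},  # a sale, accepted
]
rejected = [ln for ln in lines if should_reject(ln)]
```

Applying the same check at submission time for all payors, including those filing through EDI, would stop these lines before they enter the database rather than leaving them for auditors to find.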
To improve the quality of data that have been accepted by MMS’s database, we are recommending that MMS:

- design and implement additional edit checks to evaluate the net impact of all adjustments on original entries for critical royalty variables, including sales values, royalty values, sales volumes, transportation allowances, and processing allowances, by summing each month all entries for the variable submitted by each payor for each lease and each commodity, and highlight potentially erroneous submissions to payors and appropriate MMS staff; and
- use the monthly sums of original and adjusting entries for royalty values, sales values, and sales volumes to ensure that calculated royalty rates and unit prices for each payor on each lease for each commodity fall within expected ranges, and highlight potentially erroneous submissions to payors and appropriate MMS staff.

To simplify the auditing of leases and compliance work, we are recommending that MMS:

- enforce current MMS requirements to populate the agreement field with the correct agreement number and to populate the agreement field for leases outside of agreements with a single unique code that is easily identifiable; and
- collaborate with state and tribal auditors on the possibility of adding more specific adjustment reason codes that describe why payors made corrections to royalty data on the Form MMS-2014.

We provided a draft of this report to Interior for review and comment. Interior provided written comments, which are presented in appendix II. In general, Interior agreed with our findings, concurring with four of our five recommendations and partially concurring with the other recommendation. With regard to this latter recommendation, which involves populating the agreement field, Interior agreed with us that it is important that MMS improve the enforcement of requirements for populating the agreement field.
However, Interior was uncertain about how best to achieve this goal and stated that MMS is evaluating the best methods to ensure accurate reporting for agreements. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of the Interior, the Director of MMS, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To examine MMS’s key efforts to improve the accuracy of royalty data, we reviewed and discussed with MMS officials their action plans to implement RPC recommendations, reviewed a demonstration of MMS’s Compliance Program Tool (CPT), discussed their implementation of the CPT to systematically identify misreported volumes and missing royalty reports, reviewed their plan to monitor adjustments, and discussed efforts to adopt additional edit checks. To assess the reasonableness and completeness of MMS’s royalty data, we obtained from MMS an extract from its financial management system consisting of all oil and gas royalty records from fiscal years 2006 and 2007 and assessed the completeness and reasonableness of key data fields based on extensive data reliability studies documented in two previous GAO reports. We removed records related to rental payments, gas storage agreements, taxes, contract settlements, and geothermal operations by using transaction codes, and removed sulfur, helium, nitrogen, and carbon dioxide, using product codes.
We also limited our analysis to cash royalty payments, excluding royalty-in-kind (RIK) payments whenever possible or appropriate. Our resulting analysis file consisted of about 4.1 million royalty records. First, we assessed the completeness of MMS’s data. We developed a frequency distribution of the number of records per month and compared these frequencies from month to month, looking for abnormal patterns. We discovered that there were about half as many records for April 2007 as for other months on average. At our request, MMS investigated the reason and discovered that the contractor who extracted the data inadvertently excluded records accepted by MMS’s system in June 2007—the month in which much of the data from April 2007 would have been submitted and accepted. We then obtained from MMS a new file of records accepted in June 2007, combined the new data with the rest of the royalty data, and rechecked the monthly totals. This procedure revealed a fairly consistent number of records and leases on a month-to-month basis. We determined that the data we received from MMS were a complete representation of what was in MMS’s data system through our study date and were therefore reliable enough to allow us to use the extract in our more detailed review of royalty data. This monthly consistency is illustrated in figure 9. To examine the completeness of records in more detail, we analyzed a subset of MMS’s royalty data—leases that produced natural gas in the offshore Gulf of Mexico. We chose this subset because of (1) its relatively manageable size—about 2,100 leases out of a total of about 29,000 producing federal and Indian oil and gas leases—and (2) its financial significance—the gas royalties from Gulf of Mexico leases in fiscal year 2008 accounted for almost 30 percent of total federal and Indian oil and gas royalty revenues.
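The month-to-month completeness check described above amounts to a simple frequency comparison. The sketch below is a hypothetical illustration, not MMS's or GAO's actual code; the field name and the 60 percent threshold are assumptions.

```python
from collections import Counter

def flag_low_months(records, threshold=0.6):
    """Count records per acceptance month and flag months whose count falls
    below `threshold` times the all-month average (the April 2007 gap we
    describe was roughly half the typical monthly count)."""
    counts = Counter(r["accept_month"] for r in records)
    average = sum(counts.values()) / len(counts)
    return sorted(month for month, n in counts.items() if n < threshold * average)

# Illustrative data: three normal months and one month missing half its records.
records = ([{"accept_month": "2007-03"}] * 100 +
           [{"accept_month": "2007-04"}] * 50 +
           [{"accept_month": "2007-05"}] * 100 +
           [{"accept_month": "2007-06"}] * 100)
```

A flagged month is only a prompt for follow-up, as in the April 2007 case, where the gap turned out to be an extraction error rather than missing source data.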
For each lease, we compared gas volumes reportedly sold by payors on Form MMS-2014 to gas volumes reportedly produced by operators on MMS’s OGOR. Specifically, for each lease we added together all sales volumes on the Form MMS-2014 of processed and unprocessed gas in thousands of cubic feet for each month from January 2006 to September 2007 and compared these to gas volumes disposed of on the OGOR-B for the same month. From the Form MMS-2014, we included volumes for cash sales (transaction code 01), royalty-in-kind sales (transaction codes 06 and 08), and non-royalty-bearing sales under provisions for deepwater royalty relief (transaction code 41). We excluded from our analysis October through December 2005 because major hurricanes disrupted production in the Gulf of Mexico, resulting in many production facilities being shut down. We also used data from MMS’s Technical Information Management System (TIMS) to identify all leases that belonged to unit agreements and excluded these leases in order to simplify the analysis. This resulted in about 1,500 producing gas leases. A significant number of these leases had royalty reports but no production reports; missing production reports were more prevalent for the last 3 months of fiscal year 2007, possibly indicating that these reports had not yet been received or accepted by MMS’s system. To investigate the completeness of individual royalty records, we examined key royalty data fields to ensure that they were populated. These data fields are necessary to match royalty payments to the proper payor, lease, sales month, and product code. Fields included payor number, lease number, sales date, and transaction code. We also checked that product code and sales type were populated. Because nearly 100 percent of these critical data fields were populated, we discontinued additional tests on assessing the completeness of individual data fields.
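The lease-by-lease volume comparison can be sketched as follows. The record layout and field names are assumptions; the transaction codes (01 cash sale, 06 and 08 royalty-in-kind, 41 deepwater royalty relief) are those described in this report.

```python
from collections import defaultdict

SALES_TRANSACTION_CODES = {"01", "06", "08", "41"}  # cash, RIK, royalty-relief sales

def reconcile_volumes(royalty_lines, production_lines):
    """Sum Form MMS-2014 sales volumes by (lease, sales month) and compare them
    against OGOR-B disposed volumes; return the sums plus lease-months that
    show production but no matching royalty report."""
    sold = defaultdict(float)
    for line in royalty_lines:
        if line["transaction_code"] in SALES_TRANSACTION_CODES:
            sold[(line["lease"], line["sales_month"])] += line["sales_volume_mcf"]
    produced = {(p["lease"], p["month"]): p["disposed_mcf"] for p in production_lines}
    missing_royalty_reports = sorted(k for k in produced if k not in sold)
    return sold, missing_royalty_reports
```

Non-sales lines, such as allowance entries, are excluded by the transaction-code filter so they do not distort the volume sums.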
However, we examined certain data fields to ensure that they were not populated when they should not be. These fields included sales value and sales volume for certain transaction codes, including minimum royalty due (transaction code 02), estimated royalty payment (transaction code 03), transportation allowance (transaction code 11), processing allowance (transaction code 15), and quality bank adjustment (transaction code 13). Population of these data fields could result in counting sales values and sales volumes twice. We then developed tests to investigate the gross reasonableness of certain data fields that our past work highlighted as being problematic, including royalty value, sales value, sales volume, transportation allowance, and processing allowance. We identified royalty-in-kind transactions from transaction codes 06 and 08 and excluded them from this analysis. We employed a technique that is different from MMS’s edit checks, which generally examine only individual royalty lines. We summed the data fields on all royalty records for each month on each lease for each royalty payor and product code. This technique aggregated the original royalty record with all subsequent adjustments, allowing us to examine the net effect and easily identify negative sums for royalty values, sales values, or sales volumes, which MMS’s edit checks of individual lines cannot identify. Since payors generally submit one electronic funds transfer for all the leases upon which they owe royalties for a given month, a negative sum can go undetected if submitted along with many other positive sums. Although we found a relatively small percentage (less than or equal to 0.2 percent) of negative sums, we examined the corresponding royalty lines to determine if their financial impact was significant. We used the same technique of summing royalty records to examine the gross reasonableness of transportation and processing allowances.
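A minimal version of this summing technique, assuming a simple record layout with the field names shown:

```python
from collections import defaultdict

def negative_net_sums(lines, field="royalty_value"):
    """Sum the original entry and all subsequent adjustments for each
    (payor, lease, product code, sales month) combination and return those
    whose net value is negative: errors that line-by-line edit checks miss."""
    totals = defaultdict(float)
    for ln in lines:
        key = (ln["payor"], ln["lease"], ln["product_code"], ln["sales_month"])
        totals[key] += ln[field]
    return {key: value for key, value in totals.items() if value < 0}
```

For example, an original entry of $1,000, a full reversal of -$1,000, and a re-reported value of -$200 each look plausible in isolation, but their net sum of -$200 shows the lease-month was over-reversed.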
Being deductions, these allowances should be negative. We found that transportation allowances and processing allowances were positive 3.8 percent and 10.1 percent of the time, respectively. However, when we examined individual royalty records, we discovered that many of these records were associated with royalty-in-kind transactions and were therefore outside of the scope of our analysis. MMS, which creates the royalty-in-kind data, did not properly identify these RIK leases with the designated royalty-in-kind transaction codes (06 and 08), but instead used the codes for transportation (11) and processing (15) allowances. An MMS official with the RIK program explained that, due to constraints in the RIK system, some transportation and processing allowances could be positive because the RIK system populated the transportation and processing data fields for the current month with changes to prior months reported by pipelines and processing plants. This official also said that the RIK system included all revenues and expenses associated with natural gas liquids from the RIK leases in the processing allowances. These processes for RIK leases are inconsistent with processes for leases on which royalties are paid in cash. For cash royalties, adjustments to previous periods are posted to the specific sales month, not the current month. Also for cash royalties, revenue is identified as sales value, and allowable expenses, such as transportation or processing allowances, are individually identified as transportation or processing allowances for the appropriate product code. The MMS official said that MMS corrected this system problem in July 2007. We were then able to identify the RIK leases through their payor codes, which are alphanumeric as opposed to the numeric payor codes of cash royalty payments, and subsequently removed them.
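Once the payor-code convention was known, excluding RIK lines and flagging positive allowances is straightforward. This sketch assumes the field names shown; the convention (numeric codes for cash payors, alphanumeric for RIK) and transaction codes 11 and 15 are as described in this report.

```python
ALLOWANCE_CODES = {"11", "15"}  # transportation and processing allowances

def is_cash_payor(payor_code):
    """Cash royalty payor codes are numeric; RIK payor codes contain letters."""
    return payor_code.isdigit()

def positive_allowances(lines):
    """Allowances are deductions and should be negative; flag positive
    allowance lines after excluding royalty-in-kind payors."""
    return [ln for ln in lines
            if ln["transaction_code"] in ALLOWANCE_CODES
            and is_cash_payor(ln["payor"])
            and ln["allowance_value"] > 0]
```

Filtering on the payor code first keeps the RIK system's differently structured allowance entries from inflating the count of apparent errors.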
We also checked for transportation and processing allowances being taken in excess of the maximum amounts allowed by federal regulations and checked to see if transportation and processing allowances were taken for transaction codes for which they are not permitted, such as minimum royalties (transaction code 02), estimated royalty payments (transaction code 03), quality banks (transaction code 13), and offshore deep water royalty relief (transaction code 41). Lastly, we examined royalty data to see if payors reported processing allowances for products that are not processed, such as oil, condensate, unprocessed gas, and coalbed methane. We then investigated the reasonableness and accuracy of royalty values, sales values, and sales volumes in more detail because these royalty data fields appeared to be problematic in our previous work. Using the same method of summing these data fields each month for all royalty records for each payor for each lease and each product, we calculated royalty rates by dividing the royalty value prior to allowances by the sales value. We then compared our calculated royalty rates to expected royalty rates based on general lease terms because we did not have access to individual lease terms for the estimated 29,000 producing federal and Indian leases. For offshore leases (product codes 01, 02, 03, 04, and 07), we used royalty rates of 12.5 percent and 16.67 percent for comparison. We identified 331 leases with calculated royalty rates outside of expected values and compared their calculated rates to the royalty rates for these leases in the TIMS database. Sixteen of these leases had royalty rates other than 12.5 or 16.67 percent, and we adjusted our analysis accordingly. For onshore federal gas production (product codes 03, 04, and 07), we compared our calculated royalty rates to the same royalty rates as for offshore leases.
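The royalty-rate comparison can be sketched as below. The input layout and the tolerance are assumptions; the 12.5 and 16.67 percent rates are the general offshore lease terms used in this report.

```python
EXPECTED_RATES = (0.125, 0.1667)  # 12.5 and 16.67 percent general lease terms

def rate_outliers(monthly_sums, tolerance=0.002):
    """Given net monthly sums of (royalty value before allowances, sales value)
    keyed by (payor, lease, product code, sales month), compute the implied
    royalty rate and flag combinations whose rate matches no expected term."""
    flagged = {}
    for key, (royalty_value, sales_value) in monthly_sums.items():
        if sales_value == 0:
            continue  # no rate can be computed
        rate = royalty_value / sales_value
        if not any(abs(rate - expected) <= tolerance for expected in EXPECTED_RATES):
            flagged[key] = round(rate, 4)
    return flagged
```

Flagged combinations are candidates for follow-up against actual lease terms, as with the leases found to carry 5 or 10 percent rates, rather than confirmed errors.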
For onshore federal oil production, we initially compared our calculated royalty rates to rates of 12.5 percent to 25 percent. According to MMS, this latter interval included a number of prescribed royalty rates that were common for oil production from certain leases issued before 1988. However, because of the large number of calculated onshore oil and gas royalty rates that fell outside of expected values, we selected a sample of onshore leases for MMS to research. MMS reported that several leases had royalty rates that were either 5 percent or 10 percent—rates that they identified as common for certain older leases. We adjusted our onshore comparison to include these two rates as acceptable. Because few other leases had royalty rates that were uncommon, we did not ask MMS to research additional onshore leases. For Indian leases, we similarly calculated royalty rates and determined that few leases had royalty rates of less than 12.5 percent, so we did not pursue comparing these to actual lease terms. We further investigated the reasonableness and accuracy of royalty values, sales values, and sales volumes by calculating unit oil and gas sales prices with Gulf of Mexico monthly data submitted by royalty payors for each lease. We limited our analysis to the offshore Gulf of Mexico because this area has well-developed, transparent markets where regional prices are readily available, unlike onshore markets. To compare oil prices, we used a range of market prices each month for comparison, the low price being the lowest daily spot price that month for Mars oil (rounded down to the nearest dollar), and the high price being the highest daily spot price for Light Louisiana Sweet (rounded up to the nearest dollar).
We investigated doing similar comparisons onshore but discovered that the range of onshore prices, with West Texas Intermediate among the highest priced oils we found and Wyoming asphaltic among the lowest, was so wide that any comparison would have been meaningless. To compare gas prices, we examined the maximum mid-day spot price, the minimum mid-day spot price, and the first-of-the-month price at the Henry Hub and chose the highest and the lowest price from among the three (we rounded the lowest price down to the nearest dollar and rounded the highest price up to the nearest dollar). In calculating unit gas prices from MMS royalty data, we used volumes expressed per MMBtu to remove the effects of quality on price. As with oil prices, we investigated doing gas price comparisons onshore but found that exceptionally low gas prices at Opal, Wyoming, created a range of prices that was so wide as to make any comparisons meaningless. To examine factors that affect oil and gas companies’ abilities to accurately report royalties owed to the federal government, we interviewed a limited number of oil and gas company representatives. To solicit views on oil and gas companies’ experiences with reporting royalty data to MMS, we used a nonprobability sample. To draw our sample, we identified the 20 oil and gas companies that submitted the highest number of royalty lines on Form MMS-2014 in fiscal years 2006 and 2007 and contacted representatives from the top 15 to request information. The top 20 companies accounted for 63 percent of all the royalty lines reported, and the top 15 accounted for more than 56 percent. In addition, we contacted the two largest national oil and gas industry associations—the American Petroleum Institute (API) and the Independent Petroleum Association of the Mountain States (IPAMS)—to request information.
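The benchmark-band comparison for Gulf of Mexico prices can be sketched as follows. The data layout is assumed; the rounding convention (lowest benchmark rounded down, highest rounded up to the nearest dollar) follows the method described above.

```python
import math

def price_outliers(unit_prices, benchmarks):
    """Flag lease-month unit prices falling outside that month's benchmark
    band, with the low benchmark rounded down and the high benchmark rounded
    up to the nearest dollar (for Gulf oil: lowest Mars daily spot price to
    highest Light Louisiana Sweet daily spot price)."""
    flagged = []
    for (lease, month), price in unit_prices.items():
        low, high = benchmarks[month]
        if not math.floor(low) <= price <= math.ceil(high):
            flagged.append((lease, month, price))
    return sorted(flagged)
```

The same structure works for gas with Henry Hub prices per MMBtu; the check is only meaningful where the regional benchmark band is narrow, which is why the onshore comparison was abandoned.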
IPAMS describes itself as a non-profit trade association representing more than 400 independent oil and natural gas producers, service and supply companies, banking and financial institutions, and industry consultants committed to environmentally responsible oil and natural gas development in the Intermountain West. API reports that it is the only national trade association that represents all aspects of America’s oil and natural gas industry. API has 400 corporate members, from the largest major oil company to the smallest of independents. They include producers, refiners, suppliers, pipeline operators, and marine transporters, as well as service and supply companies that support all segments of the industry. For our semi-structured interview questions, we received a total of 10 responses from oil and gas companies. Specifically, of the 15 companies with the most royalty lines, 2 responded to our request. From API members we received two responses, and from IPAMS members we received six responses. We personally met with two company representatives at the IPAMS office and discussed their written responses to our questions. Membership in these associations and inclusion among the 15 companies with the most royalty lines are not mutually exclusive. Results from this nonprobability sample cannot be used to make inferences about all oil and gas companies, because companies that were not included in our list of the top royalty payors or members in the associations we contacted had no chance of being selected as part of the sample. In addition to the individual named above, Jon Ludwigson, Assistant Director; Ron Belak; Melinda Cordero; Alison O’Neill; Kim Raheb; Barbara Timmerman; and Mary Welch made key contributions to this report.
In fiscal year 2008, the Department of the Interior's Minerals Management Service (MMS) collected over $12 billion in royalties from oil and gas production from federal lands and waters. Companies that produce this oil and gas self-report to MMS data on the amount of oil and gas they produced and sold, the value of this production, and the amount of royalties owed. Since 2004, GAO has noted systemic problems with these data and recommended improvements. GAO is providing: (1) a descriptive update on MMS's key efforts to improve the accuracy of oil and gas royalty data; (2) GAO's assessment of the completeness and reasonableness of fiscal years 2006 and 2007 oil and gas royalty data--the latest data available; and (3) factors identified by oil and gas companies that affect their ability to accurately report royalties owed to the federal government. MMS has several key efforts underway to improve the accuracy of the payor-reported data used to collect and verify royalties, but it is too soon to evaluate their effectiveness. MMS is in the process of implementing (1) GAO's past recommendations to help identify missing royalty reports and monitor payors' changes to royalty data; (2) recommendations from the Royalty Policy Committee--a group empanelled by the Secretary of the Interior to provide advice on managing federal and Indian leases and revenues--to improve edit checks, monitor the quality of natural gas, revise gas valuation regulations, and improve coordination with BLM; and (3) other efforts on adding specific edits for sales prices and identifying discrepancies in volumes between operators and payors. While much of the royalty data we examined from fiscal years 2006 and 2007 are reasonable, we found significant instances where data were missing or appeared erroneous.
For example, we examined gas leases in the Gulf of Mexico and found that, about 5.5 percent of the time, lease operators reported production but royalty payors did not submit the corresponding royalty reports, potentially resulting in $117 million in uncollected royalties. We also found that a small percentage of royalty payors reported negative royalty values, which should not occur, potentially costing $41 million in uncollected royalties. In addition, payors claimed processing allowances 2.3 percent of the time for unprocessed gas, potentially resulting in $2 million in uncollected royalties. Furthermore, we found significant instances where payor-provided data on royalties paid and the volume and/or the value of the oil and gas produced appeared erroneous because they were outside of expected ranges. Oil and gas company representatives reported that several factors affect their ability to accurately report royalties, including complex land ownership, administratively combining leases into units, ambiguity in federal regulations that establish gas prices, short time frames for filing royalty reports, and inaccuracies in MMS's internal databases.
Influenza is a contagious respiratory illness caused by a number of different influenza virus strains and can range in severity from mild to lethal. Symptoms can include cough, muscle or body aches, and fatigue. Vaccination is the primary method for preventing infection with strains of the influenza virus and controlling the disease. In order for a vaccine to be most effective, it needs enough well-matched antigen to stimulate a protective immune response, antigen being the active substance in a vaccine that provides immunity by causing the body to produce protective antibodies to fight off a particular influenza strain. The vaccine’s antigen needs to be derived from a strain that is well-matched to a specific influenza strain—in wide circulation in humans—so that the antibodies formed in response to the vaccine protect against infection from that strain. Because multiple influenza strains are in constant circulation, seasonal vaccine is produced and administered annually to protect against the three influenza strains expected to be most prevalent that year (i.e., a trivalent vaccine). In contrast, the 2009 H1N1 pandemic vaccine was formulated to match the single pandemic-causing strain (i.e., a monovalent vaccine). Within the federal government, HHS is the department responsible for leading and coordinating preparedness and medical response activities to public health emergencies, per the 2006 Pandemic and All-Hazards Preparedness Act. Additionally, as the principal department for protecting the public’s health, HHS is the primary department funding the research and development of influenza vaccines. HHS enters into contracts with manufacturers for the development of new influenza vaccines using alternative technologies. 
DOD also makes some investments through its technology investment agreements for the research and development of alternative technologies that can be used in producing influenza vaccine as part of its preparedness efforts in order to maintain the military’s readiness. Manufacturers with which these agencies have entered into contracts or technology investment agreements include large-scale influenza vaccine manufacturers that have vaccines licensed for use in the United States and internationally as well as manufacturers that currently only have vaccines in research and development. Influenza vaccines—both seasonal and pandemic—are biological products. Within HHS, FDA is the federal agency responsible for the licensure and regulation of biological products for use in the U.S. market (see app. I for additional information on the research and development and review of licensing applications for new influenza vaccines in the United States). These responsibilities include issuing guidance for existing and new vaccines and consulting with manufacturers on the development of their new vaccines, such as on how manufacturers conduct clinical trials required for licensure of new vaccines. Until FDA has approved its licensing application, no manufacturer can market its biological product in the United States. Table 1 summarizes the federal government’s role in the research and development of alternative technologies and the licensure and regulation of influenza vaccines. Given its responsibilities for national seasonal influenza and pandemic preparedness and response, HHS has an interest in enhancing domestic production capacity—that is, enhancing the nation’s overall infrastructure for influenza vaccine production—and expanding the supply or accelerating the availability of influenza vaccine. HHS began awarding contracts to enhance domestic production capacity for the current egg- based technology as early as fiscal year 2005. 
Since fiscal year 2005, HHS has supported a program to ensure a year-round, secure, domestic egg supply; prior to this funding, manufacturers maintained a 9-month supply of eggs—enough for production only during the influenza season without any additional capacity for emergencies, such as an influenza pandemic. Despite HHS’s initial efforts to maintain a year-round egg supply, other events have occurred that highlighted the need for HHS to increase domestic production capacity for influenza vaccine and to support the introduction of influenza vaccines produced using alternative technologies. First was the unexpected loss of almost half of the influenza vaccine supply because of potential contamination during the 2004–05 season and the reliance on two domestic influenza vaccine manufacturers to supply enough vaccine for that year. Second was the recognition by HHS that one of the greatest challenges to preparing for an influenza pandemic and implementing its strategy for using vaccines was the lack of production capacity within the United States. As we noted in prior work, the lack of U.S. production capacity is cause for concern among experts because it is possible that countries without domestic production capacity will not have access to influenza vaccine in the event of a pandemic if countries where vaccine is produced prohibit the export of the pandemic vaccine until their own needs are met. As a result, HHS continued its funding of egg-based technology for the production of influenza vaccine to enhance domestic production capacity using this technology. For example, in fiscal year 2007, HHS entered into contracts with two manufacturers for the retrofitting of existing domestic egg-based production facilities for the production of pandemic influenza vaccine. 
Some of the completed facilities were used in the 2009 H1N1 pandemic, and according to HHS, when all the retrofitting is complete, production capacity will double at one of these facilities and triple at the other. Third, concerns about strains of the H5N1 virus, which reemerged in the early 2000s and continue to cause severe infections in humans, further prompted interest in alternative technologies to egg-based technology for producing influenza vaccine. Strains of the H5N1 virus have infected chicken flocks and other poultry, resulting in the culling of these flocks and raising concern that the egg supply for influenza vaccine was at risk. Thus, HHS began a more concerted effort to fund the research and development of influenza vaccines using three alternative technologies. Specifically, HHS has funded the development of vaccines using two alternative production technologies—cell-based and recombinant technologies—and vaccines using a third alternative technology—antigen-sparing technology (adjuvants). Each of these three alternative technologies has the potential to expand the supply or accelerate the availability of both seasonal and pandemic influenza vaccines (see app. II for a description of the production process for influenza vaccine using the current, egg-based technology). Expanding the supply or accelerating the availability of influenza vaccine is particularly important when there is a perceived shortage of seasonal vaccine—when vaccine is not available and demand is highest—or during a pandemic when demand increases because of increased risk of disease and death. Expanding the supply or accelerating the availability of influenza vaccine can be done in two ways. The first is to increase the overall amount of vaccine available at the end of the production process; the second is to speed up the production process itself by, for example, reducing or eliminating steps in the process.
Table 2 describes these three alternative technologies and their potential to expand the supply or accelerate the availability of influenza vaccines (see app. III for more information on these alternative technologies). In fiscal year 2005, with funds available from that year's appropriation, HHS funded the research and development of an influenza vaccine produced using cell-based technology. Following the release of the Plan, numerous additional appropriations became available for the acquisition and development of pharmaceutical interventions for pandemic-related purposes, including approximately $3.2 billion dedicated for vaccines. HHS has since used these funds, as well as funds available from previous appropriations, for multiyear contracts for the development of influenza vaccine using cell-based technology, recombinant technology, and adjuvants. In response to the 2009 H1N1 pandemic, Congress provided HHS with a supplemental appropriation to prepare for and respond to an influenza pandemic. In addition to making $1.85 billion immediately available to HHS, the 2009 supplemental appropriation made $5.8 billion available contingent upon one or more presidential notifications to Congress. In August 2010, after the 2009 H1N1 pandemic had ended, HHS notified Congress of its plan to direct some of the remaining funds toward pandemic and related preparedness activities. Specifically, HHS proposed spending $1.98 billion on a variety of vaccine-related activities, including the development of alternative technologies, such as recombinant technology. According to HHS, it also uses funding available from annual appropriations, such as its fiscal year appropriations for 2009 and 2010, for pandemic-related activities.
From fiscal year 2005 through March 2011, the federal government awarded approximately $2.1 billion in contracts and technology investment agreements for the research and development of cell-based and recombinant technologies and adjuvants, which can be used in producing influenza vaccines. Manufacturers are demonstrating progress toward licensure. Beginning in fiscal year 2005, HHS awarded the most funding through contracts to manufacturers to develop cell-based technology. With these funds, two manufacturers are demonstrating progress toward licensure of a vaccine by completing clinical trials required to file for licensure with FDA, and one of these two manufacturers has also constructed a domestic cell-based influenza vaccine facility. HHS awarded contracts to six manufacturers—one manufacturer in fiscal year 2005 and five manufacturers in fiscal year 2006—worth a total of approximately $1 billion for the development of an influenza vaccine produced using cell-based technology (see table 3). According to HHS, it awarded multiple contracts because it expected some attrition by manufacturers as the development of new influenza vaccines progressed. Cell-based technology has the potential to increase the overall amount of vaccine available at the end of the production process. As of March 2011, two of the manufacturers to which HHS had awarded contracts—DynPort Vaccine Company LLC (with Baxter International Inc.) (DynPort/Baxter) and Novartis Vaccines and Diagnostics Inc. (Novartis Vaccines)—have completed clinical trials required to file for licensure with FDA. While Novartis Vaccines anticipates submitting a licensing application for its seasonal influenza vaccine using cell-based technology to FDA in 2011, DynPort/Baxter anticipates submitting its licensing application to FDA in 2012.
Additionally, GlaxoSmithKline plc (GlaxoSmithKline) is currently conducting clinical trials with its adjuvanted cell-based pandemic influenza vaccine, and MedImmune, LLC is conducting preclinical studies in animals on its cell-based pandemic influenza vaccine. The remaining two contracts with sanofi pasteur and Solvay Pharmaceuticals were terminated by HHS. In addition to the six contracts awarded for the research and development of cell-based influenza vaccine, HHS also entered into a $486.6 million contract with Novartis Vaccines in fiscal year 2009 for the construction of a cell-based influenza vaccine production facility in the United States to enhance domestic production capacity. According to HHS, Novartis Vaccines completed construction of this facility in November 2009 and will have qualified the facility for producing pandemic vaccine using cell-based technology, if needed, by the end of 2011. HHS expects the new facility to provide at least 25 percent of the needed domestic production capacity for pandemic vaccine. This facility also has the capacity to produce seasonal and adjuvanted influenza vaccine as well as other biological products that use this technology for other infectious diseases. In fiscal year 2009, HHS awarded contracts to manufacturers for the research and development of recombinant technology. Recombinant technology has the potential to increase the overall amount of vaccine available at the end of the production process and speed up the production process itself, in part, because unlike egg-based and cell-based technologies, it does not depend on the replication of the influenza virus for production. In fiscal year 2009, HHS entered into a $34.5 million contract with Protein Sciences Corporation (Protein Sciences) for the continued development of recombinant technology for use in producing an influenza vaccine. 
According to HHS, if Protein Sciences’ recombinant, seasonal influenza vaccine is shown to be safe and effective through clinical trials, the contract requires the company to establish enough domestic manufacturing capacity to provide finished vaccine within 12 weeks of the beginning of a pandemic and to produce at least 50 million doses of pandemic vaccine within 6 months of the beginning of a pandemic. In May 2011, HHS extended its contract with Protein Sciences for 2 years with $46.8 million of additional funding. In February 2011, HHS awarded two additional contracts for the research and development of pandemic influenza vaccines using recombinant technology. HHS awarded contracts to Novavax, Inc. (Novavax) for $97.3 million and VaxInnate, Inc. (VaxInnate) for $117.9 million each for a 3-year period. According to HHS, if the manufacturer and department mutually agree, each respective contract may be extended for an additional 2-year period, resulting in contract amounts totaling $179.1 million for Novavax and $196.6 million for VaxInnate (see table 4). In contrast to HHS’s contract awards specifically designated for influenza vaccine described above, DOD’s funding efforts have been more generally targeted toward the research and development of technologies that could be used in producing these vaccines. For example, in fiscal year 2010, DOD entered into technology investment agreements with manufacturers and research institutes—totaling approximately $86.9 million—for the research and development of recombinant technology through a DOD initiative called Blue Angel. The Blue Angel initiative is intended to accelerate ongoing programs that would potentially assist the federal government in providing a governmentwide response to an influenza pandemic. 
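The recombinant-technology contract figures above imply the value of each optional 2-year extension. As a quick arithmetic check (a sketch using only the dollar amounts reported above, not an official contract breakdown):

```python
# Base 3-year awards and the totals with the optional 2-year extension,
# in millions of dollars, as reported in the text above.
contracts = {
    "Novavax": {"base": 97.3, "with_extension": 179.1},
    "VaxInnate": {"base": 117.9, "with_extension": 196.6},
}

for name, amounts in contracts.items():
    # Implied value of the optional 2-year extension.
    extension = round(amounts["with_extension"] - amounts["base"], 1)
    print(f"{name}: extension worth ${extension} million")
```

That is, the optional extensions would add roughly $81.8 million (Novavax) and $78.7 million (VaxInnate) to the base awards.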
Under the Blue Angel initiative, DOD supported the initial testing of a production process using recombinant technology to produce antigen—the active substance in a vaccine that stimulates the production of protective antibodies—using the 2009 H1N1 pandemic strain. According to DOD, although the initiative did not result in a finished vaccine, the first batch of antigen was produced within 30 days of receiving information on the pandemic-causing strain. Since fiscal year 2007, HHS has also awarded contracts for the research and development of an adjuvanted influenza vaccine. Adjuvants have the potential to increase the overall amount of vaccine available at the end of the production process by enhancing the immune response, thereby reducing the amount of antigen needed per vaccine dose. Two manufacturers have demonstrated progress toward licensure of their vaccines by completing clinical trials. HHS awarded three contracts totaling $152 million to GlaxoSmithKline, Novartis Vaccines, and Intercell AG for the research and development of an adjuvanted influenza vaccine (see table 5). Of the three manufacturers awarded contracts, GlaxoSmithKline anticipates submitting a licensing application for its adjuvanted egg-based pandemic influenza vaccine to FDA for review in 2011, while Novartis Vaccines anticipates submitting a licensing application for its adjuvanted egg-based seasonal influenza vaccine to FDA for review in 2012. According to HHS, Intercell AG's clinical trials did not achieve the desired result and were ended. In addition to its awards through contracts with manufacturers, HHS also provided $4 million in funding to the National Institutes of Health (NIH)—an agency within HHS—for H5N1 "mix-and-match" studies starting in March 2008.
According to HHS, these studies are designed to determine whether the adjuvant from one manufacturer can be safely and effectively combined with the antigen from another manufacturer in the case of a public health emergency, such as an influenza pandemic. The ability to combine the antigen from one manufacturer with the adjuvant from another manufacturer could increase the overall vaccine supply during a pandemic. The preliminary preclinical studies in animals with an H5N1 vaccine were completed in early 2009 in preparation for clinical testing by NIH. However, NIH delayed its work on the H5N1 vaccine to conduct clinical trials testing the unadjuvanted and mix-and-match 2009 H1N1 pandemic vaccine as part of HHS's response to the pandemic. According to NIH officials we spoke with, NIH resumed work on the H5N1 mix-and-match studies in May 2011; officials anticipate completing clinical trials for these studies in 2012. DOD has also funded the development of adjuvants for use with influenza vaccine. In fiscal year 2009, DOD entered into a technology investment agreement for $3.3 million with the Infectious Disease Research Institute. According to DOD, the department is currently awaiting the results of completed animal studies using an adjuvanted vaccine. Some stakeholders and federal reports identified three primary challenges to the development and licensure of influenza vaccines using alternative technologies: low demand, high research and development costs, and regulatory challenges. Some stakeholders told us that low demand because of low vaccination rates hinders manufacturers' willingness to develop seasonal influenza vaccines using alternative technologies. According to CDC, during the 2009–10 influenza season, national vaccination rates reached an estimated 41 percent of the population aged 6 months or older for the seasonal vaccine, and an estimated 27 percent for the separate 2009 H1N1 pandemic vaccine.
Each influenza season more vaccine is produced than is actually used, even in years when there has been a perceived shortage of influenza vaccine because of challenges in the production process. Data from CDC, FDA, and the American Medical Association confirm that—despite an increase in the total amount of influenza vaccine produced and distributed since at least 2001—more doses of seasonal vaccine are produced than distributed each year, including in years when there were few licensed manufacturers or a perceived vaccine shortage (see table 6). This excess vaccine expires and is destroyed at the season's end as it will not be useful for the next influenza season, when a new vaccine will need to be formulated using the three influenza strains expected to be most prevalent that year. Additionally, despite the increase in influenza vaccine production and distribution, and even though the United States uses more seasonal vaccine than any other country, 5 of 12 manufacturer representatives, 1 of 3 industry association representatives, and 2 of 12 other experts we interviewed said that this low demand decreases incentives for manufacturers to develop new seasonal influenza vaccines using alternative technologies. Stakeholders told us that there are a number of reasons why demand for seasonal influenza vaccine is low. For example, two experts stated that patients commonly do not view seasonal influenza as a serious disease, and another expert and an industry association representative stated there is a need for more patient education on the safety of influenza vaccine to overcome patient and provider hesitancy. Researchers have also found that patients and providers have concerns about influenza vaccine. One manufacturer representative also noted that the current influenza vaccine is less effective for certain populations, such as the elderly, which also decreases demand.
We have previously reported that according to CDC, a recommendation from a physician or other health care provider is the most important factor in an individual's decision to get vaccinated. Additionally, a recent review of survey data found that 85 percent of parents surveyed cited health care professionals as one of the three most important sources of information in making decisions about children's vaccines. CDC has made efforts to encourage providers to recommend vaccination to their patients. However, despite these efforts, available data suggest that getting providers to recommend vaccination for their patients has been difficult. CDC told us that it is working closely with numerous partners to implement an influenza vaccine communication plan utilizing multiple forms of media to reach the general public as well as specific target populations. HHS officials acknowledged the challenge of low demand for seasonal influenza vaccine; however, they said manufacturers remain interested in pursuing the development of new influenza vaccines using alternative technologies. For example, according to department officials, manufacturers have more than two dozen influenza vaccines in development, and many of these manufacturers have received funds from HHS. Some stakeholders said that the high research and development costs required for the development of influenza vaccines can decrease manufacturers' incentives to pursue new influenza vaccines using alternative technologies. Six of the manufacturer representatives we spoke with said that research and development costs are high. Furthermore, five manufacturer representatives we spoke with noted that clinical trials in particular contributed to high research and development costs.
For example, a representative for one manufacturer we spoke with noted the significant costs associated with the research and development of its currently licensed egg-based influenza vaccine, estimating that his company has spent $400 million on clinical trials alone. One small-scale manufacturer conducting clinical trials for a new influenza vaccine using an alternative technology estimated that it spends $150,000 per day on these trials and other expenses as it moves toward applying for licensure. In addition, PCAST—a presidential advisory council—found in a recent report on influenza vaccine research and development that constructing a cell-based influenza vaccine production facility could cost more than $1 billion and that it could take over 30 years to recover the investment. Access to capital is important to manufacturers because of these high research and development costs. A manufacturer representative and an industry association representative that we spoke with told us that manufacturers' difficulties in raising capital to finance research and development costs deterred or slowed the development of new influenza vaccines produced using alternative technologies. One manufacturer representative told us that in the current economic market it has been challenging for his firm to find investors. Three other manufacturer representatives noted that their decision making is also influenced by perceptions of whether the benefits of a new influenza vaccine will offset these high research and development costs by increasing production efficiency or supporting higher prices for the new product compared to the current vaccine. HHS told us that it has worked to address this issue through its funding for influenza vaccines using alternative technologies and that its support of manufacturers' efforts has helped to change the return on investment such that manufacturers have more incentive to pursue the development of new influenza vaccines using alternative technologies.
Additionally, HHS noted that increased investments in this area have generated a significant interest in this type of research and development. Some stakeholders identified two regulatory challenges to the development of influenza vaccines using alternative technologies. First, some stakeholders and recent federal reports identified weaknesses in FDA’s “regulatory science” capacity—that is, its ability to utilize resources, such as staff expertise, to develop new tests and measures to assess the safety, efficacy, quality, and performance of FDA-regulated products, such as influenza vaccines. Three manufacturer representatives, one industry association representative, and three experts told us that regulatory science weaknesses at FDA create challenges in the review of new product licensing applications, including those for new influenza vaccines. In particular, stakeholders told us that FDA’s staff expertise in alternative technologies affects its ability to work with manufacturers developing new influenza vaccines using these technologies, and that limited staff expertise is a challenge to efficient communication. A manufacturer representative told us that FDA’s ability to conduct its own research is important in understanding the science manufacturers present in licensing applications, but noted that some of FDA’s research programs have been cut in recent years thereby hindering its ability to gain needed experience. An industry association representative told us that manufacturers pursuing the development of some influenza vaccines using alternative technologies sometimes find it difficult to find FDA staff who can answer their questions. One expert said that many experienced senior leaders in FDA’s biologics division—where licensing applications for new vaccines are reviewed—have left the agency in recent years; therefore, reviewers are less familiar with these alternative technologies. 
This expert said this lack of familiarity can make it more difficult for manufacturers to work with reviewers to explain the technology to them. Some recent federal reports have echoed stakeholders’ concerns about FDA’s regulatory science capacity. According to a recent HHS report, FDA needs to be able to conduct applied research in order to better incorporate advances in life sciences research and knowledge into the regulatory process. In order to make that possible, the report states that FDA needs greater staff expertise and infrastructure. In addition, a 2007 report prepared for the FDA Science Board—an FDA advisory group—found that the development of products based on new science cannot be adequately regulated by FDA because of a lack of capacity to review new technologies. However, FDA officials told us that they are not aware of actual examples of lack of expertise within the agency and that their staff consists of highly qualified scientists. Furthermore, FDA officials noted the continuing education that staff members engage in to maintain their proficiency in technological advances as well as the quality of FDA’s research programs. The agency said it has the scientific and regulatory experience to adequately assess the safety and effectiveness of vaccines for use in the United States, but as noted later in this report, it continues to fund improvements in regulatory science capacity and staff expertise. Some stakeholders also identified a second challenge, namely that FDA’s written guidance and consultation with manufacturers on some of the requirements for licensure of new influenza vaccines using alternative technologies is not sufficiently comprehensive. They noted that FDA’s guidance documents do not include all of the various scenarios manufacturers may encounter. Additionally, one manufacturer representative said it can take months to arrange a formal meeting with FDA officials. 
Another manufacturer representative noted that FDA often conducts its discussions with manufacturers in stages, which can limit their ability to plan for long-term issues. According to stakeholders, this lack of detail and incremental approach can hinder manufacturers' ability to plan their research and development efforts, including those for new influenza vaccines, because they are uncertain as to what requirements they must meet in order to obtain licensure. For example, two manufacturer representatives said that it is unclear what size clinical trials will be required for influenza vaccines using alternative technologies because the guidance documents available are not specific enough in laying out these requirements. In addition, PCAST found in a recent report on influenza vaccine research and development that there is currently uncertainty about the regulatory pathway for recombinant influenza vaccines and recommended that guidance be developed on areas including criteria for formulation, safety, immunogenicity, and efficacy. Also, one manufacturer representative told us that his company was repeating clinical trials for an adjuvanted vaccine that had already been performed in Europe because the company had been unaware of certain FDA requirements for data that are not typically required for similar vaccines or by regulatory authorities in other countries. The manufacturer representative noted that this situation could have been avoided if FDA had provided a more complete explanation of the requirements in this regard. FDA officials acknowledged that the agency's guidance documents are high level, explaining that specific instructions are unique to the product as guidance documents cannot cover all possible scenarios. In its comments, HHS noted that FDA's guidance is intended to provide a regulatory framework, adding that guidance cannot be specific to individual manufacturing processes because these processes are trade secrets.
Because guidance documents cannot be very specific, FDA officials told us that they regularly meet with manufacturers developing vaccines using alternative technologies to discuss various issues and provide advice. They also noted that the agency has a good record of achieving its goals for meeting with manufacturers within specific time frames, adding that officials often consult with manufacturers in other ways, such as participating in teleconferences. Additionally, FDA officials said that it is necessary to consult with manufacturers in stages because their review is an iterative process. They explained that it is not always apparent what requirements may be necessary for a late phase of clinical trials because such decisions are based, in part, on results from earlier trials the manufacturer has completed. Furthermore, FDA noted that it has approved many vaccines for other diseases that used alternative technologies, such as adjuvants, and these manufacturers were able to successfully develop and license their products using FDA's guidance. Finally, FDA has published guidance on criteria for the formulation, safety, immunogenicity, and efficacy of vaccines using recombinant technology, and one manufacturer has submitted a licensing application for its influenza vaccine using this technology. According to HHS, part of this guidance, which is available on FDA's Web site, is related to clinical trials and is specific to clinical data needed to support the licensure of pandemic influenza vaccines. HHS has expanded its recommendations for seasonal vaccination to a larger population and has released a 10-year strategic plan to address national immunizations. HHS also plans to assist manufacturers with high research and development costs by funding the establishment of specialized facilities.
In addition, HHS plans to fund the enhancement of regulatory science capacity and FDA’s staff expertise to address challenges that may hinder the licensure of new influenza vaccines using alternative technologies. HHS has expanded its recommendations for seasonal influenza vaccination to a larger population and has released its 10-year strategy to enhance immunization rates in the United States, which it expects could eventually increase demand for influenza vaccine. In August 2010, HHS announced that it was expanding its vaccination recommendations for the 2010–11 influenza season from specific target groups based on personal risk from the disease to all persons aged 6 months and older. According to HHS, its expanded recommendations simplify the public health message to providers and to the public on who should be vaccinated against seasonal influenza. Because the 2010–11 influenza season is the first for which the recommendations are in place and the first influenza season after the 2009 H1N1 pandemic, HHS is also evaluating vaccination rates from this season for changes from previous years. For example, preliminary data from CDC suggest an increase in vaccination rates against seasonal influenza among children aged 6 months to 17 years. According to CDC, vaccination rates for this population increased by 6.7 percentage points, or from 42.3 percent during the 2009–10 influenza season to 49 percent, as of February 2011. Officials noted that currently, only about 40 percent of Americans are vaccinated against seasonal influenza. HHS added that eventually demand for seasonal vaccine could increase by approximately 32 percent—or 100 million people—as a result of the expanded recommendations. Additionally, a rise in immunization rates for seasonal influenza vaccine could result in an increase in the market for this vaccine of approximately $3 billion annually, according to HHS. 
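The vaccination-rate figures above can be checked with simple arithmetic. The sketch below uses an assumed U.S. population of about 310 million, which is our illustrative assumption rather than a figure from the report; one consistent reading of "approximately 32 percent—or 100 million people" is 32 percent of that population:

```python
# Reported vaccination rates for children aged 6 months to 17 years.
rate_2009_10 = 42.3  # percent, 2009-10 season
rate_2010_11 = 49.0  # percent, as of February 2011

# Change in percentage points (the report says 6.7).
print(round(rate_2010_11 - rate_2009_10, 1))

# Reported potential demand increase: ~32 percent, described as
# roughly 100 million people.
us_population_millions = 310  # assumption, for illustration only
print(round(0.32 * us_population_millions))  # roughly 100 (million people)
```

Both figures are consistent with the numbers HHS reported.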
In February 2011, HHS released its updated national immunization strategy, which outlines, in part, the department's efforts to address low vaccination rates for influenza. This strategy, called the National Vaccine Plan, lays out HHS's efforts to enhance aspects of vaccines and vaccination rates against infectious diseases and provides a comprehensive plan for U.S. vaccine and immunization efforts from childhood to adulthood. As we have noted above, several stakeholders we spoke with cited a lack of provider and public education and concerns regarding the safety of vaccines as factors affecting the demand for influenza vaccine. The National Vaccine Plan has been updated to reflect experiences from the 2009 H1N1 pandemic and describes various goals, such as enhancing provider and public education on vaccines and vaccine safety and assisting providers and the public with making informed decisions regarding vaccination. HHS also plans to develop a corresponding implementation plan that will include measurable indicators so the department can assess its progress in achieving the goals of the National Vaccine Plan; HHS anticipates releasing this implementation plan later in 2011. Additionally, HHS launched a new Web site, www.vaccines.gov, in the spring of 2011 as another way of educating providers and the public on vaccines and vaccine safety. HHS plans to assist manufacturers with high research and development costs by supporting the establishment of two or three privately owned facilities called Centers for Innovation in Advanced Development and Manufacturing that will provide support and expertise to manufacturers. HHS indicated that it intends to enter into contracts to partially fund the construction of new facilities or the retrofitting of existing facilities using approximately $478 million available from various appropriations.
Although not the primary purpose of these facilities, according to HHS, one benefit of these specialized facilities is that they could reduce smaller, less-experienced manufacturers' research and development costs by providing needed resources and knowledge about manufacturing, and reduce the technical risks of researching and developing medical countermeasures, such as influenza vaccine produced using alternative technologies. These facilities are primarily intended to provide, on a routine basis, core services that include the advanced development and manufacturing of chemical, biological, radiological, and nuclear medical countermeasures. These specialized facilities may also be used in an emergency to make pandemic influenza vaccine produced using alternative technologies, such as recombinant technology. HHS noted that smaller, less-experienced manufacturers often lack the staff and other resources to address technical issues—such as those related to production, quality control, and licensure—resulting in delays and higher costs, which could cause an effort to fail. These specialized facilities would provide manufacturers with the necessary staff, technical resources, and expertise to address such issues, avoiding the delays that can lead to higher costs or failed efforts. According to HHS, these facilities might also reduce the total cost of the federal government's contracts with manufacturers. By using these specialized facilities for vaccine production, the costs associated with producing these initial vaccine doses, such as those for use in clinical trials, could be included in the facilities' operating budgets rather than in manufacturers' research and development contracts, thereby reducing the total amount of these contracts. According to HHS, the enhanced production capacity from these facilities could also help manufacturers with which HHS has contracts avoid production delays.
These specialized facilities could also allow smaller, less-experienced manufacturers to focus more on developing new influenza vaccines using alternative technologies rather than on production and licensure issues. HHS anticipates awarding competitive contracts to establish these facilities in 2011 or 2012. HHS has announced plans to spend $170 million available from its fiscal year 2009 and fiscal year 2010 annual appropriations, in part, to facilitate FDA’s review of licensing applications for influenza vaccines produced using alternative technologies and for other medical countermeasures. Specifically, HHS intends to enhance regulatory science at FDA, that is, the development of new tests and methods to assess the safety, efficacy, quality, and performance of FDA-regulated products, such as influenza vaccines. According to HHS’s report, The Public Health Emergency Medical Countermeasures Enterprise Review, improvements in regulatory science at FDA will help strengthen the agency’s review of licensing applications. In October 2010, FDA released a report outlining a proposed framework for advancing regulatory science using the funding intended by HHS for this purpose. According to FDA, improvements in regulatory science would focus on transitioning products more efficiently through review from initial concepts to licensed products. In its report, FDA identified areas in which it would focus that would potentially assist it in reviewing licensing applications for products more quickly, including during an influenza pandemic or other public health emergency. In its October 2010 report, FDA proposes additional efforts that could enhance staff expertise in reviewing licensing applications for new vaccines using alternative technologies. For example, FDA intends to initiate a program to help recruit experts in emerging technologies to work as researchers and reviewers throughout the agency. 
FDA is also initiating the creation and support of Centers of Excellence in Regulatory Science to conduct applied regulatory science research both independently and in collaboration with the agency. According to FDA, this additional research will enhance staff expertise with emerging technologies. FDA has issued guidance to the industry on various aspects of vaccine production, such as on the selection of cells as a medium for producing vaccines and the clinical data needed for licensure of pandemic influenza vaccines. FDA officials noted that developing guidance relies on experience, which takes time to acquire, adding that they plan to continue to make themselves available to manufacturers to consult with and advise them on various aspects of the vaccine development process, including on conducting clinical trials and safety assessments. HHS, DOD, and the Department of State reviewed a draft of this report. HHS and DOD provided written comments, which we have reprinted in appendixes IV and V, respectively. The Department of State did not provide comments. HHS also provided technical comments, which we have incorporated as appropriate. In its comments, HHS stated that it agreed on the importance of expertise and research to the development of influenza vaccines produced using alternative technologies—cell-based and recombinant technologies and adjuvants. HHS also noted that the department has made significant contributions to advancing such expertise and research, as reflected in the collaboration within the department as well as with influenza vaccine manufacturers during the 2009 H1N1 pandemic. For example, HHS described how the Biomedical Advanced Research and Development Authority, FDA, and NIH worked with manufacturers producing both the seasonal and pandemic influenza vaccine. 
After approving seasonal vaccines from six manufacturers during the summer of 2009, FDA approved pandemic vaccines from four manufacturers in September 2009, and a pandemic vaccine from a fifth manufacturer in November 2009. HHS also described work done that allowed for influenza vaccine to be produced more rapidly. For example, FDA developed a technique to assess the sterility of vaccine, reducing the time for testing from 14 days to 5 days. HHS’s written comments also noted the department’s concern that our description of challenges identified by stakeholders could be construed as an endorsement of them. However, as stated in our objectives, scope, and methodology, we examined challenges identified by stakeholders to the development and licensure of influenza vaccines produced using alternative technologies, and we believe our report clearly attributes these statements to the stakeholders. In response to industry concerns, HHS stated that FDA has an excellent record of responding to industry within agreed-upon time frames under applicable law and that FDA’s guidance documents cannot be specific to individual manufacturing processes because these processes are trade secrets. HHS also stated that FDA provides clear guidance to manufacturers regarding the size of clinical trials and meets with sponsors of new vaccines at key stages of the product development process to provide further guidance that is informed by earlier trials. In its comments, DOD agreed with the contents of the draft and noted that it had no substantive or administrative issues with the draft report. We are sending copies of this report to the Secretaries of HHS, DOD, and State and to interested congressional committees. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. The research, development, and review of licensing applications for new influenza vaccine for the U.S. market involve several stages. Manufacturers producing a biological product, of which influenza vaccines are one type, must submit a licensing application for review by the Food and Drug Administration (FDA) in order to market their vaccine in the United States. If FDA approves the application, the vaccine will be licensed for use in the United States. As shown in figure 1, this process can take, on average, a little over 10 years to complete. Both seasonal and pandemic influenza vaccines for the U.S. market are produced using egg-based technology—a complex process that involves growing seed strains in millions of fertilized chicken eggs. As shown in table 7, this process involves a sequence of steps that can take approximately 4 to 5 months to complete. The antigen for an egg-based influenza vaccine—the active substance in a vaccine that provides immunity by causing the body to produce protective antibodies to fight off a particular influenza strain—is derived from strains well matched to the strains in wide circulation. In order for a vaccine to be most effective, it needs to contain enough antigen to stimulate a protective immune response. Egg-based technology has been used to produce influenza vaccine for several decades. Department of Health and Human Services (HHS) officials we spoke with described it as a “tried and true” production technology with which regulators and manufacturers are familiar. 
This technology utilizes fertilized eggs as the medium for producing the vaccine. Additionally, several decades of safety and efficacy data on influenza vaccine produced using egg-based technology are available. However, the timeliness of vaccine production is hindered, in part, by egg-based technology's reliance on seed strain development and growth. Another factor affecting the production timeline is the amount of antigen produced per egg. For example, during the 2009 H1N1 pandemic, vaccine delivery was delayed, in part, because of poorer-than-expected yields of antigen per egg. Also, the amount of influenza vaccine that can be produced depends on the manufacturer's egg supply. It generally takes 12 to 18 months to establish an egg supply large enough to meet the demands of either seasonal or pandemic influenza. Some experts we spoke with expressed concern that although the chicken flocks producing the eggs are kept in secure conditions to prevent contamination, these flocks remain at risk of infection by, for example, the H5N1 avian influenza virus (also known as “bird flu”). Alternative technologies that can be used in producing influenza vaccines include alternative production technologies—such as cell-based and recombinant technologies—as well as the use of adjuvants. While various alternative technologies are in development, in this report we focus on these three because they are the alternative technologies the federal government has primarily funded. These three technologies have the potential to expand the supply or accelerate the availability of both seasonal and pandemic influenza vaccines. Expanding the supply or accelerating the availability of influenza vaccine is particularly important during times of a perceived seasonal vaccine shortage—when vaccine is not available and demand is highest—or during a pandemic, when demand increases because of increased risk of disease and death. 
Expanding the supply or accelerating the availability of influenza vaccine can be done in two ways. The first is to increase the overall amount of vaccine available at the end of the production process; the second is to speed up the production process itself by, for example, reducing or eliminating steps in the process. The key potential benefit of cell-based technology is the ability to increase the overall amount of vaccine available at the end of the production process. This technology for influenza vaccines typically relies on the use of well-established cell lines, such as those originally derived from the kidney cells of monkeys or canines. These cells can exponentially increase in number, allowing for the rapid expansion of the medium used for influenza vaccine production. Additionally, cells can be stored in freezers and prepared for use within days or weeks for large-scale production demands. Vaccines using cell-based technology are licensed in the United States for use against other infectious diseases, such as polio. Both seasonal and pandemic influenza vaccines using such technology are also licensed in other countries, such as those in the European Union, including Germany and Spain. Cell-based seasonal and pandemic vaccines are also licensed for use in Iceland and Norway. Despite the potential benefits of cell-based technology, there are challenges associated with its use. Similar to egg-based technology, cell-based technology relies on seed strain development and growth to obtain the influenza vaccine's antigen. For example, during the 2009 H1N1 pandemic, manufacturers had low production yields in both eggs and cells when they started vaccine production, which resulted in limited supplies for delivery to the public. Also, cell-based technology has not yet been licensed for use with influenza vaccine for the U.S. market. 
Additionally, few manufacturers have established domestic production capacity for influenza vaccine using this technology, and construction costs for cell-based facilities are high. For example, the construction costs for Novartis Vaccine and Diagnostics Inc.'s cell-based facility in Holly Springs, North Carolina, were over $1 billion, of which HHS funded approximately 40 percent and the manufacturer funded the remaining 60 percent. Recombinant technology potentially both increases the overall amount of vaccine available at the end of the production process and speeds up the production process itself. First, this technology can also utilize specialized cells—from mammals or from other sources, such as bacteria, yeast, insects, or plants—that can exponentially increase in number, allowing for the rapid expansion of the medium used for influenza vaccine production. Second, recombinant technology has the potential to speed up the production process because it does not rely on the development and growth of a seed strain to obtain the influenza vaccine's antigen. Instead, antigen is derived from the protein(s) on the surface of the influenza virus or from the virus's genes. Recombinant technology is currently used in U.S.-marketed vaccines against other diseases, such as hepatitis B and the human papillomavirus, so FDA has experience reviewing licensing applications for vaccines produced using this technology. However, influenza vaccine using recombinant technology has not yet been licensed for use in the United States. Although some influenza vaccine has been produced for use and is currently being used in clinical trials, influenza vaccine has not yet been produced on a large scale using this production technology. 
One manufacturer, Protein Sciences Corporation, has submitted a licensing application to FDA for a recombinant seasonal influenza vaccine, but some experts we spoke with said it is unlikely we will know the benefits of this technology in producing influenza vaccine for several years. Adjuvants' antigen-sparing capability has the potential to increase the amount of vaccine available at the end of the production process. Adjuvants—which can be used with influenza vaccines produced using egg-based, cell-based, or recombinant technologies—can enhance the immune response, thereby reducing the amount of antigen needed per vaccine dose. By reducing the amount of antigen needed per dose, adjuvants could increase the overall influenza vaccine supply. Adjuvants have other benefits beyond potentially accelerating the delivery of influenza vaccine (see table 8). Seasonal influenza vaccines administered with adjuvants are licensed for use in other countries for targeted populations, such as the elderly. Adjuvants are licensed for use with seasonal influenza vaccine in other countries, such as those in the European Union, including Belgium and Italy. Adjuvanted seasonal influenza vaccines are also licensed for use in Argentina, Colombia, Hong Kong, Mexico, the Republic of South Africa, New Zealand, and Thailand. Adjuvants were also used with the 2009 H1N1 pandemic vaccine in other countries, including Canada and Malaysia. Although adjuvants have been used in other vaccines licensed for the U.S. market—such as in vaccine against tetanus—FDA has not approved a licensing application for a seasonal influenza vaccine using this technology in the United States; adjuvants were also not used in the U.S. supply of 2009 H1N1 pandemic vaccine. Some experts have noted potential concerns regarding the safety of repeated, annual administration of adjuvants in healthy populations—such as young adults—in a seasonal influenza vaccine. 
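The antigen-sparing arithmetic behind this potential supply increase is simple to sketch. The per-dose antigen amounts below are illustrative assumptions, not figures from this report:

```python
# Toy illustration of adjuvants' antigen-sparing effect. The per-dose
# antigen amounts (15 and 7.5 micrograms) are illustrative assumptions.
bulk_antigen_ug = 1_500_000   # fixed supply of bulk antigen, in micrograms

dose_unadjuvanted_ug = 15.0   # assumed antigen per dose without an adjuvant
dose_adjuvanted_ug = 7.5      # an adjuvant lets each dose use less antigen

doses_without = bulk_antigen_ug / dose_unadjuvanted_ug
doses_with = bulk_antigen_ug / dose_adjuvanted_ug
print(int(doses_without), int(doses_with))  # 100000 200000
```

Halving the antigen needed per dose doubles the doses obtainable from the same production run, which is how an adjuvant could expand overall supply without adding production capacity.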
In addition to the contact named above, Thomas Conahan, Assistant Director; George Bogart; Cathleen Hamann; Mariel Lifshitz; Gay Hee Lee; John Rancourt; and Kristal Vardaman made key contributions to this report. Influenza Pandemic: Monitoring and Assessing the Status of the National Pandemic Implementation Plan Needs Improvement. GAO-10-73. Washington, D.C.: November 24, 2009. Influenza Pandemic: Gaps in Pandemic Planning and Preparedness Need to Be Addressed. GAO-09-909T. Washington, D.C.: July 29, 2009. Influenza Pandemic: Continued Focus on the Nation’s Planning and Preparedness Efforts Remains Essential. GAO-09-760T. Washington, D.C.: June 3, 2009. Influenza Pandemic: Sustaining Focus on the Nation’s Planning and Preparedness Efforts. GAO-09-334. Washington, D.C.: February 26, 2009. Influenza Pandemic: HHS Needs to Continue Its Actions and Finalize Guidance for Pharmaceutical Interventions. GAO-08-671. Washington, D.C.: September 30, 2008. Influenza Pandemic: Efforts Under Way to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic. GAO-08-92. Washington, D.C.: December 21, 2007. Influenza Vaccine: Issues Related to Production, Distribution, and Public Health Messages. GAO-08-27. Washington, D.C.: October 31, 2007. Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007. Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007. Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007. Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007. 
Influenza Pandemic: Applying Lessons Learned from the 2004–05 Influenza Vaccine Shortage. GAO-06-221T. Washington, D.C.: November 4, 2005. Influenza Vaccine: Shortages in 2004–05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005. Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005. Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005.
Production delays for the 2009 H1N1 pandemic vaccine using the current egg-based production technology heightened interest in alternative technologies that could expand the supply or accelerate the availability of influenza vaccine. Within the federal government, the Department of Health and Human Services (HHS) and the Department of Defense (DOD) support the development of technologies that can be used in producing influenza vaccines. HHS's Food and Drug Administration (FDA) reviews licensing applications for new vaccine, and the Department of State is the U.S. diplomatic liaison to the international entity that declares worldwide pandemics. GAO was asked to review federal activities for the development of alternative technologies used in producing influenza vaccine. This report examines (1) federal funding from fiscal year 2005 through March 2011 for alternative technologies and the status of manufacturers' efforts, (2) challenges to development and licensure identified by stakeholders, and (3) how HHS is addressing those challenges. GAO reviewed HHS and DOD documents and funding data. GAO also interviewed stakeholders, including manufacturer representatives, industry associations, and other experts on challenges to development and licensure. GAO interviewed HHS officials on how they are addressing those challenges. From fiscal year 2005 through March 2011, HHS and DOD provided about $2.1 billion in funding for the development of alternative technologies that could potentially expand the supply or accelerate the availability of influenza vaccine. Specifically, HHS and DOD have funded two alternative production technologies--cell-based and recombinant technologies, which produce vaccine in cells instead of eggs--and adjuvants, which can reduce the amount of vaccine needed to stimulate an immune response. 
HHS's funding supports the development of a new influenza vaccine using alternative technologies with the goal of manufacturers submitting licensing applications to FDA. DOD's funding supports the research and development of a technology that can make various vaccines, including influenza vaccines. HHS awarded $1 billion in contracts to manufacturers to develop cell-based technology, with manufacturers making progress toward licensure. HHS and DOD funded $296.5 million in contracts and $86.9 million in technology investment agreements, respectively, for the development of recombinant technology. HHS also awarded about $152 million in contracts for the development of adjuvanted influenza vaccines. Two manufacturers receiving HHS funds plan to submit licensing applications for their adjuvanted vaccines to FDA within the next 2 years. Some stakeholders said low demand, high research and development costs, and regulatory challenges can hinder the development and licensure of new vaccines using alternative technologies. For example, despite the United States using more seasonal vaccine than any other country, some stakeholders told us that low vaccination rates can decrease incentives for manufacturers to develop new influenza vaccines using alternative technologies because there is not sufficient demand for new products. Some stakeholders said high research and development costs can also decrease manufacturers' incentives; however, HHS noted that increased investments in this area have generated a significant interest in this type of research and development. Some stakeholders also told us that some of FDA's guidance documents are not sufficiently comprehensive. FDA officials told us that their guidance documents cannot cover all possible scenarios; thus, they regularly meet with manufacturers to discuss issues and provide advice. HHS is addressing challenges in the development and licensure of new influenza vaccines using alternative technologies. 
For example, HHS intends to fund the establishment of specialized facilities that will provide support and expertise to manufacturers. Additionally, through FDA, HHS plans to facilitate the review of licensing applications for new influenza vaccines using alternative technologies and to enhance FDA's staff expertise. HHS, DOD, and the Department of State reviewed a draft of this report. In commenting on a draft of this report, HHS and DOD agreed with GAO on its findings. The Department of State did not provide comments. HHS provided suggestions to clarify the discussion.
The Airport and Airway Trust Fund was established by the Airport and Airway Revenue Act of 1970 (P.L. 91-258) to finance FAA's investments in the airport and airway system, such as construction and safety improvements at airports and technological upgrades to the air traffic control system. Historically, about 87 percent of the tax revenues for the Trust Fund have come from a tax on domestic airline tickets. The remainder of the Trust Fund is financed by a $6 per passenger charge on flights departing the United States for international destinations, a 6.25-percent charge on the amount paid to transport domestic cargo by air, a 15-cents-per-gallon charge on purchases of noncommercial aviation gasoline, and a 17.5-cents-per-gallon charge on purchases of noncommercial jet fuel. FAA is responsible for a wide range of functions, from certifying new aircraft and inspecting the existing fleet to providing air traffic services, such as controlling takeoffs and landings and managing the flow of aircraft between airports. Over the past decade, the growth of domestic and international air travel has greatly increased the demand for FAA's services. At the same time, FAA must operate in an environment of increasingly tight federal resources. In this context, we have generally supported FAA's consideration of charging commercial users for the agency's services. In particular, we have previously suggested that FAA examine the feasibility of charging fees to new airlines for the agency's certification activities and to foreign airlines for flights that pass through our nation's airspace. Similarly, we have reported our view that the various commercial users of the nation's airspace and airports should pay their fair share of the costs that they impose on the system. In addition, to ensure full cost recovery, we have suggested that FAA consider raising the fees that it charges for the certification and surveillance of foreign repair stations. 
Because the various taxes that make up the Trust Fund are not based on factors that directly relate to the system’s costs, the extent to which the current financing system charges users according to their demand on the system is open to question. For example, two airlines flying the same number of passengers on the same type of aircraft from Minneapolis, Minnesota, to Des Moines, Iowa, at the same time of day will impose the same costs on the airport and air traffic control system. However, because the ticket tax is based on the fares paid, the airline that charges the lower fares in this example will pay less for the system’s use, even though both airlines had the same number of takeoffs and landings and flew the same number of passengers, the same type of aircraft, and the same distance. Motivated by their belief that the current system unfairly subsidizes their low-fare competitors, the nation’s seven largest airlines have proposed that the ticket tax be replaced by user fees on domestic operations. Under the proposal, airlines would pay fees for domestic operations according to the following three-part formula: (1) $4.50 per originating passenger, (2) $2 per seat on jet aircraft with 71 or more seats and $1 per seat on jets and turboprop aircraft with 70 or fewer seats, and (3) $0.005 per nonstop passenger mile. By using two factors in particular—originating passengers and nonstop passenger miles—the formula tends to favor the larger airlines, which operate hub-and-spoke systems, at the expense of the low-fare and small airlines, which tend to operate point-to-point systems. This relationship can best be shown by example. Consider the two possible routings between St. Louis, Missouri, and Orlando, Florida, shown in figure 1. The “hubbing” airline first takes the passenger to a hub, such as Chicago’s O’Hare Airport, to connect to another flight to Orlando. The point-to-point carrier takes the St. Louis passenger nonstop to Orlando. 
The airline that lands at O’Hare to transfer the passenger to another flight to Orlando has twice as many takeoffs and landings as the airline that flies nonstop between St. Louis and Orlando. As a result, the costs imposed by the hubbing airline on the air traffic control system are greater. However, by charging $4.50 per “originating” passenger, the airline that flies the passenger from St. Louis to Orlando via O’Hare would pay the same amount as the airline that flies the passenger nonstop between St. Louis and Orlando, even though the hubbing carrier puts a greater burden on the system. In addition, by charging $0.005 per “nonstop passenger mile”—or the straight-line distance between the points of origin and destination—the formula does not charge the hubbing airlines for the circuitous routings that are common to their hub-and-spoke operations. As a result, the airline transporting a passenger 297 miles from St. Louis to O’Hare and then flying that passenger 1,157 miles to Orlando would be charged the same as an airline flying a passenger nonstop from St. Louis to Orlando, even though the hubbing carrier placed a greater burden on the air traffic control system. Because the seven largest airlines operate hub-and-spoke systems and most low-fare and small airlines operate point-to-point systems, the proposed fee system would shift the financial burden away from the larger airlines and onto their competitors. For example, as figure 2 shows, on the basis of FAA’s traffic forecasts for fiscal year 1997, if the ticket tax were replaced by this proposal, the cost to the nation’s seven largest airlines would decrease by nearly $600 million, while the cost to Southwest Airlines, America West, and other low-fare and small airlines would increase by nearly $550 million. In addition, the coalition’s proposal would charge commuter carriers $1 per seat while charging airlines $2 per seat. 
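The routing-insensitivity of the proposal can be reproduced with a small sketch of its per-passenger components. The $4.50 and $0.005 rates are from the coalition's formula; the 880-mile straight-line St. Louis-Orlando distance is an assumed figure for illustration, and the per-seat component is omitted:

```python
# Per-passenger charge under the coalition's proposed formula (seat
# component omitted). Rates are from the proposal; the 880-mile
# straight-line St. Louis-Orlando distance is an assumption.
RATE_PER_ORIGINATING_PAX = 4.50   # $ per originating passenger
RATE_PER_NONSTOP_MILE = 0.005     # $ per nonstop passenger mile

def passenger_fee(nonstop_miles):
    # "Nonstop passenger miles" is the straight-line origin-to-destination
    # distance, so connections and circuitous routings add nothing.
    return RATE_PER_ORIGINATING_PAX + RATE_PER_NONSTOP_MILE * nonstop_miles

STL_MCO_STRAIGHT_LINE = 880          # assumed straight-line miles
miles_flown_nonstop = 880
miles_flown_via_ohare = 297 + 1157   # 1,454 miles actually flown (report figures)

# Both itineraries are billed on the same straight-line distance:
print(f"${passenger_fee(STL_MCO_STRAIGHT_LINE):.2f}")    # $8.90 for either routing
print(miles_flown_via_ohare - miles_flown_nonstop)       # 574 extra miles flown, no extra charge
```

Whatever the actual straight-line distance, the point holds: the fee is a function of origin and destination only, so the hubbing carrier's extra takeoff, landing, and mileage go unpriced.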
Most major commuter carriers are owned by or affiliated with one of the coalition airlines; Continental Express, for example, is a wholly owned subsidiary of Continental Airlines. As a result, by charging commuter carriers less per seat, the proposal would provide the coalition airlines with an additional benefit. Implementing a proposal that would shift nearly $600 million in costs from one segment of the industry to another could have substantial competitive impacts and needs to be studied first. While the ticket tax might provide low-fare airlines with a competitive advantage, other public policies favor some large carriers. For example, a few large airlines control nearly all the takeoff and landing slots at the four “slot-controlled” airports, which gives them an advantage over their competitors. Simply eliminating the potential “subsidy” to low-fare airlines created by the ticket tax, while leaving in place the other policies that provide some large airlines with a competitive advantage, might result in higher fares and a reduction in service options for consumers. In addition, the proposal as written could shift costs dramatically, affecting regions differently. On the one hand, consumers in regions such as the West and Southwest that have benefited from the entry of low-fare airlines could pay more than they do under the ticket tax. On the other hand, consumers in the East and Upper Midwest, who have not experienced the entry of low-fare airlines to the same extent, could pay relatively less. Nevertheless, under any fee system that incorporated common measures of the system's usage, such as departures and aircraft miles flown, it is likely that the relative share paid by low-fare airlines would increase compared with what they pay now under the ticket tax. In 1995, for example, Southwest accounted for 6.3 percent of the airlines' payments under the ticket tax. 
In that year, Southwest accounted for 10 percent of the industry's departures and 7 percent of the aircraft miles flown. However, if only these two measures were considered, Southwest's share would not increase to the same extent as under the large airlines' proposal, under which Southwest's share of the industry's contribution to the Trust Fund would increase to 10.3 percent. A more precise fee system, however, would account for the costs incurred by FAA in managing the airport and airway system, which vary greatly by the amount, type, and timing of various airline operations. For example, the air traffic control costs imposed by a flight arriving at 5 p.m. at New York's congested LaGuardia Airport—regardless of whether that flight involves a large jet or a commuter aircraft—are much greater than those imposed by a flight arriving at noon at the noncongested airport in Des Moines. Likewise, hubbing operations at the nation's largest airports increase the peak service demands on the airway system and increase FAA's operating and staffing costs. Neither the 10-percent ticket tax nor the largest airlines' proposal accounts for these factors. Determining how best to finance FAA involves complex issues, requiring careful examination. In addition, an evaluation of alternative financing for FAA would need to involve the Department of Transportation's (DOT) Office of Aviation and International Affairs. This office is responsible for evaluating the potential competitive implications of any changes to our aviation system. By changing what each airline pays, any new funding mechanism will have ramifications for airline competition that DOT would be better positioned than FAA to examine for the Congress. Likewise, DOT may also be better positioned than FAA to determine the extent to which a new financing mechanism might otherwise affect the aviation system. 
Recognizing the complexities associated with determining how best to finance FAA, the Congress recently directed that the issues involved be studied further. Specifically, the Federal Aviation Reauthorization Act (P.L. 104-264), enacted in October 1996, requires FAA to contract with an independent firm to assess the agency’s funding needs and the costs occasioned by each segment of the aviation industry on the airport and airway system. This assessment, which the contractor is required to complete by February 1997, will be a critical piece in designing a new fee system if the Congress ultimately decides to replace the ticket tax. The 1996 act also created the National Civil Aviation Review Commission, which is charged with studying how best to finance FAA in light of the contractor’s independent assessment of funding needs and system costs. The commission is to have 21 members—13 appointed by the Secretary of Transportation and 8 appointed by the Congress—and represent “a balanced view of the issues important to general aviation, major air carriers, air cargo carriers, regional air carriers, business aviation, airports, aircraft manufacturers, the financial community, aviation industry workers, and airline passengers.” The commission must report its findings and recommendations to the Secretary of Transportation within 6 months of receiving the contractor’s independent assessment—in other words, by August 1997. After receiving the commission’s report, the Secretary of Transportation is required to consult with the Secretary of the Treasury and report to the Congress by October 1997 on the Administration’s recommendations for funding the needs of the aviation system through 2002. We provided DOT with a draft copy of this report for review and comment. We discussed the draft with DOT officials, including the Deputy Assistant Secretary for Aviation and International Affairs, who stated that the agency was in complete agreement with the report. 
DOT also provided us with two comments, which we incorporated where appropriate. First, the agency noted that the coalition’s proposal also benefits the largest airlines by charging commuter carriers $1 per seat while charging airlines $2 per seat. DOT pointed out that because most of the commuter carriers are owned by or affiliated with one of the coalition airlines, this differential would provide the coalition airlines with an additional benefit. Second, in our draft report, we stated that FAA was completing work on its own cost allocation study, which the agency expected to release by the end of the year. DOT commented, however, that because of the recent congressional mandate that FAA contract with an independent firm to undertake such an assessment, FAA would likely not release its study. We obtained information for this report from (1) documents and data provided by DOT, FAA, and the coalition airlines and (2) our discussions with representatives of the coalition as well as the executives of several large carriers, including the CEO of American Airlines, and representatives of low-fare and other smaller airlines, including the CEO of Southwest Airlines. For our analysis of the implications of reinstating the taxes, we used the rates in effect as of November 1996. For FAA’s funding levels, we used the agency’s enacted fiscal year 1997 budget. We performed our review from June through November 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Transportation; the Acting Administrator, FAA; the Director, Office of Management and Budget; and other interested parties. We will send copies to others upon request. If you or your staff have any questions, please call me at (202) 512-2834. Major contributors to this report are listed in appendix II. 
During fiscal years 1990 through 1996, the Airport and Airway Trust Fund financed 100 percent of three FAA accounts—Grants-in-Aid for Airports (the Airport Improvement Program); Facilities and Equipment; and Research, Engineering, and Development. Also during this period, with the exception of fiscal year 1990, the Trust Fund financed about half of FAA's fourth account—Operations—with the remainder financed by the General Fund. Under FAA's fiscal year 1997 budget, as enacted, the Trust Fund would continue to finance 100 percent of three accounts and would finance one-third of the Operations account if the taxes that finance the Trust Fund are extended beyond December 31, 1996. Table I.1 shows FAA's funding sources for fiscal years 1990 through 1996 and FAA's fiscal year 1997 budget as enacted. [Table I.1: FAA Funding, Fiscal Years 1990-97, dollars in millions; the tabular figures are not legible in this copy.] Under Public Law 104-205, FAA must use up to $75 million in user fees charged for air traffic control and related services to nongovernmental aircraft that fly over but do not take off or land in the United States, in lieu of General Fund financing. If the taxes that finance the Trust Fund are not extended beyond December 31, 1996, the Trust Fund's balance available (referred to as the uncommitted balance) will be below the level needed to finance FAA's fiscal year 1997 budget as enacted. Specifically, the Trust Fund will be about $1 billion short of the funding needed to finance its portion of FAA's fiscal year 1997 budget. Therefore, the total funding commitments that FAA can make are reduced by this amount.
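The roughly $1 billion figure follows from subtracting FAA's estimate of the Trust Fund amount available for FAA (about $4.28 billion) from the enacted Trust Fund share of FAA's budget (about $5.31 billion), both detailed in this appendix. A minimal arithmetic sketch, with amounts in millions of dollars:

```python
# Back-of-the-envelope check of the fiscal year 1997 Trust Fund shortfall.
# Figures are in millions of dollars, taken from FAA estimates cited in
# this appendix; rounded inputs make the result approximate.
enacted_from_trust_fund = 5_310   # FAA budget share enacted from the Trust Fund
available_for_faa = 4_280         # estimated Trust Fund amount available for FAA

shortfall = enacted_from_trust_fund - available_for_faa
print(f"Potential shortfall: about ${shortfall:,} million")

# If the Congress authorizes transfer of roughly $300 million in late-1996
# tax receipts, the shortfall shrinks. FAA's more precise estimate of the
# reduced shortfall is $724 million; the rounded inputs here give ~$730 million.
late_1996_receipts = 300
print(f"Reduced shortfall: about ${shortfall - late_1996_receipts:,} million")
```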
According to FAA's estimates, the Trust Fund could provide about $4.28 billion for FAA's budget and $65 million for non-FAA expenditures, thereby bringing the Trust Fund's total contribution to about $4.35 billion. However, FAA's budget as enacted calls for $5.31 billion from the Trust Fund and $3.26 billion from the General Fund. FAA also estimates that, under current law, the Trust Fund balance available will be $0 by early July 1997 if the taxes are not reinstated (or the tax on airline tickets is not replaced by user fees). Table I.2 shows the Trust Fund's enacted share of FAA and non-FAA budgets and the potential funding shortfall for fiscal year 1997. Also, the authority to transfer the tax receipts from the Treasury to the Trust Fund will expire on December 31, 1996. As a result, some taxes imposed late in 1996 will not be deposited in the Treasury until 1997 and, therefore, cannot be transferred to the Trust Fund. FAA estimates that this amount will total about $300 million. If the Congress provides transfer authority for moving this $300 million to the Trust Fund, then FAA estimates that the Trust Fund balance available to finance FAA would not reach $0 until late July 1997 and the potential shortfall would be reduced to $724 million. FAA officials estimate that in order for the Trust Fund to finance $5.31 billion of FAA's fiscal year 1997 budget, the taxes would need to be reinstated by July 1997. However, according to FAA officials, reinstatement by this date allows for almost no margin of error in the agency's estimates of tax revenue. Consequently, if revenue is less than estimated, congressional action would be required to obtain additional financing from the General Fund. Also, the Trust Fund balance available to finance FAA's fiscal year 1998 budget would depend on when the taxes and transfer authority are reinstated, as shown in figure I.1. Major contributors to this report: Charles R. Chambers, Gerald L. Dillingham, Timothy F. Hannegan, Julian L. King, Robert E. Levin, Francis P. Mulvey, and John T. Noto.
Pursuant to a congressional request, GAO examined the proposal by a coalition of the seven largest U.S. airlines to replace the ticket tax with user fees, focusing on: (1) whether the ticket tax should be replaced by a different fee system; (2) what the potential competitive impacts of the fees proposed by the coalition airlines would be; (3) what factors need to be considered if a new fee system were to be developed; and (4) the implications for the Federal Aviation Administration's (FAA) budget of reinstating or not reinstating the taxes that finance the Airport and Airway Trust Fund. GAO found that: (1) because the ticket tax is based on the fares paid by travelers and not an allocation of actual FAA costs, it may not fairly allocate the system's costs among the users; (2) the coalition airlines' proposal to replace the ticket tax with user fees only incorporates factors that would substantially increase the fees paid by low-fare and small airlines and decrease the fees paid by the seven coalition airlines; (3) the proposal would dramatically redistribute the cost burden among airlines and could have substantial implications for domestic competition; (4) any replacement system for the ticket tax would need to account for the wide range of costs incurred by FAA in managing the airport and airway system; (5) the views of all affected parties, not just any particular group of airlines, would need to be included in assessing the mechanisms for financing the airport and airway system; and (6) Congress established a commission to study how best to meet FAA financing needs, which will help ensure that, in the long term, FAA has a secure funding source, commercial users of the system pay their fair share, and a strong, competitive airline industry continues to exist.
Oil is vitally important to the world and U.S. economy, accounting for nearly 40 percent of world primary energy consumption. As shown in figure 1, although world oil consumption has increased significantly over the past 20 years, oil’s share of primary energy consumption has remained fairly constant. EIA projects similar trends for the next 20 years, with total world energy consumption increasing 2 percent annually through 2025 and oil comprising about 38 percent of all energy consumption in 2025. Oil is also the largest primary source of energy in the United States, accounting for about 40 percent of all energy consumed in 2004. As shown in figure 2, two-thirds of the oil consumed in the United States is used for transportation. About 96 percent of energy used for transportation in the United States comes from oil. The transportation sector is almost exclusively dependent on oil because there are no significant competitive alternatives. EIA projects that transportation will comprise an even larger part of U.S. oil use in the future, about 72 percent in 2030, because it expects the growth in demand for transportation to far exceed increases in fuel efficiency. As shown in figure 3, the United States’ demand for imported crude oil increased rapidly after 1970, when domestic crude oil production peaked. Although the percentage of imported crude oil decreased from about 45 percent in 1977 to about 26 percent in 1985 due to a reduction in demand for oil, imported crude oil increased again to 65 percent by 2004 due to a combination of increases in consumption and decreases in domestic production. The United States created the SPR because the country’s reliance on oil imports makes it vulnerable to disruptions in oil supply. Strategic oil reserves like the SPR are particularly important now because oil market cushions, such as excess oil production capacity and private inventories, have decreased in recent years. 
Although estimates of spare production capacity are uncertain, experts believe that spare production capacity dropped to around 1 million barrels per day in 2004, close to a 20-year low. Additionally, private inventories of oil and oil products have been on a long-term declining trend, in part because of a trend toward just-in-time inventory. The absence of these market cushions means that less oil is available in the market to mitigate price spikes during oil supply disruptions. Thus, a supply disruption that takes even a small amount of oil off the market could cause the price of oil to rise dramatically. One factor limiting excess oil production capacity is recent steep increases in world consumption of oil. Together, the United States and Western Europe accounted for 44 percent of the 80 million barrels of oil per day of world oil consumption in 2003. The United States is the world's largest oil consumer, accounting for about 25 percent of the world's oil consumption, despite having only 5 percent of the world's population. In addition to the high levels of consumption in the United States and Western Europe, oil consumption has also been rising rapidly in Asia and Oceania, as shown in figure 4. For example, according to a recent study by the International Monetary Fund, China and India accounted for 35 percent of incremental oil consumption between 1993 and 2003, even though they accounted for only 15 percent of world economic output over the period. China has overtaken Japan as the second largest oil consumer in the world, second to the United States. Since 1976, the United States has spent about $26.3 billion—$45.2 billion when valued in year 2005 dollars—to build, maintain, fill, and manage the SPR. The largest cost has been the cost of filling the reserve. Since filling began in 1977, $20.0 billion has been spent to obtain oil ($35.1 billion in 2005 dollars).
This amount includes $15.7 billion of oil purchased with funds appropriated from 1977 through 1992, and $4.3 billion of oil received in lieu of government royalty payments since 1999. Since 1999, oil for the SPR has been obtained through the royalty-in-kind transfer program, in which royalties from government oil leases in the Gulf of Mexico are taken in the form of oil, rather than in cash. The Department of the Interior’s Minerals Management Service, which collects the royalties, contracts for delivery of the royalty oil to designated market centers. Because the oil delivered to these market centers often does not meet SPR quality specifications and is distant from the SPR storage sites, DOE awards complementary contracts to exchange royalty oil at the market center for SPR-quality oil delivered to the SPR facilities. However, the logistics of Gulf of Mexico oil production from federal leases limits the rate at which royalty oil can be economically delivered to the SPR sites. The SPR oil is stored in salt caverns at the following four facilities: Bayou Choctaw and West Hackberry in Louisiana, and Big Hill and Bryan Mound in Texas. These caverns range in size from 6 million to 35 million barrels and were created by solution mining, in which water injected into an underground salt formation dissolves the salt and creates a cavern. According to DOE, salt caverns offer the lowest cost, most environmentally secure way to store crude oil for long periods of time. Storing oil in aboveground tanks generally costs 5 to 10 times as much. Also, because the salt caverns are 2,000 to 4,000 feet below the surface, geologic pressure will seal any crack that develops in the salt formation, ensuring that no crude oil leaks from the cavern. An additional benefit is the natural temperature difference between the top of the caverns and the bottom, which keeps the crude oil continuously circulating in the caverns, ensuring that the oil in the cavern is of consistent quality. 
Areas near the Gulf of Mexico were a logical choice for locating the SPR. In addition to the more than 500 salt domes concentrated along the Gulf Coast, many U.S. refineries and distribution points for tankers, barges, and pipelines are available. The four SPR storage areas are connected via pipelines to the Gulf Coast and the Midwest refining regions. Oil can be transferred via tanker to the Louisiana Offshore Oil Port, which is a major facility in the Gulf of Mexico that is connected via pipeline to over 50 percent of U.S. refining capacity. The location of the SPR is less advantageous for distributing oil to or receiving it from the western United States. Past drawdowns of the SPR have occurred for a wide variety of reasons. The SPR has sold oil twice under emergency conditions, 17.3 million barrels in 1991 at the beginning of Operation Desert Storm and 11.0 million barrels in 2005 after Hurricane Katrina. In response to problems ranging from a blocked pipeline to a potential shortage of commercial heating oil stocks, exchanges of crude oil from the SPR with private companies have occurred eight times, ranging in size from 500,000 barrels to 30 million barrels. The largest exchange occurred in the fall of 2000 in response to concerns about low inventories of heating oil in the Northeast. In these exchanges, the borrowing parties returned the amount of oil borrowed plus additional volumes of oil as interest. In two cases, conducted for operational reasons, the SPR exchanged 11.0 million barrels of lower quality oil for 8.5 million barrels of higher quality oil and 2.7 million barrels of crude oil for 2.0 million barrels of heating oil. DOE has also conducted two test sales to demonstrate the readiness of the SPR, in 1985 and 1990. In addition, sales to reduce the federal deficit occurred mainly in 1996.
Recent concerns about filling the SPR and long-standing concerns about its use can be addressed in ways that improve SPR effectiveness, according to numerous energy and oil market experts. Some observers have questioned recent efforts to fill the SPR during tight oil supply conditions, believing that these purchases put upward pressure on oil prices. Others have expressed concerns that the SPR has not been used in disruptions where its use was warranted and, when used, has not been used early enough after a disruption has occurred. In addressing these concerns, experts with whom we spoke suggested alternative practices to consider when filling the SPR to reduce fill costs, as well as various points to consider when deciding whether to use the SPR. While early SPR fill activity focused on establishing an oil reserve large enough to be useful during a supply disruption, more recent fill activity has focused on maximizing long-term protection against disruptions. Although several oil analysts and experts believe that filling the SPR from late 2001 through 2005 during a time of tight supply and demand conditions caused the price of oil to increase by several dollars per barrel, most of the experts with whom we spoke believe that filling the SPR at that time had minimal impact on oil prices because the volume was so small compared with world oil demand. Experts suggested SPR fill practices that could reduce the cost of filling the SPR. They recommended that DOE acquire a fixed dollar value of oil per time period, rather than a fixed volume of oil per time period, and allow industry more flexibility in the timing of oil deliveries to the SPR. Prior to 1984, several pieces of legislation set forth minimum fill rates for the SPR, in an effort to increase the volume of the reserve to a level large enough to be useful during an oil supply disruption. However, the actual rate of fill often fell short of these goals.
Several studies completed around this time reported that, given the SPR’s small size, it should be reserved for severe disruptions since it is a one-time source of crude oil, which must be replenished after a drawdown. They advised that only after the SPR contained a minimum of 250 million to 500 million barrels of oil would it be advisable to use it. In a September 1981 report, we echoed this concern, believing that DOE should not suspend SPR fill, except during severe disruptions, until the SPR reached a minimum threshold size. Furthermore, we stated that, given the importance of the SPR, filling it should be considered a part of U.S. base demand and should not be cut back under tight market conditions. Figure 5 shows the progress in filling the SPR since its inception in 1975. Fill was suspended from September 1979 to September 1980 when oil supplies were disrupted following the Iranian Revolution. The SPR reached a volume of about 500 million barrels in 1985, and filling the reserve slowed considerably after that time. SPR fill was again suspended in 1990 after the Iraqi invasion of Kuwait. The size of the SPR did not significantly increase again until after the September 11 terrorist attacks, when the President ordered DOE to fill the SPR to its 700 million barrel capacity to maximize the long-term protection against potential oil supply disruptions. The President’s statement accompanying the fill order indicated that, although current strategic inventories in the United States and other countries were sufficient to meet any potential near-term supply disruption, filling the SPR to capacity would strengthen the long-term energy security of the United States. The President directed that the SPR be filled in a deliberate and cost-effective manner, principally through royalty-in-kind transfers. From April 2002 to August 2005, DOE added 138 million barrels to the SPR at a cost of $4.3 billion. 
The SPR received oil from the royalty-in-kind program at average rates varying from about 60,000 to 116,000 barrels per day, although fill was suspended twice during this period, including from January to April 2003 in response to the disruption of crude oil supplies from Venezuela. The President's directive to fill the SPR in 2001 proved controversial. Several oil analysts and experts believe that filling the reserve at that time caused the world price of oil to increase by several dollars per barrel. Most of the oil experts with whom we spoke, however, believe that filling the SPR had minimal impact on oil prices, because the volume of oil going to the SPR was very small, less than one-quarter of 1 percent of total world demand. To decrease the cost of filling the SPR, many experts recommend changes in SPR practices, including more flexible timing of oil acquisition. Generally, all fill options must balance the cost of adding oil to the SPR now against the benefits that the additional oil will provide in the future. During the initial filling of the SPR, it was clear that the benefits of adding oil outweighed the immediate costs of doing so. However, now that the SPR holds nearly 700 million barrels of oil, there is a greater interest in finding ways to reduce the acquisition costs. Several experts suggested that DOE should use a predictable, transparent long-term process to acquire oil for the SPR. For example, some experts suggested a dollar-cost-averaging approach, where DOE would acquire a steady dollar value of oil per time period (e.g., day or month) instead of a relatively steady volume, as has generally been the case in recent years. A dollar-cost-averaging approach would take advantage of fluctuations in oil prices, since the same dollar amount will purchase more oil when prices are low than when prices are high.
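The mechanics can be illustrated with a short sketch; the monthly prices below are hypothetical, not actual SPR acquisition data. For the same total outlay, spending a fixed dollar amount each month acquires more barrels than buying a fixed volume, because more of the budget is spent when prices are low:

```python
# Dollar-cost averaging vs. fixed-volume purchasing (illustrative only;
# the prices, in dollars per barrel, are hypothetical).
prices = [22.0, 26.0, 30.0, 25.0, 35.0, 28.0]

# Fixed-volume strategy: buy the same number of barrels every month.
barrels_per_month = 100_000
fv_barrels = barrels_per_month * len(prices)
fv_cost = sum(p * barrels_per_month for p in prices)

# Dollar-cost averaging: spend the same dollar amount every month,
# set here so that total spending matches the fixed-volume strategy.
budget_per_month = fv_cost / len(prices)
dca_barrels = sum(budget_per_month / p for p in prices)

print(f"Fixed volume:        {fv_barrels:,.0f} bbl at ${fv_cost / fv_barrels:.2f}/bbl")
print(f"Dollar-cost average: {dca_barrels:,.0f} bbl at ${fv_cost / dca_barrels:.2f}/bbl")
```

Algebraically, the average cost per barrel under dollar-cost averaging is the harmonic mean of the prices, which never exceeds the arithmetic mean paid by a fixed-volume buyer, and the gap widens as prices become more volatile.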
To evaluate the effect of a dollar-cost-averaging approach on SPR fill cost, we estimated the potential savings of this approach had it been used from October 2001 through August 2005. Our results showed that if DOE had followed a dollar-cost-averaging approach when filling the SPR during that time, it could have saved approximately $590 million while acquiring the same amount of oil. We also ran simulations to estimate potential future cost savings from using a dollar-cost-averaging approach over 5 years. The simulations showed that dollar cost averaging is likely to save money over a range of plausible paths of future oil prices, whether prices are rising or falling and whether price volatility is small or large. The savings due to dollar cost averaging were generally greater when oil prices were more volatile. As an additional measure, some experts suggested that DOE exercise flexibility and react to market conditions when filling the SPR. They said that DOE should not fill the SPR when the oil market is tight or when doing so would significantly tighten the market. DOE officials told us that the department has approved some delivery deferrals that contractors have requested, in particular after the oil workers' strike in Venezuela, but DOE has also turned down some requests. In return for these deferrals, DOE received additional barrels of oil as a premium. From October 2001 through August 2005, payment for deferrals added 4.6 million barrels of oil to the SPR, with a value of approximately $110 million. Some experts suggested that DOE could expand the use of deferrals by allowing oil producers to delay oil delivery to the SPR when they believe that supply and demand are in tight balance and current prices are higher than expected future prices. Under these conditions, it is financially advantageous for oil producers to delay delivery, and producers could provide additional oil to the SPR to pay for the privilege of delaying delivery.
Experts noted that there may be considerations beyond the oil market, such as national security concerns, that would necessitate the delivery of oil to the SPR at a particular time; therefore, DOE would want to exercise its authority to disallow deferrals at times when it is in the national interest that oil deliveries not be delayed. The law allows broad presidential discretion and provides only general guidance for the SPR's use, making its use largely a matter of judgment by the President; members of our group of experts disagreed about the appropriateness of past use decisions. Past drawdowns have been for widely varying purposes, including emergency responses, test sales, and deficit reduction. In addressing use-related issues, experts suggested several points to consider when deciding whether to use the SPR. The President has the primary authority to decide when to use the SPR. The Energy Policy and Conservation Act authorizes the President to use the SPR in the event of a severe energy supply disruption or when required to meet the obligations of the United States to the International Energy Agency. Amendments to this act in 1990 gave the President additional authority to use the SPR in reaction to a circumstance that constitutes or is likely to become a significant shortage, and where action taken would assist in preventing or reducing the adverse impact and would not impair national security. These amendments allow for only limited use of the SPR—no more than 30 million barrels may be sold over 60 days, and no sales may be made if the SPR is below 500 million barrels. In addition to presidential authority, the Secretary of Energy is authorized to carry out test drawdowns and sales or exchanges from the SPR to evaluate the drawdown and sale procedures. The Secretary may not release more than 5 million barrels of oil during such a test.
DOE officials pointed out that they follow a series of progressive steps in responding to a disruption. They can (1) identify relevant inventories and evaluate market impacts (with the help of EIA); (2) defer any ongoing deliveries to the SPR, thereby making this oil available to the market; (3) make exchanges in response to requests from individual companies facing problems; and (4) arrange for competitive exchanges, whereby companies bid for oil from the SPR by promising to replace it with a greater volume of oil at a specified date in the future. DOE officials believe that this graduated approach allows them a flexible and measured response appropriate to the size of the disruption. While the President’s discretion over the release of oil introduces some uncertainty into the market, it also has certain advantages. Members of our group of experts told us that uncertainty around SPR use can be valuable. For example, the President can use the SPR as a bargaining tool in diplomatic negotiations during energy crises, enabling him to encourage behavior by oil-producing nations that could be beneficial to the United States. Members of our group of experts disagreed about the appropriateness of past SPR use decisions. Since the decision about whether the SPR should be used to ameliorate a situation is generally a matter of judgment, experts tend to view past decisions from the perspective of hindsight. For example, several members of our group told us that they believed the oil workers’ strike in Venezuela in 2002 to 2003 was a clear case in which SPR use was appropriate, although the reserve was not used in response to the strike. However, DOE officials stated that oil from the SPR was not needed during the strike. They noted that other oil-producing nations had agreed to increase production, and that the U.S. government allowed oil companies to delay delivery of oil to the SPR—which together added significant quantities of oil to the market. 
Members of our group of experts held a range of views about the timeliness of past use, including the SPR’s first emergency use during the Gulf War in 1991. While some said that reserve use in this instance was timely and showed the market that supply would be available, others contended that the United States did not use the SPR soon enough, when it could have dampened oil price increases and prevented the U.S. economy from slipping into a recession. However, these experts acknowledged the difficulty of disentangling the effects of the war from the effects of the SPR release on oil prices. Group members were generally supportive of SPR use in response to Hurricane Katrina in 2005. Several experts agreed that this use of SPR demonstrated that the government understood its role as one of complementing rather than competing with the market. Despite the lack of clear consensus regarding previous decisions to use the SPR, experts in our group suggested several points that policymakers should consider when deciding whether to use the SPR: (1) that recent increases in the size of the SPR should result in a greater willingness to use it during a disruption, (2) that more extensive experience with the SPR during oil supply disruptions may enable better understanding of the features of each disruption that determine whether SPR use is warranted, and (3) that using the SPR without delay when it is needed will minimize economic damage. DOE officials told us that, while they do not have a formal checklist, they consider all relevant features when considering SPR use during a disruption, including the features noted by our group of experts. First, experts in our group and in interviews noted that the SPR is much larger today than in the past, and that this change allows the SPR to be used with less concern about keeping enough oil in the reserve for future disruptions. 
Members of our expert group pointed out that today’s larger reserve diminishes the value of holding oil back during a disruption as a hedge against possible future disruptions, and they noted greater willingness to use reserves in response to disruptions now than in the past. Second, more extensive experience with the SPR during past disruptions may enable better understanding of the unique features of future oil disruptions that warrant a release of oil from the SPR. In a 1993 report, we stated that U.S. policy emphasized initially relying on free market forces in oil supply disruptions. However, the report observed that this policy provides little specific guidance on how long market forces should be allowed to operate before the SPR is used or what conditions should dictate its use. Experts in our group agreed that the SPR should be used to supply oil during disruptions where the market cannot make up for lost supply. Experts also identified a variety of specific features of disruptions that could help determine when SPR use is warranted. These features included the volume of oil disrupted, the type of oil disrupted, the availability of spare oil production capacity, the source of the disruption and its distance from the United States, and the time of year that the disruption occurs (with implications for gasoline supplies in the summer and heating oil in the winter). Economic experts have described additional points to consider when making decisions about using the SPR during a disruption. Experts noted that not all oil price increases are equally damaging to the economy. Economic research shows that rapid oil price increases, or price shocks, are much more harmful to the economy than oil price increases along a steady upward path. For example, one expert noted that although average world crude oil prices increased by more than $30 per barrel between 2001 and 2005, there was no price shock, and the U.S. 
economy remained strong, growing at about 3.5 percent annually during this period. Under some conditions, decision makers could use monetary policy to partially offset economic damage from an oil price shock. The Federal Reserve might be able to prevent some economic damage by allowing a one-time increase in the money supply to stimulate spending and spur GDP growth. However, not all economists agree that monetary policy would be effective, or that monetary policy could offset the impacts of a disruption without having other negative impacts on the economy. Third, avoiding delay in using the SPR when its use is warranted will minimize economic damage. Expert group members encouraged early use of the SPR as a first line of defense against oil supply disruptions, noting that recent changes in the oil industry—including diminished spare crude oil production capacity, refining capacity, and product inventories—have removed sources of supply security that have covered short-term supply losses in the past. Additionally, some experts believe that much of the harm to the U.S. economy occurs in the early phases of a disruption, before the economy has a chance to adjust to higher prices. Avoiding delay in SPR use is also important because even when spare production capacity is available in the world to take the place of disrupted oil supply, this oil will take time to reach the United States. EIA estimates that the majority of the world’s spare oil production capacity is located in Saudi Arabia and takes about 30 to 40 days to reach the United States. For this reason, experts told us that spare capacity would be unlikely to mitigate the early stages of a domestic disruption or a disruption affecting a nearby oil supplier, such as Venezuela, whose oil takes about 5 to 7 days to reach the Gulf Coast of the United States. At their current capacities, the SPR and international reserves can replace the oil lost in all but the most catastrophic disruptions. 
Doing so protects the economy from significant damage, according to the results of two DOE models, although these models disagree about the magnitude of the avoided damage. Additionally, several factors beyond the SPR's ability to replace oil could decrease or increase the economic benefit of the reserve, such as the compatibility of SPR oil with some U.S. refineries. In June 2006, the SPR contained 689 million barrels of oil that can be released at a maximum initial rate of 4.4 million barrels a day, a rate that can replace about 44 percent of U.S. oil imports. As shown in figure 6, the maximum drawdown rate gradually decreases after 90 days as the storage caverns are emptied. If the SPR is drawn down more slowly, it could release a million barrels of oil per day for nearly 1½ years, or at smaller rates for an even longer period. In addition to the reserves in the United States, members of the International Energy Agency have about 2.7 billion barrels of public and industry reserves, of which about 700 million barrels are government-controlled for emergency purposes. These government-controlled reserves can release a maximum of about 8.5 million barrels of oil and petroleum products per day, diminishing quickly to about 4.5 million barrels per day after 30 days, about 3.5 million barrels per day after 60 days, and slightly more than 1 million barrels per day after 90 days. Reserves of refined petroleum products, such as gasoline or diesel, can be useful during oil supply disruptions, but they are more expensive to store than crude oil. We did not independently verify the potential drawdown rates of international reserves. The SPR, either alone or in combination with these international reserves, can replace the oil lost in four of the six hypothetical disruption scenarios that we developed for this review. The six scenarios are (1) a hurricane in the U.S.
Gulf Coast, (2) a strike among oil workers in Venezuela, (3) an embargo of Iranian oil supply, (4) a terrorism event at an oil facility in Saudi Arabia, (5) closure of the Strait of Hormuz, and (6) a shutdown of Saudi Arabian oil production. For each scenario, we assume that world excess crude oil production capacity and world fuel-switching capabilities, which together total 850,000 barrels per day, are available immediately to help offset a disruption. We also assume that private inventories of crude oil are neutral during a disruption—holders of private inventory neither draw down their inventories nor hoard oil. (See app. II for a more detailed description of our scenarios.) As shown in table 1, the SPR is large enough and has enough drawdown capacity to completely replace the oil lost during our Gulf Coast hurricane and Venezuelan strike scenarios, which reduce world oil supply by 155 million barrels over 6 months and 307 million barrels of oil over 24 months, respectively. The SPR could eliminate these hypothetical disruptions by releasing 24 million and 87 million barrels of oil, respectively, and world spare capacity and fuel switching would make up the remaining 131 million and 220 million barrels. The SPR alone is not large enough to replace all of the oil lost in our Iranian embargo scenario, and it does not have enough drawdown capacity to completely replace the oil lost during our Saudi terrorism scenario. Our Iranian embargo scenario assumes a disruption of almost 1.5 billion barrels of oil over 18 months. Even if the United States were to release all of the oil in the SPR and if excess production capacity and fuel switching were available in the amount assumed here, there would still be a net disruption of slightly more than 300 million barrels. In our Saudi terrorism scenario, the drawdown capacity of the SPR would be insufficient to replace the oil lost during the 1st month of the disruption. 
For the SPR to replace the oil during the 1st month with no assistance from international reserves, maximum SPR drawdown capacity would need to be increased by almost 1 million barrels per day, to a total drawdown capacity of approximately 5.2 million barrels per day. In both of these cases, however, a coordinated international response could replace all of the disrupted oil. Even with a coordinated response, the SPR and international oil reserves are not adequate to replace the disrupted oil from our catastrophic Strait of Hormuz closure and Saudi shutdown scenarios. The drawdown capacity of international reserves is inadequate to replace the very large amount of oil that could be disrupted if the Strait of Hormuz were closed. We assume that a closure of the Strait of Hormuz could disrupt 17 million barrels of oil per day during the 1st month—more than 12 million barrels per day beyond what the SPR could release on its own and more than 4 million barrels per day beyond what could be released during a coordinated international response. In contrast, the volume of oil in international reserves is inadequate to replace the oil lost during our Saudi shutdown scenario. Even if all of the oil in the SPR were used in a unilateral response, the net disruption would still be more than 4.9 billion barrels over 2 years, an amount equal to about 16 percent of the crude oil consumed in the world in 2004. Assuming a coordinated international response, the net disruption would still be over 4.1 billion barrels over 2 years, an amount equal to more than 13 percent of the crude oil consumed in the world in 2004. The SPR can reduce economic damage during oil supply disruptions by replacing some or all of the disrupted oil, moderating the resulting oil price increase and its negative effect on U.S. economic activity, as measured by GDP. 
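The first-month shortfalls in these scenarios follow from simple subtraction of release capacity from the disrupted volume. The following sketch is purely illustrative and uses only the per-day figures quoted above; the variable names are ours.

```python
# Illustrative first-month shortfall arithmetic for the scenarios above.
# All rates are in million barrels per day, taken from figures in the text.
SPR_MAX_RATE = 4.4     # SPR maximum initial drawdown
INTL_MAX_RATE = 8.5    # international government-controlled reserves

def shortfall(disrupted_rate, release_capacity):
    """Unreplaced oil per day; zero when capacity covers the disruption."""
    return max(disrupted_rate - release_capacity, 0.0)

# Strait of Hormuz closure: 17 million barrels/day disrupted in month 1.
hormuz_alone = shortfall(17.0, SPR_MAX_RATE)                  # ~12.6 beyond SPR
hormuz_joint = shortfall(17.0, SPR_MAX_RATE + INTL_MAX_RATE)  # ~4.1 beyond both

# Saudi terrorism scenario: covering month 1 unilaterally would require
# about 5.2 million barrels/day, almost 1 more than the SPR's maximum.
extra_needed = 5.2 - SPR_MAX_RATE                             # ~0.8
```

The same subtraction, applied over a scenario's full duration, yields the net disruption volumes discussed for the Iranian embargo and Saudi shutdown cases.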
As previously noted, DOE uses two different economic models to estimate the impact of oil supply disruptions on oil prices and GDP: one used by the Office of Petroleum Reserves and one used by EIA. We used both of these models to estimate the reduction in economic damage (avoided damage) that could result from releasing oil from the SPR and international reserves during our six hypothetical disruption scenarios. (See app. II for additional description of these models and the assumptions used in our analysis.) Table 2 shows the oil price increases that the Office of Petroleum Reserves’ model estimates for our six disruption scenarios if reserves were not used, if the SPR were used alone, and if the SPR were used as part of a coordinated international response. This model estimates oil prices each month during a disruption and assumes that completely replacing the oil lost in a disruption eliminates the resulting price increase. Thus, this model predicts no price increase in situations where the SPR or international reserves can completely replace the disrupted oil, although experts told us that a price increase would likely occur in this instance due to market psychology. For those scenarios where some, but not all, of the oil can be replaced, the model estimates smaller oil price increases than if reserves were not used. For example, the model estimates that oil prices could rise by up to $47 per barrel during our Saudi terrorism scenario if reserves were not used. However, if SPR oil were released into the market, the estimated maximum price increase would be only $7 per barrel. If oil from international reserves were also released into the market, the model estimates there would be no price increase, because the reserve oil would completely replace the disrupted oil. 
To estimate how much economic damage could be avoided by using the SPR and international reserves during our oil supply disruption scenarios, we first estimated the damage that would occur if no reserves were used. We then estimated the damage to GDP, if any, from the disruptions if the SPR were used, either alone or in conjunction with international reserves. The difference between the estimates with and without reserve use is the avoided damage to GDP resulting from use of the reserve. As shown in table 3, the Office of Petroleum Reserves’ model estimates that the ability of the SPR alone to curb rising oil prices reduces damage to GDP by amounts ranging from $7 billion for our 6-month Gulf Coast hurricane scenario to $142 billion for our 8-month Saudi terrorism scenario. In all but the two smallest scenarios, the model shows that a coordinated international response can provide a greater reduction in damage, ranging from $118 billion for the 3-month closure of the Strait of Hormuz to $201 billion for our 18-month Iranian embargo scenario. In our 24-month Saudi shutdown scenario, the model shows that economic damage of approximately $662 billion occurs even if international reserves are used in response to the disruption. The damage caused by each disruption and the portion of that damage that can be avoided by releasing reserves depend on the nature of the disruption. For example, the SPR and international reserves cannot eliminate all of the economic damage that could be caused by our Strait of Hormuz closure scenario because, even though the duration is short, it involves a disruption of a very large quantity of oil that the reserves cannot replace. Additionally, the models show that replacement of a portion of the oil lost in the Saudi Arabian shutdown scenario results in less benefit to the economy than completely replacing the oil lost in the smaller Iranian embargo scenario. 
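The avoided-damage calculation described above is a simple difference. The sketch below is a minimal illustration; the input figures are hypothetical, since the text reports only the differences, not the underlying with-reserve and without-reserve estimates.

```python
# Minimal sketch of the avoided-damage arithmetic described above.
# Input figures are hypothetical; the report states only the differences.
def avoided_damage(damage_without_reserves, damage_with_reserves):
    """Damage to GDP avoided by using reserves (billions of dollars)."""
    return damage_without_reserves - damage_with_reserves

# A hypothetical pair consistent with the $142 billion avoided-damage
# figure reported for the Saudi terrorism scenario:
example = avoided_damage(150.0, 8.0)   # 142.0
```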
The way in which oil is released from the reserves also impacts how effective the reserves are in preventing damage to GDP. In each scenario, the results previously described include the assumption that release begins immediately and occurs at a steady rate for the entire length of the disruption. The results also include the assumption that the rate of release either completely replaces the oil lost or is the maximum sustainable rate for the entire disruption. Delaying the release of reserves in response to a disruption is harmful in every scenario, and the harm is greater the longer release is delayed. This effect is particularly large in scenarios where more oil is lost at the beginning of the disruption, such as the closure of the Strait of Hormuz or the Saudi terrorism scenarios. Replacing the oil lost during the disruption at the maximum rate possible instead of a steady rate gives a different result only in our largest disruption scenario, the Saudi shutdown. The maximum release strategy is advantageous in this scenario because the model assumes that the economic damage from the disruption is worse at the beginning, before the economy has had a chance to adjust. Since international reserves are emptied to respond to this scenario, releasing more oil at the beginning provides more benefit than releasing at a steady rate. Table 4 shows the oil price increases that the EIA model estimates for our six oil supply disruption scenarios for the same three circumstances described for the Office of Petroleum Reserves’ model: if reserves were not used, if the SPR were used alone, and if the SPR were used as part of a coordinated international response. The EIA model estimates a range of price impacts for each quarter of the disruption, rather than a single value for each month as in the Office of Petroleum Reserves’ model. 
Both models consider the amount of oil disrupted when calculating oil price increases, but the EIA model also estimates the impact of the disruption on market psychology. For example, an EIA official stated that disruptions caused by violent events would have larger price impacts than disruptions caused by peaceful events, such as a strike or natural disaster. Furthermore, the EIA model assumes that even if reserves can replace all of the oil lost in a disruption, oil prices may still increase because of market psychology. For these reasons, in some cases, the EIA model predicts larger price increases when reserves are used than the Office of Petroleum Reserves’ model. For example, for the Saudi terrorism scenario, the EIA model predicts a price increase of $18 to $39 if the SPR were used alone (see table 4), while the Office of Petroleum Reserves’ model predicts a maximum price increase of only $7 (see table 2). As shown in table 5, the EIA model estimates that the ability of the SPR alone to mitigate increases in oil prices reduces damage to GDP by $0.4 billion to $1.0 billion for our Gulf Coast hurricane scenario and by up to $15 billion to $38 billion for our Iranian embargo scenario. As with the Office of Petroleum Reserves’ model, the EIA model also shows that a coordinated international response avoids more economic harm in each scenario, except those where the SPR alone can replace the disrupted oil. As it does with oil price increases, the EIA model estimates a range of GDP damage for each scenario, rather than the single value that the Office of Petroleum Reserves’ model produces. Under every scenario, the EIA model predicts much smaller avoided harm to GDP than the Office of Petroleum Reserves’ model. For example, in the Iranian embargo scenario, the Office of Petroleum Reserves’ model estimates that using international reserves could prevent $201 billion in economic harm, while the EIA model predicts $23 billion to $60 billion in avoided harm. 
This difference occurs primarily because the EIA model assumes that oil price increases cause less harm to GDP, meaning that there is less economic harm for the SPR and other reserves to mitigate. The estimates of the effect of oil price spikes on GDP from the Office of Petroleum Reserves and EIA models are, respectively, near the high end and low end of the spectrum of such estimates in the economic literature. Officials from the Office of Petroleum Reserves and EIA acknowledged that they hold different views about how oil supply disruptions impact the economy. An EIA official also told us that EIA is currently updating its model, although the assumptions about how oil price changes impact GDP have not changed substantially. This discrepancy in results between the two models is potentially problematic because the results of the two models are used to support different decisions about the SPR. The Office of Petroleum Reserves’ model has been used to estimate the net benefits of expanding the SPR, as described in the following section of this report. The larger economic impacts predicted by the Office of Petroleum Reserves’ model would justify a larger SPR than if the model predicted smaller economic impacts. The EIA model is used to estimate the impact of oil supply disruptions and to advise officials about their potential consequences. The smaller economic impacts predicted by the EIA model could lead to recommendations that the SPR not be used as often or for as many oil supply disruptions as would be the case if the model found larger economic impacts. The results of these two models pull decision makers in opposite directions, making it important to clarify the differences between the two models and to ensure that policymakers are aware of the different views within DOE. The purpose of the SPR is to protect the economy from harm during oil supply disruptions by replacing the disrupted oil. 
However, factors beyond the amount of oil that the SPR can replace affect the extent to which the SPR can protect the U.S. economy from damage. For example, during some situations, such as a hurricane, typical transportation routes for oil could be blocked, reducing the benefits of releasing SPR oil. The benefits of releasing SPR oil could also diminish if the type of oil in the SPR is not a good substitute for the disrupted oil, or if refining capacity is damaged. On the other hand, the SPR can provide economic benefits to the United States when it is used as a tool for diplomacy and as a deterrent against intentional disruptions, even when no oil is released. During a drawdown, SPR oil is shipped through marine terminals or pipelines. Shipping time from the SPR to different parts of the country varies, as shown in table 6. The oil pipeline network and marine shipping allow SPR oil to reach every region of the United States, except for the Rocky Mountains. Canada provides the only imported oil to the Rocky Mountain region, and DOE believes that a disruption of Canadian oil is unlikely. The ability of the SPR to reduce economic damage may be impaired if transport of oil to refineries is delayed. For example, the SPR was large enough to replace the oil lost from recent Hurricanes Katrina and Rita, but petroleum product prices still increased dramatically following the hurricanes, in part because power outages shut down pipelines that refineries depend upon to supply their crude oil and to transport their refined petroleum products to consumers. In particular, Colonial Pipeline, which transports petroleum products to the Southeast and much of the East Coast, was not fully operational for a week after Hurricane Katrina. Consequently, short-term gasoline shortages occurred in some places, and the media reported gasoline prices greater than $5 per gallon in Georgia. The crude oils stored in the SPR are compatible with many refineries in the United States. However, some U.S. 
refineries process crude oils heavier than those stored in the SPR. Of the 8.3 million barrels of non-Canadian oil imported into the United States per day in 2004, 3.5 million barrels, or about 40 percent, were heavy oil. Refineries that process heavy oil may have difficulty operating at normal capacity if their supply of heavy oil is disrupted. A December 2005 DOE report identified 74 refineries connected to the SPR that receive non-Canadian oil imports, and the report found that the types of oil currently stored in the SPR would not be fully compatible with 36 of those refineries, or slightly less than 50 percent. DOE estimated that if these refineries had to use SPR oil, U.S. refining throughput would decrease by 735,000 barrels per day, or 5 percent. DOE estimated that production of distillate fuels, such as diesel and jet fuel, would decrease substantially from heavy oil refineries, but DOE estimated that production of gasoline would increase. To improve the compatibility of SPR oil with refineries in the United States, the DOE study concluded that the SPR should contain about 10 percent heavy oil. However, DOE may have underestimated how much heavy oil should be in the SPR to maximize compatibility with refiners and minimize oil acquisition cost. First, DOE determined the least amount of heavy oil that could be added to improve the compatibility of the SPR oil inventory with U.S. refineries. However, because heavy oil is less expensive to purchase than the lighter oils currently stored in the SPR, a cost-benefit analysis may show that a larger amount of heavy oil is beneficial, while still maintaining compatibility with U.S. refining capacity. Second, the DOE report may have underestimated the potential impact of heavy oil disruptions on gasoline production. Several refiners who process heavy oil told us that they would be unable to maintain normal levels of gasoline production if they used only SPR oil. 
For example, an official from one refinery stated that if it used solely SPR oil in its heavy crude unit, it would produce 11 percent less gasoline and 35 percent less diesel. Representatives from other refineries said that they might need to shut down portions of their facilities if they could not obtain heavy oil. A refining industry expert explained that a reduction in gasoline production would likely occur when some heavy oil refineries processed light oil, because the light oil would not provide enough feed to units designed to convert heavier products into gasoline. (See DOE, Office of the Deputy Assistant Secretary for Petroleum Reserves, Strategic Petroleum Reserve Crude Compatibility Study, December 2005.) In addition to disrupting crude oil supplies, disasters such as hurricanes and terrorist acts can disrupt supplies of petroleum products by damaging refineries. Crude oil must be processed in refineries to be useful. Following Hurricanes Katrina and Rita, nearly 30 percent of the refining capacity in the United States was shut down, disrupting supplies of gasoline and other products. Because the SPR contains only crude oil, it cannot replace petroleum products if a disruption in refining occurs. However, some countries in the International Energy Agency hold petroleum products in their reserves, and they released these products after Hurricanes Katrina and Rita. DOE reported that these releases of petroleum products helped reduce prices for gasoline and diesel after the hurricanes. Several members of our group of experts and other experts noted that the SPR has value to the United States economy in addition to physically replacing oil during supply disruptions. First, the ability of the SPR to replace supply during disruptions may deter adverse behavior on the part of oil-producing nations. Since the SPR can replace a large amount of disrupted oil, cutting off supply would not have the intended negative economic consequence. 
Second, the SPR could be used as a negotiation tool to encourage producers to increase oil production when needed. Third, experts told us that they believe the SPR may reduce oil prices by lowering the risk premium sometimes included in the price of oil. Oil prices can increase because of fear of a disruption, and some experts told us that the existence of the SPR may quell this fear. If demand for oil in the United States increases as expected, a larger SPR will be necessary to maintain the economy’s present level of protection from oil supply disruptions. Expansion of the SPR could also be required under the U.S. agreement with the International Energy Agency. In addition, a recent study prepared for DOE shows that the benefits of expanding the SPR to as much as 1.5 billion barrels would exceed the costs over a range of future conditions, although expanding the reserve to this size would take approximately 18 years. However, factors influencing the SPR’s ideal size are likely to change over time, including factors such as oil demand and the likelihood of oil supply disruptions. Future oil demand in the United States has an important impact on the benefits of expanding the SPR, and current projections support the interest in a larger SPR. Under the base case in the EIA’s most recent Annual Energy Outlook, published in February 2006, U.S. demand for petroleum will rise from 21.1 million barrels per day in 2005 to 23.6 million barrels per day in 2015 and 26.1 million barrels per day in 2025, increases of 12 percent and 24 percent, respectively. As a result, the volume of imported oil and petroleum products is projected to increase over time to meet this demand, from 12.5 million barrels per day in 2005 to 13.2 million barrels per day in 2015 and 15.7 million barrels per day in 2025. The amount of protection that the SPR provides to the U.S. economy is generally measured in days of net import protection. 
The SPR contained enough crude oil in 2005 to offset about 58 days of imports. Using the most recent EIA forecast, we calculate that the net import protection that the SPR provides at its current size will decrease to 53 days in 2015 and 45 days in 2025. The United States’ agreement with the International Energy Agency could also require an expanded SPR as imports of oil and oil products increase, if private inventories do not increase enough to cover the difference in demand. As we previously mentioned, the United States agrees to hold inventories of oil and petroleum products totaling 90 days of net imports as part of its obligation to the International Energy Agency, and the United States meets its obligation with a combination of public and private inventories. Privately held inventories of oil and petroleum products vary, but in 2005 DOE assumed these inventories could offset 58 days of imports. In total, the SPR and private inventories could offset 127 days of imports in 2005. As shown in figure 7, DOE estimates that without SPR or private inventory expansion, the United States will drop below its 90-day stockholding obligation in 2025. With the expansion of the SPR to 1 billion barrels included in the Energy Policy Act of 2005, DOE estimates that the United States will remain in compliance with its 90-day obligation through 2030. As figure 7 shows, the number of days of net import protection provided by private inventory of oil and petroleum products has generally decreased since the mid-1980s, and DOE officials expect this trend to continue. Holding inventory is costly to private companies, so they have an incentive to keep their inventory as low as possible. To evaluate the costs and benefits of expanding the SPR to a capacity of up to 1.5 billion barrels, DOE’s Oak Ridge National Laboratory (ORNL) prepared a study for DOE in late 2005. 
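The days-of-import-protection metric is a straightforward division of reserve volume by daily net imports. The sketch below is illustrative only and uses the import forecasts quoted above; it yields roughly 55, 52, and 44 days rather than the report's 58, 53, and 45, likely because the report's figures rest on a slightly different import basis that is not stated in the text.

```python
# Illustrative check of "days of net import protection":
#   days = SPR volume / net imports per day.
# Import figures are the EIA forecast values quoted in the text.
SPR_MBBL = 689.0                                    # million barrels
imports_mbd = {2005: 12.5, 2015: 13.2, 2025: 15.7}  # million barrels/day

days_of_protection = {yr: SPR_MBBL / mbd for yr, mbd in imports_mbd.items()}
for yr in sorted(days_of_protection):
    print(f"{yr}: about {days_of_protection[yr]:.0f} days")
```

The declining day counts follow directly from a fixed reserve volume divided by rising import volumes.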
This study relies on the same model that the Office of Petroleum Reserves used, as discussed in the previous section, to estimate the reduction in economic damage from using the SPR during oil supply disruptions. The study shows that the benefits of expanding the reserve to 1.5 billion barrels exceed the costs over a 45-year horizon. The study estimates the costs and benefits of SPR expansion through 2050 because of the long construction time for additional SPR capacity and the large up-front investment required. The costs of constructing and filling the additional capacity dominate the analysis until 2020, while benefits of the additional capacity accrue from 2021 through the end of the analysis in 2050. The study uses EIA forecasts of oil price and demand through 2025, and a linear extrapolation of these forecasts from 2025 through 2050. Any analysis of costs and benefits so far in the future is inherently uncertain. However, this study is the only one of its kind to analyze the future net benefits of SPR expansion. The costs of expanding the SPR to 1.5 billion barrels consist of capital costs to acquire and construct the facilities, the cost of crude oil to fill the new capacity, and ongoing maintenance and security costs for the additional facilities. DOE estimates that expanding the physical structure of the SPR to 1.5 billion barrels would take approximately 18 years and cost approximately $5.4 billion, in 2004 dollars. DOE assumed that expanding the reserve to this size would involve purchasing or constructing additional storage capacity at three existing SPR sites: West Hackberry and Bayou Choctaw in Louisiana, and Big Hill in Texas. The remaining expansion would be accomplished by constructing new storage sites at three sites selected from five potential sites in Texas, Louisiana, and Mississippi. The ORNL study’s authors estimate the cost of filling the additional SPR capacity at $23.0 billion, in 2004 dollars. 
This estimate is based on the base- case oil price forecast from the 2005 Annual Energy Outlook because the 2006 volume was not yet published when the ORNL study was completed. The 2006 Outlook forecasts higher crude oil prices than the 2005 Outlook. Using the most recent base-case forecast, the ORNL authors estimated a fill cost of $36.2 billion in 2004 dollars. These calculations assume that the new SPR capacity is filled as it is completed at a maximum fill rate of 100,000 barrels per day, a fill rate achievable using the royalty-in-kind program. The ORNL study does not separately consider the costs and benefits of the expansion to 1 billion barrels authorized in the Energy Policy Act of 2005. DOE estimates that expanding to this size would take approximately 15 years and cost at least $1.3 billion, in 2004 dollars, based on selection of the lowest-cost expansion options. This cost includes, as we previously described, purchasing or constructing additional capacity at three existing SPR sites and constructing a new storage site at one of the five potential locations. We estimate that filling the additional capacity would cost approximately $13.4 billion in 2004 dollars, using the base-case cost estimate in the 2006 Annual Energy Outlook. The ORNL study estimates that the benefits of expanding the reserve to 1.5 billion barrels exceed the costs over a range of assumptions about future demand and oil prices. Expanding the SPR to 1.5 billion barrels is estimated to be cost-beneficial for each of the demand and world oil price forecasts in EIA’s 2005 Annual Energy Outlook. The 2005 Outlook contains four forecasts of the world oil market: a base-case forecast, a lower-price forecast, and two higher-price forecasts. A different level of oil demand is associated with each of these price forecasts. 
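A back-of-the-envelope on the fill-cost figures above: the additional volume and implied per-barrel prices below are our inferences from the totals quoted in the text, not numbers stated in the ORNL study.

```python
# Inferred arithmetic behind the ORNL fill-cost figures (illustrative).
CURRENT_MBBL = 689.0     # million barrels in the SPR (June 2006)
TARGET_MBBL = 1500.0     # expansion target, million barrels
extra_mbbl = TARGET_MBBL - CURRENT_MBBL             # 811 million barrels

implied_price_2005 = 23.0e9 / (extra_mbbl * 1e6)    # ~$28/bbl (2005 Outlook)
implied_price_2006 = 36.2e9 / (extra_mbbl * 1e6)    # ~$45/bbl (2006 Outlook)

# At the assumed maximum royalty-in-kind fill rate of 100,000 barrels/day:
fill_days = extra_mbbl * 1e6 / 100_000              # 8,110 days (~22 years)
```

The simple volume-over-rate fill time exceeds the 18-year construction estimate; presumably fill proceeds in parallel as capacity is completed, as the study assumes.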
The estimated net benefits of expanding the SPR are greatest in the EIA forecast when oil demand is highest and oil prices are lowest, and least when oil demand is lowest and prices are highest. The ORNL study used the 2005 forecasts because, as we previously mentioned, it was completed before the 2006 Outlook was published. The 2005 Outlook forecasts higher oil demand and lower oil prices than the 2006 edition, but the author of the ORNL study noted that the highest-price case included in the 2005 report closely resembles the 2006 base case. Thus, SPR expansion appears to be cost-beneficial for the 2006 base-case forecast, but the study does not include oil prices as high as those in the 2006 high-price forecast, which would tend to decrease the benefits of a larger reserve. Beyond assumptions about future oil demand and price, the ORNL study makes a number of additional assumptions, including important assumptions about the probability of disruptions and the impact of oil price increases on GDP. The likelihood of oil supply disruptions in the future is uncertain and difficult to assess. The ORNL study considers two different estimates of the probability of oil supply disruptions: one that DOE created in 1990 and a second that the Stanford Energy Modeling Forum created in 2005. The benefits of expanding the SPR to 1.5 billion barrels exceed the costs for both disruption probability estimates, but the benefits are larger for the 2005 Stanford Energy Modeling Forum estimate because this estimate (1) considers longer disruptions than those considered in the 1990 estimate and (2) recognizes that excess capacity will not be available from a part of the world where supply is disrupted. The measure of how much a given increase in oil price reduces GDP is known as the GDP elasticity of oil price. GDP loss avoided when the SPR is used during oil supply disruptions is a measure of the benefit of the SPR. 
The ORNL study used a range of GDP elasticity estimates and the results of the model runs indicate that, over that range, expanding the SPR is cost-beneficial. Some economists, however, believe that the GDP elasticity is lower than the bottom of the range of elasticity estimates used by the ORNL study. For example, the model we described in the previous section that EIA uses to estimate the impacts of oil supply disruptions uses values for this GDP elasticity derived from the Global Insight Macroeconomic Model that are one-quarter to one- half the size of the smallest value considered in the ORNL study. A smaller value for the GDP elasticity would reduce the calculated benefits of expanding the SPR. Many factors influence the ideal size of the SPR, including world demand for oil and the probability and potential size of oil supply disruptions. Although current projections anticipate increasing future demand for oil in the United States and world, future oil demand conditions are uncertain. Predicting future demand is difficult because it depends on many factors, including the rates of economic growth, the price of oil, policy choices, and technology changes. The rate of world economic growth strongly influences oil demand. Strong economic growth in China has increased its demand for oil and petroleum products, contributing to rising world oil prices since 2004. In that year, China became the world’s second largest consumer of oil, behind the United States, and its demand for oil grew at an annual rate of 15 percent. Conversely, the financial crisis in Asia in mid-1997 dramatically slowed the rate of oil demand growth in the region at that time, and oil demand even decreased between 1997 and 1998 in some countries. This change in demand contributed to lower oil prices in 1998 and early 1999, according to some experts. Future demand for oil will also depend on its price. 
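The GDP elasticity concept defined above can be illustrated with a two-line calculation. The elasticity values and GDP figure below are hypothetical, chosen only to show why a smaller elasticity implies smaller estimated benefits from the reserve.

```python
# Illustration of the "GDP elasticity of oil price" concept:
#   percent change in GDP ~ elasticity x percent change in oil price.
# All input values here are hypothetical, not drawn from either DOE model.
def gdp_loss(gdp_billions, price_increase_pct, elasticity):
    """Estimated GDP loss (billions of dollars) from an oil price spike."""
    return gdp_billions * (price_increase_pct / 100.0) * elasticity

US_GDP = 12_000.0   # billions of dollars (illustrative, mid-2000s scale)

# The same 50 percent price spike under two hypothetical elasticities:
loss_high = gdp_loss(US_GDP, 50.0, 0.05)  # ~$300 billion of damage
loss_low = gdp_loss(US_GDP, 50.0, 0.02)   # ~$120 billion of damage
```

With the smaller elasticity there is less damage for the SPR to avoid, so the calculated benefit of expansion shrinks proportionally, which is why EIA's lower elasticity values would reduce the ORNL study's estimated net benefits.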
As we previously described, crude oil prices are set in the world marketplace, and are largely outside the control of U.S. policymakers. High oil prices can encourage conservation and investment in fuel-efficient technologies and alternative fuels, reducing demand, while low oil prices can have the opposite effect. Members of our group of experts suggested several policy choices that might diminish growth in U.S. demand for oil. First, they suggested that research and investment in alternative fuels might reduce the growth of future U.S. oil demand. Vehicles that use alternative fuels, including ethanol, biodiesel, liquefied coal, and fuels made from natural gas, are now generally more expensive or less convenient to own than conventional vehicles, because of higher vehicle and fuel costs and a lack of refueling infrastructure. Alternative-fuel vehicles could become more viable in the marketplace if their costs and fuel delivery infrastructure become more comparable to vehicles fueled by petroleum products. Second, expert group members suggested that greater use of advanced fuel-efficient vehicles, such as hybrid electric and advanced diesel cars and trucks, could reduce U.S. oil demand. The Energy Policy Act of 2005 directs the Secretary of Energy to establish a program that includes grants to automobile manufacturers to encourage domestic production of these vehicles. Third, several members of our group of experts suggested improving the Corporate Average Fuel Economy (CAFE) standards to curb demand for petroleum fuels in the United States. After these standards were established in 1975, the average fuel economy of new light-duty vehicles improved from 13.1 miles per gallon in 1975 to a peak of 22.1 miles per gallon in 1987. More recently, the fuel economy of new vehicles in the United States has stagnated at approximately 21 miles per gallon. 
In March 2006, the administration announced new CAFE standards for light trucks, including minivans and sport-utility vehicles; these standards cover larger vehicles that were not regulated under past standards. Other experts have questioned the need for enhanced CAFE standards, noting that today’s higher gasoline prices will bring about more efficient use of gasoline. Additionally, studies from the Congressional Budget Office suggest that a tax on gasoline could reduce demand at lower cost to the economy than enhanced CAFE standards. The size of the SPR needed to protect the U.S. economy also depends on the likelihood of oil supply disruptions. A number of factors in today’s energy market cause particular concern, including a reduction in global surplus oil production capacity in recent years, the fact that much of the world’s supply of oil is produced in relatively unstable regions, and rapid growth in world oil demand that has led to a tight balance between demand and supply. However, factors influencing disruption probability are likely to change over time. As we described in the previous section, international reserves augment the SPR’s ability to replace oil during supply disruptions. Since a release of oil anywhere in the world during a disruption can lower oil prices everywhere, strategic reserves in other countries are beneficial to the United States and influence the SPR’s ideal size. Along these lines, some members of our group of experts stressed the importance of international reserves to U.S. oil security and suggested that the United States and the International Energy Agency should encourage construction of strategic reserves abroad to be used during oil supply disruptions and should offer technical assistance to countries that want to construct such reserves. Officials from DOE and the International Energy Agency described efforts to support construction of reserves in other countries, including sponsoring workshops and providing other assistance. 
Experts pointed out that encouraging the construction of strategic reserves is particularly important in developing countries that are significant oil consumers and that are not currently members of the International Energy Agency, such as China. EIA forecasts that through 2025, demand in China will increase at a much faster rate than demand in more developed countries. Projections of future oil demand and oil market conditions are inherently uncertain, but these projections are key to any estimate of the optimal or necessary size of the SPR. If demand for oil grows as projected, keeping the SPR at its current size may put the economy at greater risk from the negative effects of oil supply disruptions. However, the estimates of world oil demand used in current studies could be too high or too low, resulting in high or low estimates of the SPR’s optimal size. Therefore, as time passes and oil markets change, periodic reassessments by DOE of the appropriate size of the SPR could be helpful as part of the nation’s long-term energy security planning. The SPR is a valuable asset for protecting the U.S. economy, providing benefits as a source of oil during supply disruptions and as a tool of diplomacy in foreign policy discussions. Our work shows that the SPR, particularly in conjunction with reserves held by the other countries of the International Energy Agency, can replace the oil lost during all but the most catastrophic disruption scenarios and, thus, can reduce the negative consequences of oil supply disruptions on the U.S. economy. However, our work also describes issues that could impact the cost and effectiveness of the SPR, including the conditions under which the reserve is filled, how DOE estimates the economic impacts of using the reserve, and the type of crude oil in the reserve. Expanding the reserve makes sense and will be necessary to maintain the economy’s present level of protection if demand for oil in the United States increases as expected. 
However, factors that influence the ideal size of the SPR are likely to change over time and will warrant periodic reassessments. Since the SPR’s inception, it has been filled and used in response to world events and changing conditions. Although some experts claimed that acquiring oil for the SPR after the terrorism events of September 2001 caused substantial increases in oil prices, the majority of experts we talked with believe that this increase was minimal because the volume of oil going to the SPR was very small relative to world oil demand. Experts believe that changes in SPR practices—including following a “dollar-cost-averaging” approach, where the government acquires a fixed dollar value of oil for the SPR over a specified time period, and allowing oil producers more flexibility in the timing of delivery for oil acquired for the SPR—could reduce the future cost of filling the SPR. Different parts of DOE have very different opinions on the amount of economic harm oil supply disruptions can cause and, thus, implicitly about the ideal size and use of the SPR. The estimates of the effect of price spikes on GDP that these different parts of DOE use are, respectively, near the high end and low end of the spectrum of such estimates in the economic literature. The two models have been used to support different kinds of decisions—the Office of Petroleum Reserves’ model has been used to support decisions about whether to expand the SPR, while the EIA model has been used to advise policymakers about the potential economic consequences of oil supply disruptions. Clarifying the differences between these models and how the models are used to provide policy advice would help ensure that DOE provides consistent, transparent advice about the size and use of the SPR. The SPR protects the economy during oil supply disruptions by replacing the oil lost. 
For the SPR to be most effective, refiners need to be able to efficiently use the oil in the reserve in the absence of other sources of supply. The two types of crude oil currently stored in the SPR can be effectively used by most refineries during a supply disruption, but the lack of heavy sour oil in the SPR poses problems to refiners who use this type of oil. Adding some heavy sour oil to the SPR could provide a source of supply to these refiners during a disruption, while still leaving enough oil of other types for other refiners. A 2005 DOE study supports this finding, concluding that separately storing approximately 10 percent heavy sour crude in the SPR could provide oil supply to refiners who process heavy sour oil during a disruption and better protect the economy. Additionally, adding some heavy sour oil to the SPR could decrease the cost of filling the SPR, since this oil is generally less expensive than the lighter grades currently stored in the reserve. Although another 2005 study for DOE shows that expanding the SPR could be warranted, factors influencing the ideal size of the SPR are likely to change over time. Many factors influence the ideal size of the SPR, including oil demand levels and the likelihood of oil supply disruptions. Because these factors are very dynamic, decisions about expanding the SPR will always be made under uncertainty. Nonetheless, as the world changes, periodically revisiting decisions about SPR size would allow policymakers to use new information to refine their views on the SPR’s proper size. The Secretary of Energy should take the following four steps to improve the operation of the current SPR and to improve decisions surrounding the SPR’s use and expansion. 
Specifically, the Secretary should:

Study how to best implement experts’ suggestions to fill the SPR more cost effectively, including acquiring a steady dollar value of oil for the SPR over the long term, rather than a steady volume, to ensure a greater volume of fill when prices are low and a lesser volume of fill when prices are high, and providing industry with more flexibility in the royalty-in-kind program to delay oil delivery to the SPR during times when supply and demand are in tight balance and current prices are higher than expected future prices.

Conduct a new review of the optimal oil mix in the SPR that would examine the maximum amount of heavy sour oil that should be held in the SPR, in addition to the minimum amount determined in DOE’s prior report. The Secretary should ensure that DOE, at a minimum, implements its own recommendation to have at least 10 percent heavy sour oil in the SPR.

Clarify the differences in structure and assumptions between the models used by the Office of Petroleum Reserves and EIA, and clarify to policymakers how the models are used when providing advice to Congress and the executive branch.

Periodically reassess the appropriate size of the SPR in light of changing oil supply and demand in the United States and the world.

We provided a draft of this report to DOE for review and comment. DOE generally agreed with the conclusions and recommendations presented in the draft report, but provided additional information regarding the implementation of two of our recommendations. Additionally, DOE explained EIA’s efforts to update its model of the economic impacts of oil supply disruptions. In reviewing our draft report, DOE also provided technical and clarifying comments, which we incorporated as appropriate. DOE’s written comments are reproduced in appendix III. 
In response to our recommendation to study how to implement experts’ suggestions to fill the SPR more cost effectively, DOE noted that decisions on when to acquire oil are extremely complex and subject to many strategic and tactical considerations in addition to cost. We agree that SPR oil acquisition decisions must consider cost, market conditions, national security concerns, and other issues. DOE also stated that dollar cost averaging as a means to improve the cost-effectiveness of SPR fill could be employed only when DOE is purchasing oil, and noted that recent oil acquisition has been accomplished by the transfer of royalty oil from the Interior Department. However, we believe that dollar cost averaging when acquiring oil through the royalty-in-kind program is possible, although it would require that DOE vary the amount of oil it accepts from royalties and perhaps purchase some oil at times of low prices. Because of the potential for cost savings, we continue to believe that DOE should study such an approach. Finally, regarding this recommendation, DOE stated that it believes that the value of deferring oil deliveries to the SPR during the period of 2002 to 2004 would have been less than $590 million. To clarify, we did not attempt to value deferrals that DOE might have approved during this time period. Instead, the $590 million of potential savings referred to in the report reflects the potential savings from applying a dollar-cost-averaging approach from October 2001 through August 2005, not the savings that could have occurred from deferring oil delivery. In response to our recommendation to consider storing heavy sour oil in the SPR, DOE stated that it does not believe the advantages of holding a heavier crude stream would justify replacing any of the current inventory. Instead, it believes that studying and implementing this recommendation should wait until the SPR is expanded. 
Neither our work nor DOE’s recent study explored the costs and benefits of adding heavy sour oil to the SPR. We believe that DOE should study the costs and benefits of adding heavy sour oil with and without SPR expansion. Without such analysis, DOE does not have data to determine whether replacing any of the current inventory with heavy sour oil is economically justified. Regarding the last two recommendations, DOE agreed that officials will work together to better articulate the different approaches and perspectives contained in their modeling of the effects of oil supply disruptions on the economy, and committed to periodic reassessments of the SPR’s ideal size. DOE also described an ongoing update of the EIA model for assessing the impacts of supply disruptions. The new model is more complex than the older model, but according to EIA, its estimates of the GDP impacts of supply disruptions will remain smaller than those estimated by the Office of Petroleum Reserves’ model. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 21 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Energy, and other parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report or need additional information, please contact me at (202) 512-6877 or wellsj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made major contributions to this report are listed in appendix IV. 
We addressed the following questions during our review: (1) Based on past experience, what factors do experts recommend be considered when filling and using the Strategic Petroleum Reserve (SPR)? (2) To what extent can the SPR protect the U.S. economy from damage during oil supply disruptions? (3) Under what circumstances would an SPR larger than its current size be warranted? In addressing these objectives, we conducted a comprehensive literature review of economic and public policy material relevant to the SPR’s fill and use, and to its ability to provide energy security for the U.S. economy. To identify articles for our literature review, we searched databases using key terms. We also obtained recommended reading lists of studies from several experts on issues related to the questions we addressed. We considered the methodological soundness of the articles and studies included in our literature review and determined that the findings of these studies were sufficiently reliable for our purposes. In addition, we conducted interviews with academics and experts, as well as industry representatives and officials from several offices within the Department of Energy (DOE), including the Energy Information Administration (EIA) and the Office of Petroleum Reserves. We also conducted interviews with academics and experts at institutes that study energy security issues. We selected these individuals on the basis of their expertise in energy security and SPR policy as represented by their presentations or publications. We present data and forecasts from EIA that have been deemed sufficiently reliable for our purposes. Additionally, we contracted with the National Academy of Sciences to convene a group of experts to collect opinions on the impacts of past SPR fill and use and on recommendations for the future, as well as on the benefit of the SPR in reducing economic losses in the event of oil supply disruptions. 
We worked closely with the National Academies to identify and select 13 group members (see table 7) who could adequately respond to our general and specific questions about current practices for filling and using the SPR and about the economic benefit the SPR could provide at its current size and at a larger size. In keeping with National Academies’ policy, the group members were invited to provide their individual views, and the group was not designed to reach a consensus on the issues that we asked them to discuss. The group members convened at the National Academies in Washington, D.C., on December 1, 2005. The views expressed by the group members do not necessarily represent the views of GAO or the National Academies. After the group of experts met, we analyzed a transcript of the discussion to identify principal themes and group members’ views. Although we were able to secure the participation of a balanced, highly qualified group of experts, the group was not representative of all potential views. Nevertheless, it provided a rich dialogue on current practices for filling and using the SPR and on what considerations are pertinent to identifying the best fill and use policies, as well as on how the SPR, at its current size and at a larger size, can protect the economy from significant losses in the event of oil supply disruptions. To learn what factors experts recommend be considered when making decisions about SPR fill and use, we reviewed records and reports from DOE and the International Energy Agency. We also reviewed available literature on the political and economic implications of various ways of filling and using the SPR, and interviewed experts from government, academia, and private industry on issues of SPR fill and use. To estimate the potential savings of using a dollar-cost-averaging approach to fill the SPR, we calculated the cost of using this approach for SPR oil acquisitions between October 2001 and August 2005. 
In addition, we ran simulations to project potential savings from a dollar-cost-averaging approach going forward over 5 years. Specifically, we evaluated 12 possible scenarios for future oil prices. First, starting from an initial price of $70 per barrel, we allowed prices to increase or decrease on average by varying degrees—the price paths increased or decreased at average rates of 1, 5, and 10 percent per year. Second, for each of these 6 possible price paths, we allowed prices to fluctuate to account for potential price volatility—for each of the 6 possible price paths, we allowed for a low- and high-price volatility case. Specifically, prices for each month were drawn randomly from a normal distribution, with standard deviations of $15 for the low volatility case and $50 for the high case. For each of these 12 scenarios, we then simulated future prices for 60 months and compared the average price per barrel under dollar cost averaging versus acquiring oil at a steady rate. We ran 1,000 simulations for each of the 12 scenarios and found that in all but 10 of the resulting 12,000 simulations, dollar cost averaging saved money. These simulations are not intended to measure the magnitude of savings. To do so would require using actual projections of oil prices and price volatility, something that was beyond the scope of this report. We did not independently verify information about security, drawdown rates, or other operational factors of the SPR or other strategic reserves held by countries that belong to the International Energy Agency. To analyze the ability of the SPR to reduce economic damage caused by oil supply disruptions, we present the results of two DOE models used to estimate the reduction of harm to U.S. gross domestic product (GDP) that would result from releasing oil from the SPR and international reserves during six hypothetical oil supply disruption scenarios. 
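The simulation design described above can be sketched in a few lines. This is an illustrative sketch, not the model GAO used: the monthly compounding of the annual drift, the $1 price floor, the fixed monthly budget, and the random seed are our assumptions. It also shows why nearly all of the 12,000 runs favored dollar cost averaging: buying a fixed dollar amount each month pays the harmonic mean of monthly prices, while buying a fixed volume pays the arithmetic mean, and the harmonic mean can never exceed the arithmetic mean for positive prices.

```python
import random

def simulate_prices(start=70.0, annual_drift=0.01, sd=15.0, months=60, rng=None):
    """One hypothetical monthly price path: steady percentage drift plus
    normally distributed noise, floored at $1 so prices stay positive."""
    rng = rng or random.Random()
    prices, level = [], start
    for _ in range(months):
        level *= 1 + annual_drift / 12      # spread the annual drift over 12 months
        prices.append(max(rng.gauss(level, sd), 1.0))
    return prices

def average_price_paid(prices, dollar_cost_averaging, monthly_budget=70e6):
    """Average $/barrel paid over a price path under each acquisition rule."""
    if dollar_cost_averaging:
        barrels = sum(monthly_budget / p for p in prices)  # fixed dollars buy more barrels when cheap
        return monthly_budget * len(prices) / barrels      # equals the harmonic mean of prices
    return sum(prices) / len(prices)                       # fixed volume pays the arithmetic mean

rng = random.Random(7)
path = simulate_prices(rng=rng)
dca = average_price_paid(path, dollar_cost_averaging=True)
steady = average_price_paid(path, dollar_cost_averaging=False)
print(f"dollar cost averaging: ${dca:.2f}/bbl  steady volume: ${steady:.2f}/bbl")
```

Repeating this comparison across the drift and volatility combinations the report describes reproduces the qualitative result: dollar cost averaging yields a lower or equal average acquisition cost on essentially every path.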
Oak Ridge National Laboratory (ORNL) produced one of these models under contract with DOE’s Office of Petroleum Reserves. ORNL officials produced model results for us. EIA produced the second model. We produced model results using the EIA model, and then verified these results with EIA officials. (See app. II for a more detailed discussion of the hypothetical oil supply disruption scenarios and the economic modeling effort.) Additionally, we conducted semistructured interviews with representatives from the refining industry. We spoke with representatives from companies that comprise 76 percent of the refining capacity of the United States to learn about their views on SPR operations. We also reviewed studies of the potential for oil supply disruptions to occur and to reduce U.S. GDP. To learn about the circumstances under which an SPR larger than its current size could provide additional energy security benefits, we reviewed an ORNL study that analyzed the expected costs and benefits of expanding the SPR, U.S. stockholding obligations to the International Energy Agency, and estimates of future U.S. oil demand. Finally, we also reviewed studies and interviewed expert group members and other oil market experts about factors that influence future demand for oil in the United States and alternatives for reducing U.S. economic losses in the event of oil supply disruptions. We present in this appendix the results of models that economists at ORNL and EIA created to simulate the effects of six hypothetical oil disruption scenarios. These scenarios illustrate the impacts of a variety of oil supply disruptions and the extent to which the SPR and international reserves could replace oil and protect the economy from losses. Both models make a number of assumptions in simulating the effects of disruptions on the economy, and some of these assumptions differ between models. 
To study the capabilities of the SPR and international reserves to replace oil and prevent economic damage during oil supply disruptions, we developed six hypothetical oil supply disruption scenarios. The six scenarios are as follows:

A hurricane along the United States Gulf Coast decreases domestic oil production. This scenario is closely based on Hurricanes Katrina and Rita, which struck the U.S. Gulf Coast in August and September 2005 and temporarily stopped a large percentage of the offshore crude oil production in the Gulf of Mexico. The disruption in production continued for several months as damaged offshore production platforms, pipelines, and onshore facilities were repaired.

A strike occurs among oil workers in Venezuela. This scenario is based on the oil worker strike that occurred in Venezuela in 2002 to 2003. Although that strike lasted only 63 days, oil production was well below normal for several months and did not recover to its prestrike level.

Iran stops exporting oil for 18 months. Although none of Iran’s 2.7 million barrels per day of exported crude oil go directly to the United States, removing this oil from the market would raise prices everywhere, thus impacting the U.S. economy.

Terrorists attack the Abqaiq oil-processing facility in Saudi Arabia, which handles more than half of Saudi Arabia’s 10.4 million barrels per day of oil output. This facility is the largest oil-processing plant in the world, removing water, gas, sulfur, and other impurities before the oil is exported. This scenario assumes that a terrorist attack cripples the facility for 1 month, and then production recovers over 7 additional months as the facility is repaired. Terrorists attempted to attack this facility in February 2006, but security forces turned back the attack.

Terrorist or military action closes the Strait of Hormuz, which connects the Persian Gulf with the Arabian Sea. Our scenario assumes that military action closes the Strait completely for 1 month, removing 17 million barrels per day of crude oil from the market. Oil supply then recovers over 2 months as the Strait is cleared and oil reaches the market through alternate routes.

A catastrophic loss of oil production in Saudi Arabia occurs, eliminating exports of oil for 18 months. Oil production then recovers over the next 6 months. Since Saudi Arabia is the world’s largest exporter of crude oil, this is nearly a worst-case scenario for world oil supplies.

For each scenario, table 8 shows the amount of crude oil disrupted during each month over a 2-year period. We selected these scenarios to illustrate the potential benefits of strategic reserves in disruptions of different size and duration, not because they are likely to occur. These scenarios are set in today’s oil market, with global crude oil demand of approximately 83 million barrels per day and U.S. demand of approximately 21 million barrels per day. We used two DOE models to estimate the economic effects of our six disruption scenarios. EIA developed one model and economists at ORNL developed the other, under contract to DOE’s Office of Petroleum Reserves. Both models estimate U.S. GDP loss from oil supply disruptions by linking disruptions to oil price spikes and linking price spikes to GDP losses. We used both models to estimate the economic effects of our hypothetical disruptions under three conditions: no reserves are used in response to the disruption, the SPR is used alone, and the SPR is used in conjunction with international reserves. In both models, we assumed that world excess crude oil production capacity and world fuel-switching capabilities, together totaling 850,000 barrels per day, are available immediately to help offset a disruption. 
We also assumed that private inventories of crude oil are neutral during a disruption—holders of private inventory neither draw down their inventories nor hoard oil. Finally, we assumed that SPR and international reserves are used immediately at their maximum sustainable rate or at a rate large enough to replace disrupted oil supply. EIA’s Division of Energy Markets and Contingency Information has developed “rules of thumb” for estimating the oil price and U.S. macroeconomic impacts of oil supply disruptions, based on simulations from the Global Insight Macroeconomic Model of the U.S. economy. The assumptions relating disruptions to oil price spikes are summarized in the “price rules of thumb” and the assumptions relating price spikes to GDP losses are summarized in the “economic rules of thumb.” EIA measures the response of world oil prices to a hypothetical supply disruption as the projected quarterly average increase in the price of West Texas Intermediate oil. EIA’s oil market analysis is based on competitive forces producing a market price on the basis of market fundamentals and market psychology during an oil supply disruption. The “price rules of thumb” are based on net disruption sizes and the current and expected future oil price level before the disruption. These rules of thumb provide a range of oil prices around an average price, and do not try to quantify the size of price spikes that could occur during disruptions. EIA estimates that a supply disruption when the price of oil is around $40 per barrel results in an oil price increase of between $4 and $6 per barrel for each 1 million barrels per day of oil that is disrupted. However, if the price of oil is about $50 per barrel, EIA estimates a price increase of between $5 and $7 per barrel for each 1 million barrels per day of oil that is disrupted. 
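A concrete reading of these price rules of thumb can be sketched as follows. The per-barrel increments come from the report; the $45-per-barrel cutoff between the two price bands and the example disruption volumes are our assumptions, a simplification rather than EIA's actual procedure.

```python
def price_increase_range(net_disruption_mmbd, predisruption_price):
    """EIA 'price rule of thumb': $/bbl price increase per million barrels/day
    of net disruption. The report gives $4-$6 near $40/bbl and $5-$7 near
    $50/bbl; the $45 cutoff between the two bands is our simplification."""
    per_mmbd = (4.0, 6.0) if predisruption_price < 45 else (5.0, 7.0)
    return tuple(net_disruption_mmbd * x for x in per_mmbd)

# Hypothetical disruption: 3.5 million barrels/day gross, offset by the
# 850,000 barrels/day of excess capacity and fuel switching assumed above.
net = 3.5 - 0.85
low, high = price_increase_range(net, predisruption_price=50.0)
print(f"net disruption {net:.2f} mmbd: price up ${low:.2f} to ${high:.2f} per barrel")
```

At a $50-per-barrel predisruption price, the hypothetical 2.65 million barrel-per-day net shortfall maps to a price increase of roughly $13 to $19 per barrel under the rule.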
For a disruption of a given size, the higher the predisruption oil price, the bigger the price increase needed to balance supply and demand after the disruption. Additionally, EIA adds a “market psychology price premium” to the price calculated using the rules of thumb in situations where it believes market psychology will further increase the price. To translate oil price increases into GDP losses, EIA uses “economic rules of thumb,” based on simulations from the Global Insight Macroeconomic Model of the U.S. economy. These rules estimate that a sustained increase of 10 percent in the price of oil could result in a 0.05 to 0.1 percent reduction in real U.S. GDP relative to its baseline value (the forecasted GDP without an oil disruption). EIA states that, for price increases greater than 10 percent, the GDP impacts would increase linearly with the price impacts, so that a doubling of the price impacts would result in a doubling of the GDP impacts. The EIA model’s GDP responsiveness estimates are derived from the Global Insight model that EIA uses for its long-run forecasts of energy market and overall economic activity. EIA notes that additional factors, such as the effect of high oil prices on the rest of the world’s economy, the reaction of the Federal Reserve to ameliorate the economic damage of high oil prices, and the change in the value of the dollar against foreign currencies, may also influence the economic impact of an oil price spike. Economists at ORNL, under contract to DOE’s Office of Petroleum Reserves, developed a model to estimate the costs and benefits of expanding the size and drawdown capability of the SPR. Economists at ORNL used a portion of this model to estimate the GDP impacts of our oil supply disruption scenarios. The model estimates the economic impacts of oil supply disruptions by first calculating the remaining oil shortfall after world excess oil production capacity has been utilized. 
Then the model assumes that the world oil price increases sufficiently for world oil demand to contract enough to equal the now-reduced supply. On the basis of a review of the literature, the modelers assume a short-run price elasticity of demand for oil between -0.10 and -0.25. The elasticity gets larger as the supply shock becomes larger and lasts longer. The short-run oil demand elasticities then are used to determine the increase in the world price of oil. The GDP elasticity of oil price is then used to infer the losses in economic output that would follow a sudden, unanticipated oil price shock. The modelers draw on results from econometric studies of the sensitivity of the U.S. economy to oil price spikes to select a GDP elasticity, expressed in percentage terms, of -5.4 percent for a 100 percent spike in oil price. To estimate the benefits of expanding the size and drawdown capability of the SPR, the model simulates the impact of oil supply disruptions against DOE’s baseline paths for oil prices, world oil demand, U.S. oil demand, and U.S. oil supply. The primary benefit from the SPR is the GDP loss avoided when it is used to prevent or lessen the effects of oil price spikes. Their cost-benefit approach uses a simple model of the oil market and the U.S. economy to (1) assess the potential causes and likelihood that oil supply disruptions will occur, (2) account for the size of existing strategic oil stocks and expected degree of international cooperation on their use, (3) estimate the cost to the U.S. economy of oil supply disruptions and the incremental ability of additional SPR stocks and drawdown capability to reduce these costs, (4) estimate the costs of buying and storing oil in the SPR, and (5) determine the net benefit and efficient size of the SPR. The model uses a Monte Carlo simulation of the world oil market over the next several decades to model the likelihood of future oil supply disruptions. 
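The ORNL chain of reasoning can be traced with a short sketch: the price must rise until demand contracts to match the reduced supply, and the GDP elasticity then converts that price spike into lost output. The 2 million barrel-per-day net shortfall is an illustrative figure of ours, and applying the -5.4 percent GDP elasticity proportionally to spikes smaller than 100 percent is our assumption, not necessarily how the ORNL model scales it.

```python
def price_spike_pct(shortfall_pct_of_demand, demand_elasticity):
    """Percent price rise needed for world demand to contract by the shortfall,
    given a short-run price elasticity of demand (ORNL uses -0.10 to -0.25)."""
    return shortfall_pct_of_demand / abs(demand_elasticity)

def gdp_loss_pct(spike_pct, gdp_elasticity_pct=-5.4):
    """ORNL GDP elasticity: a 100 percent oil price spike cuts GDP by about
    5.4 percent; proportional scaling to smaller spikes is our assumption."""
    return abs(gdp_elasticity_pct) * spike_pct / 100.0

# Hypothetical net shortfall of 2 mmbd against roughly 83 mmbd of world demand.
shortfall = 100.0 * 2.0 / 83.0          # about 2.4 percent of world demand
for elasticity in (-0.10, -0.25):
    spike = price_spike_pct(shortfall, elasticity)
    print(f"elasticity {elasticity}: price spike {spike:.0f}%, GDP loss {gdp_loss_pct(spike):.1f}%")
```

The sketch makes the sensitivity visible: the less responsive demand is to price (the smaller the elasticity magnitude), the larger the price spike a given shortfall produces, and the larger the implied GDP loss.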
In assessing the economic costs of disruptions, the Office of Petroleum Reserves’ model makes a number of assumptions similar to those made by EIA, in particular, assumptions about the responsiveness of oil price to supply disruptions. However, the Office of Petroleum Reserves’ model assumes a considerably greater degree of responsiveness of the macroeconomy to oil price spikes than the EIA model. The Office of Petroleum Reserves’ model assumes for its base case that a sudden doubling in the price of oil could reduce GDP in the following year by about 5.4 percentage points below what it otherwise would have been. This contrasts with the EIA model result that a sudden doubling of the price of oil would cause about a 0.5 to 1.0 percent reduction in the level of real GDP relative to its value if an oil price increase did not occur. Some experts have suggested that the EIA model and Office of Petroleum Reserves’ model have assumptions that could be responsible for differences in their estimates of the responsiveness of GDP to disruptions. In the Office of Petroleum Reserves’ model, the responsiveness of GDP to an oil price shock incorporates a controversial assumption, that U.S. monetary authorities would not intervene and increase the money supply to accommodate the price shock. Some experts have suggested that, by increasing the money supply, monetary authorities could restore consumers’ purchasing power to its predisruption level and eliminate or moderate the GDP loss. Experts have also suggested deficiencies in the model that EIA uses for its estimates of the responsiveness of GDP to oil price shocks. A number of experts believe that large-scale macroeconomic models, such as the EIA model, underestimate the effects of oil price shocks on the economy. They question whether these models can distinguish between a price shock and a more gradual price increase. 
In contrast, the econometrically based estimates used by the Office of Petroleum Reserves’ model and others are derived from models of oil price shocks. In addition to the individual named above, Dan Haas, Assistant Director; Dennis Carroll; Samantha Gross; Mike Kaufman; Marietta Mayfield; Cynthia Norris; Alison O’Neill; Paul Pansini; Jena Sinkfield; Anne Stevens; and Barbara Timmerman made key contributions to this report.
Congress authorized the Strategic Petroleum Reserve (SPR), operated by the Department of Energy (DOE), to release oil to the market during supply disruptions and protect the U.S. economy from damage. The reserve can store up to 727 million barrels of crude oil, and currently contains enough oil to offset 59 days of U.S. oil imports. GAO answered the following questions: (1) What factors do experts recommend be considered when filling and using the SPR? (2) To what extent can the SPR protect the U.S. economy from damage during oil supply disruptions? (3) Under what circumstances would an SPR larger than its current size be warranted? As part of this study, GAO developed oil supply disruption scenarios, used models to estimate potential economic harm, and convened 13 experts in conjunction with the National Academy of Sciences. The group of experts recommended a number of factors to be considered when filling and using the SPR. They generally agreed that filling the reserve by acquiring a steady dollar value of oil over time, rather than a steady volume of oil over time as has occurred in recent years, would ensure that more oil will be acquired when prices are low and less when prices are high. Experts also suggested allowing oil producers to defer delivery of oil to the reserve at times when supply and demand are in tight balance, with oil producers providing additional oil to the SPR to pay for the delay. Regarding use of the SPR, experts described several factors to consider when making future use decisions, including using the reserve without delay when it is needed to minimize economic damage. During oil supply disruptions, releasing oil from the SPR could greatly reduce damage to the U.S. economy, based on our analyses and expert opinions. Particularly when used in conjunction with reserves in other countries, the SPR can replace the oil lost in all but the most catastrophic oil disruption scenarios we considered, lasting from 3 months to 2 years. 
DOE uses one model to estimate the optimal size of the SPR and another to estimate the economic effects of oil supply disruptions. Both models predict positive effects from using the SPR, but the magnitude of such benefits differs. The substantial differences between the results of these two models could lead DOE to provide inconsistent advice about expanding and using the reserve. Furthermore, factors beyond the SPR's ability to replace oil affect the extent to which the SPR can protect the U.S. economy from damage. For example, SPR crude is not compatible with all U.S. refineries. During a disruption of heavy sour crude oil, refineries configured to use this type of oil would have to reduce production of some petroleum products when refining the lighter oil in the SPR, decreasing the reserve's effectiveness at preventing economic damage. If demand for oil increases as expected, a larger SPR would be necessary to maintain the existing level of protection for the U.S. economy. The Energy Information Administration recently projected increases in U.S. demand for petroleum of approximately 12 percent by 2015 and 24 percent by 2025, compared with the 2005 level. In this regard, a 2005 study prepared for DOE found that the benefits of expanding the reserve to 1.5 billion barrels exceed the costs over a range of future conditions. However, many factors that influence the SPR's ideal size are likely to change over time. For example, although projections show increasing oil demand, the level of demand depends on many factors, including rates of economic growth, the price of oil, policy choices related to alternatives to oil, and technology changes. Consequently, periodic reassessments of the SPR's size in light of new information could be helpful as part of the nation's energy security planning.
To understand where we are it helps to know where we’ve been. The budget process of today was not created in a single step. Rather, it was created in stages—and for the most part new pieces did not replace but were added to existing processes. Looking back at the objectives and structure of the 1974 Congressional Budget and Impoundment Control Act is very useful. The Constitution gives the Congress the power of the purse. The 1921 Budget and Accounting Act centralized power over executive agency budget requests under the President and—to balance this grant of power—moved control of the audit of spending from the Treasury to a new legislative branch entity, the GAO. The Congress also centralized its own spending decisions in the House and Senate Appropriations Committees. There was always some spending that did not go through the appropriations process—the first Congress provided a permanent appropriation to pay interest on the debt. However, this type of spending—otherwise referred to as “backdoor” spending—increased during the 20th century. Authority to borrow was created in the early 1930s. The Social Security Act of 1935 created a new permanent appropriation. Contract authority was expanded over the years. In 1987 and again this year, we reported that for fiscal years 1985 and 1994 close to 60 percent of budget authority and offsetting collections from nonfederal sources credited to accounts was available as a result of prior legislative action and was thus not provided in the annual appropriations process. An attempt in the late 1940s to create a joint House/Senate “legislative budget” failed. Meanwhile, the analytic strength of the Executive Office of the President was increased. 
Frustration with the piecemeal approach to spending and revenue decisions, concern that the increase in the proportion of the budget financed outside the appropriations process was leading to less control, and a major disagreement over Presidential versus congressional power to set spending led to the creation in 1972 of a Joint Study Committee on Budget Control. Its recommendations led directly to what later became the Congressional Budget and Impoundment Control Act of 1974. In that act, the Congress declared “that it is essential— (1) to assure effective congressional control over the budgetary process; (2) to provide for congressional determination each year of the appropriate level of Federal revenues and expenditures; (3) to provide a system of impoundment control; (4) to establish national budget priorities; and (5) to provide for the furnishing of information by the executive branch in a manner that will assist the Congress in discharging its duties.” We all often forget that the 1974 act did not seek a specific result in terms of the deficit. Rather, it sought to assert the Congress’ role in setting overall federal fiscal policy and establishing spending priorities and to impose a structure and a timetable on the budget debate. Underlying the 1974 act was the belief that the Congress could become an equal player only if it—like the executive branch—could offer a single “budget statement” with an overall fiscal policy and an allocation across priorities. Prior to 1974 only the President had a fiscal policy. The Congress did not look at the budget as a whole and there was no congressional budget per se, only the cumulative result of individual pieces of legislation. The Congress did not examine or vote on overall spending or revenues. The 1974 act sought to change that, and it did. 
The act sought to create a “congressional” budget as a counterpoint to the President’s budget—but it carefully avoided giving the Budget Committees anything like the power or even the coordinating role of the President’s Office of Management and Budget (OMB). The Budget Committees were layered on top of the existing committee structure, and limitations were placed on the level of detail with which the Budget Committees could deal. The Budget Resolution was to represent a congressional statement about total revenues and total spending and about the allocation of spending across various national missions. The design of programs and the allocation of spending within each mission area would be left to the authorizing and appropriations committees. The Budget Committees would deal in round numbers—they could not decide policy. Of course, this distinction was always a little artificial. Even in a world of lower deficits there were always policy assumptions behind the numbers. Frequently policy or program design defines the range of numbers possible. And, it turns out that the model of first deciding how much and then debating the specifics is not an entirely comfortable model for federal budget decisions. For some, the decision on “how much” is tied to the decision on how that number will be achieved. Recently, as the budget process has been increasingly aimed at deficit reduction, this distinction between overall numbers and the specific policies to achieve them has become more strained. The 1974 act also eliminated the Congress’ dependence on OMB for numbers and analysis by giving the Congress an independent source of budget numbers—the Congressional Budget Office (CBO). It settled the fight about impoundments by setting up a process for the President to report rescissions and deferrals. 
It was not until the Balanced Budget and Emergency Deficit Control Act of 1985—commonly known as Gramm-Rudman-Hollings or GRH—that the focus of the process changed from increasing Congressional control over the budget to reducing the deficit. Both the original GRH and the 1987 amendments (GRH II) sought to achieve a balanced budget by establishing annual deficit targets to be enforced by “sequesters” if legislation failed to achieve them. Measured against its stated objective of a balanced budget, GRH failed. GRH sought to hold the Congress responsible for the deficit, regardless of what drove the deficit. If the deficit grew because of the economy or demographics—factors not directly controllable by the Congress—the sequester response dictated by GRH was the same as if the deficit grew because of congressional action or inaction. If a sequester was necessary, it did not differentiate between those programs where the Congress had made cuts and those where there had been no cuts or even some increases—an almost pure prisoners’ dilemma. Finally, the timing of the annual “snapshot” determining the deficit and the size of the sequester and the fact that progress was measured 1 year at a time created a great incentive for achieving annual targets through short-term actions such as shifting the timing of outlays. The deficit did not, as we know, come down as envisioned. As table 1 below shows, in the year GRH II called for a zero deficit the actual deficit was $255 billion. The perceived failures of GRH led to the Budget Enforcement Act (BEA) of 1990. This act—extended and amended in the Omnibus Budget Reconciliation Act (OBRA) of 1993—was designed to enforce the multi-year provisions of the summit agreement reached by President Bush and the Congress. 
GRH, with its sole focus on the deficit, was unable to achieve its goals without the Congress and the Administration agreeing to address programs whose spending is driven by economic, demographic, or other behavioral factors. The focus of BEA is very different from that of Gramm-Rudman-Hollings. BEA seeks to limit congressional action and so to influence the result. Unlike GRH, BEA holds the Congress accountable for what it can directly control through its actions, and not for the impact of the economy or demographics, which are beyond its direct control. And on those terms BEA has been a success. BEA did this by dividing spending into two parts: pay-as-you-go (PAYGO) and discretionary. It imposed caps on the discretionary part that have succeeded in holding down discretionary spending—as a share of gross domestic product, discretionary spending declined from 9.2 percent in 1990 to 7.2 percent in 1996. BEA has also constrained congressional actions to create new entitlements or tax cuts. GRH sought to use a change in process to force agreement. The experience under this act showed, I believe, that no process can force agreement where one does not exist. In contrast, both in 1990 and 1993 substantive agreement on the discretionary caps and PAYGO neutrality was reached and BEA process was created to enforce this agreement. This is an important distinction. Although BEA has succeeded on its own terms, its ambition was limited. It did not seek to control economic, price- or demographic-driven growth in existing direct spending programs or tax expenditures, and these are the areas of greatest growth today. A budget process can facilitate or hamper substantive decisions, but it cannot replace them. The budget structure can make clear information necessary for important decisions or the structure can make some information harder to find. The process can highlight trade-offs and set rules for action. 
Later in this statement I suggest some broad objectives for a budget process or criteria by which it might be judged. As your staff requested, however, I will first expand a little on the question of how BEA’s design and the evolution of the budget process relates to the challenges you face today. BEA created a sharp distinction between appropriated programs—the discretionary portion of the budget—and what are called direct spending programs—primarily entitlements—and revenues. Within entitlements BEA made another distinction between changes in program costs driven by legislation and those driven by changes in population, the economy, private behavior, or prices. Because the sharpness of this distinction has become even more important, I’d like to elaborate a little. BEA focused on actions: it specified that the Congress must appropriate only so much money each year for discretionary programs and that any legislated changes in entitlements and/or taxes during a session of the Congress were to be deficit-neutral. The effect of this control on discretionary programs and on entitlements has been quite different. Spending for discretionary programs is controlled by the appropriations process. The Congress provides budget authority and specifies a period of availability. Controlling legislative action is the same as controlling spending. The amount appropriated can be specified and measured against a cap. For entitlement programs and for revenues, controlling legislative actions is not the same as controlling spending or revenues. For an entitlement program, spending in any given year is the result of the interaction between the formula that governs that program and demographics or services provided. For example, spending for a retirement program is a function of the number of retirees, the amount each is entitled to under the program’s benefit formula, and any inflation adjustment. 
The eligibility rules and the benefit formulas are specified in law; the number of dollars to be spent is not. BEA required that if the Congress and the President were to legislate an expansion in any entitlement program—either through the benefit formula or the individuals or services covered—that expansion had to be “paid for” during the same session of the Congress through either a legislated reduction in another entitlement program or a revenue increase. Legislated changes in entitlements and taxes were to be deficit-neutral over multi-year periods. However, BEA did NOT seek to control changes in direct spending or in revenues (including tax expenditures) that resulted from changes in the economy, changes in population, changes in the cost of medical care, etc. And it is the increased cost of entitlements caused by such changes that is driving the budget outlook. In the recent report on backdoor spending for Chairman Domenici and Senator Exon we reported that the greatest growth in such spending authority since our 1987 report has not been in new accounts but in accounts—largely medical and retirement—which have existed for 30 years or more. Indeed, six accounts, all of them in existence more than 30 years, used 84 percent of total permanent appropriations used in 1994. In a 1995 report to you and Chairman Domenici we updated our simulations of the long-term economic impacts of deficits we first published in a 1992 report. We identified three forces driving the long-term growth of budget deficits: health spending, interest costs, and—after 2010—Social Security. These simulations did not assume any legislated changes in health programs. Nonetheless, health care cost inflation and the aging of the population work together to drive the deficit to unsustainable levels with extremely negative economic effects. Are the expressed objectives of the 1974 act still relevant as we approach the 21st century? 
At one level the answer must clearly be “yes.” Some of these objectives have been met—there is now a system of impoundment controls—and others have now been firmly embedded into the framework of our budget debate. And, in a broad sense, there can be little quarrel with the need to continue effective congressional control over the budgetary process, to provide for congressional determination of the appropriate level of federal revenues and expenditures, or to establish national priorities. The questions that confront those who would stand back and look at the process as a whole are to what degree have these objectives been achieved, should they be modified, and—given the challenges of the near future—should the Congress have additional objectives for its budget process. I would like to turn now to the question of what a budget process should do. Some of this discussion repeats points made earlier but in a different context. First, I’ll list four broad goals or criteria for a budget process, discuss the current process in those terms, and comment on some possible changes. Then I’ll turn to the overarching issue of streamlining the process. provide information about the long-term impact of decisions while recognizing the differences between short-term forecasts, medium-term projections, and a long-term perspective; provide information and be structured to focus on the important macro trade-offs, e.g., between consumption and investment; provide information necessary to make informed trade-offs on a variety of levels, e.g., between mission areas and between different tools; and be enforceable, provide for control and accountability, and be transparent. Let me discuss each of these in turn. A long-term perspective is important in both a macro and a micro sense. The macro perspective has to do with our nation’s economic health. 
In previous reports and testimonies we have argued that the nation’s economic future depends in large part upon today’s budget and investment decisions. Therefore, we believe that, at the macroeconomic level, the budget should provide a long-term framework and should be grounded on a linkage of fiscal policy with the long-term economic outlook. This would require a focus both on overall fiscal policy and on the composition of federal activity. The micro aspect of this longer-term perspective relates to those programs and activities where a longer time horizon is necessary to understand the fiscal and spending implications of a commitment. Examples include retirement programs, Medicare, pension guarantees, and mortgage-related commitments. Even very rough projections may be better in these areas than ignoring the long term. Although the multi-year focus of BEA represents significant progress in this regard, planning for longer-range economic goals requires exploring the implications of budget decisions for as long as 30 years or more into the future. This is not to say that detailed budget projections could be made over a longer time horizon. Forecasts and projections are difficult enough for 1 to 3 years. The longer the time horizon, the less accurate any detailed projection is likely to be. However, there are differences between a short-term forecast, medium-term projections, and a long-term perspective. The President, the Congress, and the public need to think about the longer term when making choices about the composition of federal activity. This is true for at least two reasons: (1) each generation is in part custodian for the economy it hands the next and (2) some changes must be phased in over long periods of time. Introducing a longer-term perspective into the budget debate without falling into the trap of treating 30-year projections as anything more than indicative simulations is difficult. 
In testimony last year we provided some ideas on how this might be done. For example, if financial statements were improved and available with the President’s budget, the two together would provide useful information on the longer-term implications of some policies. Another approach might be to have long-term simulations of current budget policies, perhaps over a 30-year period, prepared periodically to help assess the future consequences of current decisions. The effects of policy changes as well as broader fiscal policy alternatives could be projected over the long term. Such projections could be prepared and presented in the President’s budget documents. Although the surest way of increasing national savings and investment would be to reduce federal dissaving by eliminating the deficit, the composition of federal spending also matters. Federal spending can be divided into two broad categories based on the economic effect of that spending: consumption spending having a short-term economic impact and investment spending intended to have a positive effect on long-term private-sector economic growth. We have argued that within any given fiscal policy path, the allocation of federal activity between investment and consumption is important and is deserving of explicit consideration. The current budget process does not prompt the executive branch or the Congress to make explicit decisions about the appropriate mix of spending for current consumption and spending for long-term investment. Appropriations subcommittees provide funding by department and agency in appropriations accounts that do not distinguish between investment and consumption spending. Although alternative budget presentations that accompany the President’s budgets provide some information on investment, these are not part of the formal budget process. The investment/consumption decision is not one of the organizing themes for the budget debate. 
How consideration of investment versus consumption is introduced into the budget process depends on how the overall process is to be structured. We have suggested that within the existing BEA structure, incorporating an investment component under the discretionary caps would be an appropriate and practical approach to supplement the unified budget’s focus on macroeconomic issues. An investment component would direct attention to the trade-offs between consumption and investment—but it would not weaken the overall fiscal discipline established by the caps. It would provide policymakers with a new tool for setting priorities between the long term and the short term. If the Congress and the President chose to change the budget process in ways that moved away from the current system of discretionary caps and PAYGO rules, one of the issues to consider in designing a new process would be how to introduce this trade-off between the long term and the near term, between investment and consumption, into the structure of the debate. The budget process is the central process through which the President and the Congress select among and balance the competing demands for government activity in achieving various goals. Therefore, the process should provide the information necessary to debate the relative priority among national needs or missions. The functional structure of the budget resolution was intended to facilitate priority-setting even among related programs housed in different agencies and different committees. By organizing the budget along “national needs” or mission areas, the budget resolution sought to permit an examination of the totality of federal spending activity in each area—regardless of the committee of jurisdiction or the agency at issue—and to permit priority-setting and trade-offs between missions. 
Instead of focusing on what each department spent, the Congress and the President were to be able to look across departments at the totality of activity in education and training or income security or transportation. From the beginning, however, the structure was not complete; if the government chose to advance a given mission area through the tax code, that commitment did not show up in the functional display. So, for example, the functional structure shows support for science and technology through loans or grants or federal activity but not through the research and development tax credit. Even on the spending side of the budget, however, the functional totals do not translate into and may not match the allocation of resources to the appropriations subcommittees. While the budget resolution is organized by national mission, the appropriations subcommittees are still organized along agency lines. This makes it difficult to trace the path from the budget resolution’s stated priorities through the appropriations process. Although CBO translates the budget resolution functional totals into allocations to the full Appropriations Committees, suballocations to the subcommittees (the so-called 602(b) allocations) are made by the Appropriations Committees. At one level priority setting within the discretionary side of the budget has been delegated to the Appropriations Committee—where it resided before the 1974 act. The Congress may or may not consider this a problem. However, if you are standing back and looking at the entire budget process, a question to ask is whether the current functional structure highlights the mission trade-offs relevant for today and whether the functional structure is doing as much to facilitate a debate among priorities as you would like. The sharp division BEA sought to draw between discretionary spending limits and the PAYGO scorecard made a great deal of sense. It simplified jurisdictional issues. 
It also recognized the difference in time horizons. Discretionary appropriations may be provided for 1 or more years and a discretionary spending cut may be a 1-year cut. Most changes in entitlement or tax law last longer than a single year. This sharp division, however, limits the ability to shift spending priorities. For example, it would be difficult to shift spending away from consumption support concentrated in the mandatory sector toward investment programs funded in the discretionary portion of the budget. Current rules do not permit cuts in mandatory spending to be used to pay for increases in appropriated programs. Consideration should be given to when and under what circumstances breaching the wall between discretionary and mandatory categories makes sense. At a level below the establishment of broad spending priorities, the budget process should facilitate the selection of the appropriate policy tool with which to address some mission. For any given goal or mission in which the federal government will play a financial role, there are a variety of tools available: grants, loans, loan guarantees, or tax provisions. The budget process should provide the information necessary to permit a choice based not on jurisdictional problems or scoring conventions but on the match between the goal and the tool. In order to facilitate appropriate choices, the budget must also provide information on the costs of various alternatives—on a comparable basis—and on the nature of the government’s commitment. This is one area in which there has been some improvement. The Credit Reform Act changed the way loans and loan guarantees were treated in the budget because the previous cash-based treatment gave decisionmakers misleading signals on the cost comparisons among grants, loan guarantees, and direct loans. 
However, as I noted above, there are still some programs for which either cash-based reporting sends misleading signals or for which even a 5-year perspective gives a misleading picture of the nature of the government’s actual or potential commitment. These three elements are not identical, but they are closely related and achieving one has implications for the others. By enforcement I mean a mechanism to enforce decisions once they are made. Accountability has at least two dimensions: accountability for the full costs of commitments that are to be made, and targeting enforcement to actions taken. It can also encompass the broader issue of taking responsibility for responding to unexpected events. And, finally, the process should be transparent, that is, understandable to those outside the process. I will discuss each of these in turn. Enforcement: In general, enforceability requires a system for tracking outcomes and tying them to actions. One great strength of BEA has been the enforcement provisions. By targeting penalties to actions, BEA has succeeded in restraining discretionary spending to within the caps and in restraining new direct spending legislation. The design of the enforcement provisions in BEA has also created accountability for actions. Costs are to be recorded in the budget up front, when they can be controlled. And enforcement is targeted to actions. The appropriations committees are responsible for compliance with the discretionary spending limits while the PAYGO scorecard tracks compliance with the PAYGO rules. Unlike the prisoners’ dilemma created by GRH, sequesters are applied only to the area where the breach occurs. Accountability: The targeted nature of the sequester provisions in BEA served not only as enforcement but also to provide accountability for compliance with the rules. Some of the scoring and costing rules introduced by BEA have also increased accountability for the costs of actions taken. 
On another level, however, accountability is diffuse. The deficits in the early 1990s were greater than those expected by those who voted for and complied with the provisions of OBRA. This slippage was due almost entirely to a worse than expected economy and “technical changes.” Although GRH showed that holding committees responsible for results rather than actions is problematic, there are ways to bring more responsibility for the results of unforeseen actions into the system. We, and former CBO Director Reischauer, have previously suggested that the Congress might want to consider introducing a “lookback” into its system of budgetary controls. In a report issued to the Republican leadership last year, we described such a process under which the Congress would periodically look back at progress in reducing the deficit. Such a lookback would compare the current CBO deficit projections to those projected at the time of a prior deficit reduction agreement and/or the most recent reconciliation legislation and analyze the reasons for any difference. For a difference exceeding a predetermined amount, the Congress would decide explicitly—by voting—whether to accept the slippage or to act to bring the deficit path closer to the original goal by recouping some or all of this slippage. Although one could argue that each year’s budget resolution implicitly accepts or rejects changes in the deficit outlook, it does not require an explicit consideration and decision. Adoption of the requirement for such explicit consideration would provide members who make difficult choices in reconciliation an additional opportunity to ensure that the deficit path they voted for will, in fact, materialize. A similar—but more narrowly focused—process could be used to prompt consideration of the path of mandatory spending. 
Under its current structure, BEA requires any action that would cause a growth in mandatory spending to be offset, but it leaves completely unconstrained any growth in these programs that results from economic or demographic factors. This distinction is consistent with the act’s focus on controlling actions, but it has created other problems. Indeed, the very success of BEA at constraining discretionary and new direct spending has highlighted the dramatic growth in some entitlement programs. One way to begin to deal with this might be to adopt a procedure similar to that recommended by the House members of the Joint Committee on the Organization of the Congress. Under such a procedure, direct spending targets for several fiscal years could be specified. If the President’s budget showed that these targets were exceeded in the prior year or would be exceeded in the current or budget years, the President would be required to analyze the causes of the overage and recommend whether none, some, or all of the overage should be recouped. The Congress could be required to vote either on the President’s proposal or on an alternative one. If the goal was merely to restrain direct spending to the currently projected levels, then the current law baseline would constitute the targets. However, such a procedure could also be used as a kind of lookback on the success of any efforts to reduce mandatory spending. Transparency: Transparency is important because the budget debate is critically important—not because of the numbers in it but because it represents a statement about collective priorities and collective action. In a democracy, the debate about these priorities should be made as understandable as possible. If even reasonably dedicated citizens cannot understand the budget document or the budget debate, there is little accountability. 
If the budget debate is to be accessible to the American people—or to any significant subset of the population—consideration will have to be given to simplifying the structure of the budget, streamlining the process, and reducing the number of translations required to get from one part of the process to another. Does the Congress wish to organize the debate by national mission or by agency? If there is a need for both perspectives, how can they be brought together in an understandable way? Discussions about 602(b) allocations and “direct spending” are the stuff of what someone once called “budget process groupies”—not of the evening news or quick explanation. There must be summary documents, such as the old Budget in Brief, that explain where money comes from and where it goes. For fiscal years 1996 and 1997, OMB once again included a citizen-oriented document as part of the budget documents. The Citizen’s Guide to the Federal Budget provided an overview of the budget, highlighting such concepts as the deficit and the debt, and reviewing the President’s budget proposals. It did not, however, provide much insight into the long-term implications of current spending policies. Citizens cannot be expected to feel a stake in the budget debate—a debate that will affect all our lives and our national future—or to accept decisions made by others without basic information. At a minimum, citizens need to know how much money the federal government takes in—and how—and what that money is spent on. Each of these criteria or goals is important, and they are related—but they cannot all be maximized in a single process. Trade-offs are necessary. Any review of the budget process comes up against the overarching question: is there just too much process? 
The feeling that there are too many votes on related issues is, as I noted, in part a function of the way the process was created, of the decision to layer the Budget Committees and the budget process on top of the existing committee and procedural structure of the House and the Senate. The idea was that the budget resolution would define the overall aggregates and the rest of the process would proceed within those aggregates. As I mentioned above, however, especially as the goal of the process shifted to deficit reduction, this distinction became increasingly strained. There are a number of possible responses, but most of them involve reconsidering the relationship of the budget resolution to legislation and the relationships among the various committees in the Congress. Streamlining—making the process take less time—has been the focus of a number of proposals in the past. However, it is in this area that it is especially important to bear in mind that a response to one problem may create another problem. Eliminating parts of the process or changing the cycle will have consequences beyond reducing the number of votes. These may or may not be acceptable, but they should be recognized. I will touch very briefly on three processes: the budget resolution, authorizations, and appropriations. If the recent pattern of multi-year fiscal policy agreements is to continue, are annual budget resolutions still necessary? It is important to review progress every year, but such a review may not require a complete budget resolution. If, however, annual budget resolutions are to be replaced with biennial budget resolutions, then something like the “lookback” procedure described above could become very important. Without it, there would be no procedure for tracking progress against the previous budget agreement or reconciliation bill. Multi-year authorizations can provide a longer-term perspective within which appropriations would be determined. 
Although the need for periodic reauthorizations can provide a window for program revision, there is little reason to reexamine and reauthorize programs more often than they might actually be changed. Of course, multi-year authorizations are already the rule in the nondefense portion of the budget. Some have suggested that changing the appropriations cycle from annual to biennial could free up time. As I have previously testified before this committee, it is important to differentiate between the length of availability of funds and the timing of the appropriations cycle. Even within the 39 percent of the budget that is on an annual budget cycle, not all appropriations are for 1-year funds. The appropriations subcommittees have been able—even within an annual appropriations cycle—to provide 1-year, multi-year, or no-year money as they have thought appropriate for the program or agency at issue. Annual appropriations have long been a basic means of exerting and enforcing congressional policy. A 2-year appropriations cycle would change the nature of that control. It is also unclear how much time it would save. In the end, streamlining or reducing the amount of time spent on apparently repetitive votes will require decisions about which votes are no longer necessary. That, in turn, is likely to require decisions about the relationship between discretionary and mandatory spending, between various committees, and about the nature and style of congressional control over the budget and appropriations. The budget process is the source of a great deal of frustration. The public finds it hard to understand. Members of the Congress complain that it is time-consuming and duplicative, requiring frequent votes on the same thing. And, too often, the results are not what was expected or desired. It is inevitable that, given the nature of today’s budget challenge, there will be frustration. 
It is important, however, to try to separate frustration with process from frustration over policy. To bring the deficit down requires hard decisions about what government will and will not do. A process may facilitate the debate, but it cannot make the decision. In considering whether and how to redesign the budget process, therefore, it is important to look beyond those frustrations tied directly to the need to bring down the federal deficit. The budget process serves a wider purpose. It is, in a real sense, the process for dealing with competing claims and setting priorities. The budget process should offer the Congress the means to set overall fiscal policy and to make decisions about relative priorities among missions or claims. In a democracy this process should be understandable to the interested citizen and it should offer that citizen some accountability. I have suggested that these overall goals are advanced by a process that: provides a long-term focus; provides information and structure to focus on important macro trade-offs; provides information necessary to make trade-offs between mission areas and between different governmental tools; is enforceable in that it provides for control and accountability; and is transparent. The apparently never-ending and repetitive nature of the budget process is in large part a function of the way it was created. A new process to provide an overall view was layered on top of the existing structures and processes by which the micro decisions are made in the Congress. Any attempt to streamline or “simplify” the process must consider the relationship between the goal of simplicity and the existing decision structure in the Congress. In addition, I have suggested that the Congress might want to consider the creation of a lookback procedure by which it would periodically look back at progress in reducing the deficit. 
As described earlier, such a lookback would compare current CBO deficit projections with those projected at the time of a prior deficit reduction agreement and/or the most recent reconciliation legislation and, for a difference exceeding a predetermined amount, require the Congress to decide explicitly—by voting—whether to accept the slippage or to act to bring the deficit path closer to the original goal by recouping some or all of it. Mr. Chairman, no budget process is easy to design or to live with. I would be happy to answer any questions you or your colleagues may have, and we stand ready to work with you as you consider whether changes in the budget process are necessary and, if so, their design.
Correspondence to Chairman Horn, Information on Reprogramming Authority and Trust Funds (GAO/AIMD-96-102R, June 7, 1996).
Correspondence to Chairman Kasich, Budgeting for Federal Insurance (GAO/AIMD-96-73R, March 22, 1996).
Budget Issues: Earmarking in the Federal Government (GAO/AIMD-95-216FS, August 1, 1995).
Budget Process: History and Future Directions (GAO/T-AIMD-95-214, July 13, 1995).
Budget Structure: Providing an Investment Focus in the Federal Budget (GAO/T-AIMD-95-178, June 29, 1995).
Correspondence to Chairman Wolf, Transportation Trust Funds (GAO/AIMD-95-95R, March 15, 1995).
Budget Process: Issues Concerning the 1990 Reconciliation Act (GAO/AIMD-95-3, October 7, 1994).
Budget Policy: Issues in Capping Mandatory Spending (GAO/AIMD-94-155, July 18, 1994).
Budget Process: Biennial Budgeting for the Federal Government (GAO/T-AIMD-94-112, April 28, 1994).
Budget Process: Some Reforms Offer Promise (GAO/T-AIMD-94-86, March 2, 1994).
Budget Policy: Investment Budgeting for the Federal Government (GAO/T-AIMD-94-54, November 9, 1993).
Budget Issues: Incorporating an Investment Component in the Federal Budget (GAO/AIMD-94-40, November 9, 1993).
Correspondence to the Chairmen and Ranking Members of the House and Senate Budget Committees and the Chairman of the former House Committee on Government Operations (B-247667, May 19, 1993).
Pursuant to a congressional request, GAO discussed the evolution of the budget process and challenges to changing the budget process. GAO noted that: (1) conflicts over who controlled the budget led to the Congressional Budget and Impoundment Control Act of 1974 and Congress' role in setting federal fiscal policies and establishing spending priorities; (2) the act also established the Congressional Budget Office as an independent source of budget numbers and set up a process for the President to report budget rescissions and deferrals; (3) in 1985, the focus of the budget process shifted from increasing congressional control over the budget to reducing the deficit; (4) later legislation held Congress accountable for the part of the budget under its direct control, but did not seek to control growth in direct spending programs or tax expenditures, which are the main reasons for continued budget growth; (5) the budget process should provide a long-term perspective, focus on macro trade-offs between consumption and investment expenditures, provide necessary information to make informed trade-off decisions between mission areas, be enforceable, provide for control and accountability, and be transparent; (6) streamlining parts of the budget process may cause problems in other budget processes; (7) if the use of multiyear fiscal policy agreements continues, then annual budget resolutions could be eliminated, but a lookback procedure would become very important; and (8) a 2-year appropriation cycle would change the nature of the budget process, but how much time would be saved is unclear.
DHS and its components have used various mechanisms over time to coordinate border security operations. In September 2013, we reported that the overlap in geographic and operational boundaries among DHS components underscored the importance of collaboration and coordination among these components. To help address this issue and mitigate operational inflexibility, DHS components, including those with border security-related missions such as CBP, Coast Guard, and ICE, employed a variety of collaborative mechanisms to coordinate their missions and share information. These mechanisms had both similarities and differences in how they were structured and on which missions or threats they focused, among other things, but they all had the overarching goal of increasing mission effectiveness and efficiencies. For example: In 2011, the Joint Targeting Team originated as a CBP-led partnership in the Del Rio area of Texas among Border Patrol, CBP’s Office of Field Operations, and ICE. This mechanism was expanded to support the South Texas Campaign (STC) mission to disrupt and dismantle transnational criminal organizations, and its membership grew to include additional federal, state, local, tribal, and international law enforcement agencies. In 2005, the first Border Enforcement Security Task Force (BEST) was organized and led by ICE, in partnership with CBP, in Laredo, Texas, and additional units were subsequently formed along both the southern and northern borders. The BESTs’ mission was to identify, disrupt, and dismantle existing and emerging threats at U.S. land, sea, and air borders. In 2011, CBP, Coast Guard, and ICE established Regional Coordinating Mechanisms (ReCoM) to use the fusion of intelligence, planning, and operations to target the threat of transnational terrorist and criminal acts along the coastal border. Coast Guard served as the lead agency responsible for planning and coordinating among DHS components. 
In June 2014, we reported on STC border security efforts along with the activities of two additional collaborative mechanisms: (1) the Joint Field Command (JFC), which had operational control over all CBP resources in Arizona; and (2) the Alliance to Combat Transnational Threats (ACTT), which was a multiagency law enforcement partnership in Arizona. We found that through these collaborative mechanisms, DHS and CBP had coordinated border security efforts in information sharing, resource targeting and prioritization, and leveraging of assets. For example, to coordinate information sharing, the JFC maintained an operations coordination center and clearinghouse for intelligence information. Through the ACTT, interagency partners worked jointly to target individuals and criminal organizations involved in illegal cross-border activity. The STC leveraged assets of CBP components and interagency partners by shifting resources to high-threat regions and conducting joint operations. More recently, the Secretary of Homeland Security initiated the Southern Border and Approaches Campaign Plan in November 2014 to address the region’s border security challenges by commissioning three DHS joint task forces to, in part, enhance collaboration among DHS components, including CBP, ICE, and Coast Guard. Two of DHS’s joint task forces are geographically based (Joint Task Force – East and Joint Task Force – West), and one, Joint Task Force – Investigations, is functionally based. Joint Task Force – West is divided into four geographic command corridors (Arizona, California, New Mexico/West Texas, and South Texas), with CBP as the lead agency responsible for overseeing border security efforts in each. Coast Guard is the lead agency responsible for Joint Task Force – East, which is responsible for the southern maritime and border approaches. 
ICE is the lead agency responsible for Joint Task Force – Investigations, which focuses on investigations in support of Joint Task Force – West and Joint Task Force – East. Additionally, DHS has used these task forces to coordinate various border security activities, such as use of Predator B UAS, as we reported in February 2017 and discuss below. In September 2013, we reported on successful collaborative practices and challenges identified by participants from eight border security collaborative field mechanisms we visited—the STC, four BESTs, and three ReCoMs. Their perspectives were generally consistent with the seven key issues to consider when implementing collaborative mechanisms that we identified in our 2012 report on interagency collaboration. Among participants we interviewed, there was consensus that certain practices facilitated more effective collaboration, which, according to participants, contributed to the groups’ overall successes. For example, participants identified three of the seven categories of practices as keys to success: (1) positive working relationships/communication, (2) sharing resources, and (3) sharing information. Specifically, in our interviews, BEST officials stated that developing trust and building relationships helped participants respond quickly to a crisis, and communicating frequently helped participants eliminate duplication of efforts. Participants from the STC, BESTs, and ReCoMs also reported that having positive working relationships built on strong trust among participants was a key factor in their law enforcement partnerships because of the sensitive nature of law enforcement information and the risks posed if it is not protected appropriately. In turn, building positive working relationships was facilitated by another collaborative factor identified as important by a majority of participants: physical collocation of mechanism stakeholders. 
Specifically, participants from the mechanisms focused on law enforcement investigations, such as the STC and BESTs, reported that being physically collocated with members from other agencies was important for increasing the groups’ effectiveness. Participants from the eight border security collaborative field mechanisms we visited at the time also identified challenges or barriers that made collaboration across components more difficult. Specifically, participants identified three barriers that most frequently hindered effective collaboration within their mechanisms: (1) resource constraints, (2) rotation of key personnel, and (3) lack of leadership buy-in. For example, when discussing resource issues, a majority of participants said funding for their group’s operation was critical and identified resource constraints as a challenge to sustaining their collaborative efforts. These participants also reported that since none of the mechanisms received dedicated funding, the participating federal agencies provided support for their respective representatives assigned to the selected mechanisms. A majority of mechanism participants we visited also said that rotation of key personnel and lack of leadership buy-in hindered effective collaboration within their mechanisms. For example, STC participants stated that the rotation of key personnel hindered the STC’s ability to develop and retain more seasoned personnel with expertise in investigations and surveillance techniques. In addition, in June 2014, we identified coordination benefits and challenges related to the JFC, STC, and ACTT. For example, DHS and CBP leveraged the assets of CBP components and interagency partners through these mechanisms to conduct a number of joint operations and deploy increased resources to various border security efforts. 
In addition, these mechanisms provided partner agencies with increased access to specific resources, such as AMO air support and planning assistance for operations. Officials involved with the JFC, STC, and ACTT also reported collaboration challenges at that time. For example, officials from 11 of 12 partner agencies we interviewed reported coordination challenges related to the STC and ACTT, such as limited resource commitments by participating agencies and lack of common objectives. In particular, one partner with the ACTT noted that there had been operations in which partners did not follow through with the resources they had committed during the planning stages. Further, JFC and STC officials cited the need to improve the sharing of best practices across the various collaborative mechanisms, and CBP officials we interviewed identified opportunities to more fully assess how the mechanisms were structured. We recommended that DHS establish written agreements for some of these coordination mechanisms and a strategic-level oversight mechanism to monitor interagency collaboration. DHS concurred and these recommendations were closed as not implemented due to planned changes in the collaborative mechanisms. In February 2017, we found that as part of using Predator B aircraft to support other government agencies, CBP established various mechanisms to coordinate Predator B operations. CBP’s Predator B aircraft are national assets used primarily for detection and surveillance during law enforcement operations, independently and in coordination with federal, state, and local law enforcement agencies throughout the United States. For example, at AMO National Air Security Operations Centers (NASOC) in Arizona, North Dakota, and Texas, personnel from other CBP components are assigned to support and coordinate mission activities involving Predator B operations. 
Border Patrol agents assigned to support NASOCs assist with directing agents and resources to support Border Patrol law enforcement operations and with collecting information on asset assists provided by Predator B operations. Further, two of DHS’s joint task forces also help coordinate Predator B operations. Specifically, Joint Task Force – West, Arizona and Joint Task Force – West, South Texas coordinate air asset tasking and operations, including Predator B operations, and assist in the transmission of requests for Predator B support and communication with local field units, such as Border Patrol stations and AMO air branches, during operations. In addition to these mechanisms, CBP has documented procedures for coordinating Predator B operations among its supported or partner agencies in Arizona specifically, by developing a standard operating procedure for coordination of Predator B operations through its NASOC in Arizona. However, CBP has not documented procedures for coordination of Predator B operations among its supported agencies through its NASOCs in Texas and North Dakota. CBP has also established national policies for its Predator B operations that include policies for prioritization of Predator B missions and processes for submission and review of Predator B mission or air support requests. However, these national policies do not include coordination procedures specific to Predator B operating locations or NASOCs. Without documenting its procedures for coordination of Predator B operations with supported agencies, CBP does not have reasonable assurance that practices at NASOCs in Texas and North Dakota align with existing policies and procedures for joint operations with other government agencies. Among other things, we recommended that CBP develop and document procedures for Predator B coordination among supported agencies in all operating locations. 
CBP concurred with our recommendation and stated that it plans to develop and implement an operations coordination structure and document its coordination procedures for Predator B operations through Joint Task Force – West, South Texas and document its coordination procedures for Predator B operations through its NASOC in Grand Forks, North Dakota. In January 2017, we reported that Border Patrol agents use the CDS to classify each alien apprehended illegally crossing the border and then apply one or more post-apprehension consequences determined to be the most effective and efficient to discourage recidivism, that is, further apprehensions for illegal cross-border activity. We found that Border Patrol uses an annual recidivism rate to measure performance of the CDS; however, methodological weaknesses limit the rate’s usefulness for assessing CDS effectiveness. Specifically, Border Patrol’s methodology for calculating recidivism—the percent of aliens apprehended multiple times along the southwest border within a fiscal year—does not account for an alien’s apprehension history over multiple years. In addition, Border Patrol’s calculation neither accounts for nor excludes apprehended aliens for whom there is no ICE record of removal from the United States. Our analysis of Border Patrol and ICE data showed that when calculating the recidivism rate for fiscal years 2014 and 2015, Border Patrol included in the total number of aliens apprehended, tens of thousands of aliens for whom ICE did not have a record of removal after apprehension and who may have remained in the United States without an opportunity to recidivate. Specifically, our analysis of ICE enforcement and removal data showed that about 38 percent of the aliens Border Patrol apprehended along the southwest border in fiscal years 2014 and 2015 may have remained in the United States as of May 2016. 
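The methodological weakness described above can be illustrated with a small, purely hypothetical sketch: a within-fiscal-year calculation misses repeat apprehensions that span fiscal years and counts aliens who were never removed (and so had no opportunity to recidivate) in the denominator, while a strengthened calculation looks across years and restricts the population to aliens with a removal record. The data, identifiers, and function names below are invented for illustration and do not reflect Border Patrol's or ICE's actual systems or figures.

```python
from collections import Counter

# Hypothetical apprehension records: (alien_id, fiscal_year), one per apprehension.
apprehensions = [
    ("A", 2014), ("A", 2014),  # repeat apprehensions within FY2014
    ("B", 2014), ("B", 2015),  # repeat apprehensions only across fiscal years
    ("C", 2014),               # single apprehension; no removal record
    ("D", 2015), ("D", 2015),
]
removed = {"A", "B", "D"}  # aliens with a (hypothetical) ICE removal record

def within_fy_rate(records, fy):
    """Within-fiscal-year style rate: share of aliens apprehended more than
    once in a single fiscal year, over all aliens apprehended that year.
    Cross-year repeaters and never-removed aliens are not distinguished."""
    counts = Counter(alien for alien, year in records if year == fy)
    repeaters = sum(1 for c in counts.values() if c > 1)
    return repeaters / len(counts)

def multiyear_rate(records, removed_ids):
    """Strengthened rate: repeat apprehensions counted across all years,
    restricted to aliens with a removal record (those who had an
    opportunity to recidivate after being returned)."""
    counts = Counter(alien for alien, _ in records if alien in removed_ids)
    repeaters = sum(1 for c in counts.values() if c > 1)
    return repeaters / len(counts)
```

On this toy data, the within-FY2014 rate counts only alien A as a repeater (alien B's cross-year repeat is missed, and never-removed alien C inflates the denominator), while the multiyear rate counts all three removed aliens as repeaters.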
To better inform the effectiveness of CDS implementation and border security efforts, we recommended, among other things, that (1) Border Patrol strengthen the methodology for calculating recidivism, such as by using an alien’s apprehension history beyond one fiscal year and excluding aliens for whom there is no record of removal; and (2) the Assistant Secretary of ICE and Commissioner of CBP collaborate on sharing immigration enforcement and removal data to help Border Patrol account for the removal status of apprehended aliens in its recidivism rate measure. CBP did not concur with our first recommendation and stated that CDS uses annual recidivism rate calculations to measure annual change, which is not intended to be, or used as, a performance measure for CDS, and that Border Patrol annually reevaluates the CDS to ensure that the methodology for calculating recidivism provides the most effective and efficient post-apprehension outcomes. We continue to believe that Border Patrol should strengthen its methodology for calculating recidivism, as the recidivism rate is used as a performance measure by Border Patrol and DHS. DHS concurred with our second recommendation, but stated that collecting and analyzing ICE removal and enforcement data would not be advantageous to Border Patrol for CDS purposes since CDS is specific to Border Patrol. However, DHS also stated that Border Patrol and ICE have discussed the availability of the removal and enforcement data and that ICE has agreed to provide Border Patrol with these data, if needed. DHS requested that we consider this recommendation resolved and closed. While DHS’s planned actions are a positive step toward addressing our recommendation, DHS needs to provide documentation of the completion of these actions for us to consider the recommendation closed as implemented. In February 2017, we reported on CBP’s efforts to secure the border between U.S. 
ports of entry using tactical infrastructure, including fencing, gates, roads, bridges, lighting, and drainage. For example, border fencing is intended to benefit border security operations in various ways, according to Border Patrol officials, including supporting Border Patrol agents’ ability to execute essential tasks, such as identifying illicit cross-border activities. CBP collects data that could help provide insight into how border fencing contributes to border security operations, including the location of illegal entries. However, CBP has not developed metrics that systematically use these data, among other data it collects, to assess the contributions of its pedestrian and vehicle border fencing to its mission. For example, CBP could potentially use these data to determine the extent to which border fencing diverts illegal entrants into more rural and remote environments, and border fencing’s impact, if any, on apprehension rates over time. Developing metrics to assess the contributions of fencing to border security operations could better position CBP to make resource allocation decisions with the best information available to inform competing mission priorities and investments. To ensure that Border Patrol has the best available information to inform future investments and resource allocation decisions among tactical infrastructure and other assets Border Patrol deploys for border security, we recommended, among other things, that Border Patrol develop metrics to assess the contributions of pedestrian and vehicle fencing to border security along the southwest border using the data Border Patrol already collects, and apply this information, as appropriate, when making investment and resource allocation decisions. DHS concurred with our recommendation and plans to develop metrics and incorporate them into Border Patrol’s Requirements Management Process. These actions, if implemented effectively, should address the intent of our recommendation. 
In February 2017, we found that CBP has taken actions to assess the effectiveness of its Predator B UAS and tactical aerostats for border security, but could improve its data collection efforts. CBP collects a variety of data on its use of the Predator B UAS, tactical aerostats, and TARS, including data on their support for the apprehension of individuals, seizure of drugs, and other events (asset assists). For Predator B UAS, we found that mission data—such as the names of supported agencies and asset assists for seizures of narcotics—were not recorded consistently across all operational centers, limiting CBP’s ability to assess the effectiveness of the program. We also found that CBP has not updated its guidance for collecting and recording mission information in its data collection system to include new data elements added since 2014, and does not have instructions for recording mission information such as asset assists. In addition, not all users of CBP’s system have received training for recording mission information. We reported that updating guidance and fully training users, consistent with internal control standards, would help CBP better ensure the quality of data it uses to assess effectiveness. For tactical aerostats, we found that Border Patrol’s collection of asset assist information for seizures and apprehensions does not distinguish between its tactical aerostats and TARS. Data that distinguish between support provided by tactical aerostats and support provided by TARS would give CBP more complete information to guide resource allocation decisions, such as redeploying tactical aerostat sites in response to changes in illegal cross-border activity, because the two types of systems provide distinct types of support when assisting with, for example, seizures and apprehensions. 
To improve its efforts to assess the effectiveness of its Predator B and tactical aerostat programs, we recommended, among other things, that CBP (1) update guidance for recording Predator B mission information in its data collection system; (2) provide training to users of CBP’s data collection system for Predator B missions; and (3) update Border Patrol’s data collection practices to include a mechanism to distinguish and track asset assists associated with tactical aerostats from TARS. CBP concurred and identified planned actions to address the recommendations, including incorporating a new functionality in its data collection system to include tips and guidance for recording Predator B mission information and updating its user manual for its data collection system; and making improvements to capture data to ensure asset assists are properly reported and attributed to tactical aerostats and TARS, among other actions. In March 2014, we reported that CBP had identified mission benefits for technologies under its Arizona Border Surveillance Technology Plan—which included a mix of radars, sensors, and cameras to help provide security for the Arizona border—but had not yet developed performance metrics for the plan. CBP identified mission benefits such as improved situational awareness and agent safety. Further, a DHS database enabled CBP to collect data on asset assists—instances in which a technology, such as a camera, or other asset, such as a canine team, contributed to an apprehension or seizure—that, in combination with other relevant performance metrics or indicators, could be used to better determine the contributions of CBP’s surveillance technologies and inform resource allocation decisions. However, we found that CBP was not capturing complete data on asset assists, as Border Patrol agents were not required to record and track such data. 
We concluded that requiring the reporting and tracking of asset assist data could help CBP determine the extent to which its surveillance technologies are contributing to CBP’s border security efforts. To assess the effectiveness of deployed technologies at the Arizona border and better inform CBP’s deployment decisions, we recommended that CBP (1) require tracking of asset assist data in its Enforcement Integrated Database, which contains data on apprehensions and seizures, and (2) once data on asset assists are required to be tracked, analyze available data on apprehensions, seizures, and technological assists, in combination with other relevant performance metrics, to determine the contribution of surveillance technologies to CBP’s border security efforts. DHS concurred with our first recommendation; Border Patrol issued guidance in June 2014, and Border Patrol officials confirmed with us in June 2015 that agents are required to enter this information into the database. These actions met the intent of our recommendation. DHS also concurred with our second recommendation and, as of September 2016, has taken some action to assess its technology assist data and other measures to determine contributions of surveillance technologies to its mission. However, until Border Patrol completes its efforts to fully develop and apply key attributes for performance metrics for all technologies to be deployed under the Arizona Border Surveillance Technology Plan, it will not be well positioned to fully assess its progress in determining when mission benefits have been fully realized. Chairwoman McSally, Ranking Member Vela, and members of the subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. 
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Kirk Kiester (Assistant Director), as well as Stephanie Heiken, David Lutter, Sasan “Jon” Najmi, and Carl Potenzieri.

Southwest Border Security: Additional Actions Needed to Better Assess Fencing’s Contributions to Operations and Provide Guidance for Identifying Capability Gaps. GAO-17-331. Washington, D.C.: February 16, 2017.

Border Security: Additional Actions Needed to Strengthen Collection of Unmanned Aerial Systems and Aerostats Data. GAO-17-152. Washington, D.C.: February 16, 2017.

Border Patrol: Actions Needed to Improve Oversight of Post-Apprehension Consequences. GAO-17-66. Washington, D.C.: January 12, 2017.

Border Security: DHS Surveillance Technology Unmanned Aerial Systems and Other Assets. GAO-16-671T. Washington, D.C.: May 24, 2016.

Southwest Border Security: Additional Actions Needed to Assess Resource Deployment and Progress. GAO-16-465T. Washington, D.C.: March 1, 2016.

Border Security: Progress and Challenges in DHS’s Efforts to Implement and Assess Infrastructure and Technology. GAO-15-595T. Washington, D.C.: May 13, 2015.

Border Security: Opportunities Exist to Strengthen Collaborative Mechanisms along the Southwest Border. GAO-14-494. Washington, D.C.: June 27, 2014.

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-368. Washington, D.C.: March 3, 2014.

Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Securing U.S. borders is the responsibility of DHS, in collaboration with other federal, state, local, and tribal entities. Within DHS, CBP is the lead agency for border security and is responsible for, among other things, keeping terrorists and their weapons, criminals and their contraband, and inadmissible aliens out of the country. In recent years, GAO has reported on a variety of DHS collaborative mechanisms and efforts to assess its use of border security resources. This statement addresses (1) DHS's efforts to implement collaborative mechanisms along the southwest border and (2) DHS's efforts to assess its use of resources and programs to secure the southwest border. This statement is based on GAO reports and testimonies issued from September 2013 through February 2017 that examined DHS efforts to enhance border security and assess the effectiveness of its border security operations. GAO's reports and testimonies incorporated information GAO obtained by examining DHS collaborative mechanisms, reviewing CBP policies and procedures for coordinating use of assets, analyzing DHS data related to enforcement programs, and interviewing relevant DHS officials. The Department of Homeland Security (DHS) and its U.S. Customs and Border Protection (CBP) have implemented various mechanisms along the southern U.S. border to coordinate security operations, but could strengthen coordination of Predator B unmanned aerial system (UAS) operations to conduct border security efforts. In September 2013, GAO reported that DHS and CBP used collaborative mechanisms along the southwest border—including interagency Border Enforcement Security Task Forces and Regional Coordinating Mechanisms—to coordinate information sharing, target and prioritize resources, and leverage assets. GAO interviewed participants from the various mechanisms who provided perspective on successful collaboration, such as establishing positive working relationships, sharing resources, and sharing information. 
Participants also identified barriers, such as resource constraints, rotation of key personnel, and lack of leadership buy-in. GAO recommended that DHS take steps to improve its visibility over field collaborative mechanisms. DHS concurred and collected data related to the mechanisms' operations. Further, as GAO reported in June 2014, officials involved with mechanisms along the southwest border cited limited resource commitments by participating agencies and a lack of common objectives. Among other things, GAO recommended that DHS establish written interagency agreements with mechanism partners, and DHS concurred. Lastly, in February 2017, GAO reported that DHS and CBP had established mechanisms to coordinate Predator B UAS operations but could better document their coordination procedures. GAO made recommendations for DHS and CBP to improve coordination of UAS operations, and DHS concurred. GAO recently reported that DHS and CBP could strengthen efforts to assess their use of resources and programs to secure the southwest border. For example, in February 2017, GAO reported that CBP does not record mission data consistently across all operational centers for its Predator B UAS, limiting CBP's ability to assess program effectiveness. In addition, CBP has not updated its guidance for collecting and recording mission information in its data collection system since 2014. Updating guidance consistent with internal control standards would help CBP better ensure the quality of data it uses to assess effectiveness. In January 2017, GAO found that methodological weaknesses limit the usefulness of measures for assessing the effectiveness of Border Patrol's Consequence Delivery System. Specifically, Border Patrol's methodology for calculating recidivism—the percent of aliens apprehended multiple times along the southwest border within a fiscal year—does not account for an alien's apprehension history over multiple years. 
Border Patrol could strengthen the methodology for calculating recidivism by using an alien's apprehension history beyond one fiscal year. Finally, CBP has not developed metrics that systematically use the data it collects to assess the contributions of its pedestrian and vehicle border fencing to its mission. Developing metrics to assess the contributions of fencing to border security operations could better position CBP to make resource allocation decisions with the best information available to inform competing mission priorities and investments. GAO made recommendations to DHS and CBP to update guidance, strengthen its recidivism calculation methodology, and develop metrics, and DHS generally concurred. GAO has previously made numerous recommendations to DHS to improve the function of collaborative mechanisms and use of resources for border security, and DHS has generally agreed. DHS has taken actions or described planned actions to address the recommendations, which GAO will continue to monitor.
Concern over the state of America’s infrastructure—highways, mass transit, rail, aviation, water transportation, water resources, water supply, and wastewater treatment facilities—has become widespread. The nation’s interstate highway system has nearly been completed, but highway, air traffic, and other transportation and environmental problems are mounting. However, the federal budget deficit has made it increasingly difficult to fund infrastructure improvements either directly through federal grants or indirectly through tax exemptions. Consequently, the Congress and the administration have explored additional financing methods—including some that involve America’s pension plans, which were estimated to have over $4 trillion in assets in 1994—to expand federal, state, and local financing of infrastructure projects. Substantial grant funding for infrastructure projects, including highways, wastewater treatment facilities, and mass transit, began between 1956 and 1964. Spending on these programs as a share of total federal spending peaked in the 1970s. But by the late 1970s, the growth of federal infrastructure spending had slowed and continued to slow into the 1990s. According to CBO, the share of all federal spending that was devoted to infrastructure declined from over 5.4 percent in 1977 to less than 3 percent in 1992. For example, in the area of environmental infrastructure, the Congress began to reduce funding for constructing wastewater treatment facilities in the late 1970s and decided in 1987 to phase out federal capitalization grants by 1995. The decline in infrastructure investment as a share of federal spending—coupled with a growing backlog of infrastructure development and repair projects, and federal, state, and local budget deficits—led to a widespread perception of an infrastructure crisis in the 1980s and 1990s. Estimates of how much investment was needed to resolve the crisis varied widely. 
A 1991 Office of Technology Assessment study estimated that federal, state, and local governments spent about $140 billion annually on building, operating, and maintaining infrastructure facilities, but others estimated that $40 billion to $80 billion more was needed each year. However, some experts and economists believe that the infrastructure problem has been overstated. They argue, for example, that the U.S. stock of “public capital” (that is, infrastructure) rose steadily between 1949 and 1991. Some also contend that past spending on infrastructure means less needs to be spent now; the interstate highway system is about 98-percent complete, for instance, and Americans have the highest quality drinking water in the world. The debate over infrastructure investment has been extensively explored in the economic literature. Although there is no consensus on the magnitude of any infrastructure gap, federal, state, and local officials have begun seeking new and innovative ways to finance development for the 1990s and beyond. For example, the Congress included the “toll provisions” in section 1012(a) of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) to allow tolls to be charged on new, reconstructed, or renovated federal highways other than interstates. The revenue streams from the tolls make participating in financing highway projects more attractive to private investors. Another provision of ISTEA, section 1081, established the Commission to Promote Investment in America’s Infrastructure (the Infrastructure Commission) “to conduct a study on the feasibility and desirability of creating a type of infrastructure security” that would attract pension plan investors. Creating securities that would encourage pension plans to invest in public facilities was a novel idea because private pension plans do not generally invest in public projects within the United States. 
Public projects at the state and local levels are commonly financed through bonds for which the interest income is exempt from federal taxation. Tax-exempt bonds pay lower interest rates and, thus, hold down the cost of borrowing for state and local governments while providing a return to investors comparable with the after-tax return of taxable securities. At the same time, to encourage the development and growth of private pensions, the federal government exempts pension plans’ earnings from taxation. However, since plans are subject to fiduciary rules under the Employee Retirement Income Security Act of 1974 (ERISA), which obligate them to seek the highest return (taking risk into account) on their investments, they do not normally invest in lower-yielding, tax-exempt bonds. The bipartisan Infrastructure Commission, which was appointed by the President and congressional leadership, made three major recommendations in its February 1993 report designed to increase institutional investment, including pension plan investment, in infrastructure projects:

Create a National Infrastructure Corporation (NIC) to leverage federal dollars and boost investment in infrastructure projects; NIC would have the capacity to become self-sustaining through user fees or dedicated revenues.

Create new investment options to attract institutional investors, including pension plans, as new sources of infrastructure capital.

Strengthen existing infrastructure financing tools and programs by making federal incentives more consistent and by providing uniform treatment for investment in infrastructure projects. 
Given continuing congressional interest in infrastructure and pension issues, and at the request of the Chairman and the Ranking Minority Member of the House Committee on Transportation and Infrastructure, we initiated a study to identify the role that current federal policies play in providing incentives for private pension plans to invest in infrastructure projects and analyze the Infrastructure Commission’s 1993 proposals relating to pension plan investment to determine how pension plans might respond. In addressing these objectives, we reviewed relevant laws, policies, reports, Infrastructure Commission hearing testimony, and various economic analyses. We also interviewed former Infrastructure Commission officials, corporate executives, government officials, and experts on infrastructure financing or pension plan issues. For further details on our scope and methodology, see appendix I. To ensure the accuracy of our information, we provided a draft of this report to several outside experts, who generally agreed with our findings. We incorporated their technical comments where appropriate. We conducted our review between January 1994 and June 1995 in accordance with generally accepted government auditing standards. Current federal tax and pension policies are inconsistent with the goal of having pension plans invest in infrastructure projects to any significant extent. Fiduciary requirements state that pension plans must invest their assets for the exclusive benefit of their participants by earning the highest risk-adjusted return possible. Federal law also exempts the plans’ earnings from taxation. At the same time, the Internal Revenue Code and current federal grant and revolving fund programs encourage infrastructure project sponsors to finance public projects at lower interest rates through the municipal bond market. As a result, infrastructure projects do not attract pension plan investment. 
However, the Infrastructure Commission did not propose to substantially change the long-standing tax and pension policies, which together translated into more than $60 billion in indirect federal subsidies in fiscal year 1994. Federal law does not prevent private pension plans from investing in infrastructure, but the plans’ investments must meet certain standards. Private pension plan managers may only make investments that comply with various fiduciary standards found in ERISA, Taft-Hartley Act restrictions, Internal Revenue Code provisions, and common law. These fiduciary standards require plan managers to, among other things, carry out their duties with the same care, skill, and diligence as a prudent person. These standards have been interpreted to mean that managers should obtain market-rate returns on their investments. The Department of Labor (DOL), which is responsible for enforcing the fiduciary standards in ERISA, interprets the standards as permitting infrastructure investment. DOL’s Solicitor testified before the Infrastructure Commission that nothing in ERISA’s fiduciary provisions specifically prevents pension plans from purchasing a security created to encourage investment in infrastructure facilities. Specifically, a pension plan must act solely in the interest of the participants and beneficiaries, and for the exclusive purpose of providing benefits and defraying reasonable expenses; act prudently; diversify plan investments; and not engage in certain kinds of transactions that may create conflicts of interest or result in self-dealing. This guidance generally means that private pension plans may invest in infrastructure projects only if the investments offer a rate of return, adjusted for risk, equal to or higher than that of other potential investments. However, a pension plan can, according to the Solicitor’s statement, consider “noneconomic” factors even though it is required to act solely in the interests of its participants. 
For example, DOL advised a pension plan that it could invest in a mortgage pool that included only construction projects built by union labor because the mortgages had to meet rigorous financial criteria, and the investment was competitive with comparable investments available in the marketplace. However, according to the Solicitor’s testimony, DOL has consistently opposed pension plan investments designed to achieve socially beneficial objectives at the expense of yield or security. A key ERISA requirement is that pension plan managers “act prudently.” Specifically, ERISA requires that a fiduciary use the care that “a prudent man acting in a like capacity and familiar with such matters would use in the conduct of an enterprise of a like character and with like aims.” In fact, the Counsel to the Infrastructure Commission said that only large pension plans are likely to want to bear the cost of the “due diligence” work to determine whether an infrastructure investment is prudent. In Interpretive Bulletin 94-1, issued on June 22, 1994, DOL reiterated its position that the fiduciary standards applicable to infrastructure are no different than the standards that apply to other investments. DOL stated that it issued its bulletin because “a perception exists in the investment community” that economically targeted investments (ETIs), including infrastructure, “are incompatible with ERISA’s fiduciary obligations.” The bulletin stated that sophisticated long-term investors, including pension plans, may invest in assets designed to create benefits to third parties in addition to their returns to investors. That would be possible even though less information about the investment may be readily available and the investment may be less liquid, may require a longer time to generate significant investment returns, and may require special expertise to evaluate. Federal tax laws were designed to help lower the costs to states and localities for developing and financing public projects. 
The Internal Revenue Code exempts the earnings on municipal bonds from federal taxation; thus, states and localities financing infrastructure projects can hold down their costs by paying lower interest rates and still attract investors who do not have to pay taxes on the interest they collect. Typically, the interest rate paid on tax-exempt bonds is about 15 to 20 percent lower than that paid on taxable bonds of comparable risk and maturity. Given the high capital costs of some infrastructure projects such as environmental facilities, the interest savings can be considerable. Although tax-exempt bonds help states and localities hold down the cost of public projects, the comparatively low interest rates they pay make them relatively unattractive to pension plans, whose earnings are already tax exempt. The difference between yields on tax-exempt and taxable bonds changes over time, but we found that taxable bonds consistently pay a higher rate of return. CBO, for example, found that the yields on 30-year AAA-rated tax-exempt general obligation bonds (or bonds that are issued for state and local projects) have been an average of 1.4 percentage points lower than those on 30-year Treasury bonds since 1989. Our work shows that the yields on the tax-exempt bonds were lower than those on the Treasury bonds every month during the 11-year period, as shown in figure 2.1. The lower return on tax-exempt municipal bonds means that the securities cannot compete effectively for pension plan assets. Consequently, infrastructure developers have little incentive to seek financing from private pension plans, and private pension plans have little incentive to seek infrastructure investment opportunities. Only 0.1 percent of the assets held in private pension plans were invested in such securities at the end of 1992, according to CBO. Federal agencies assist infrastructure projects through programs that involve direct, as well as indirect, federal expenditures. 
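The arithmetic behind this yield comparison can be illustrated with a small, purely hypothetical calculation. The rates below are illustrative examples chosen by us, not figures from this report; the point is only that a taxable investor can be indifferent between the two bonds while a tax-exempt pension plan never is:

```python
# Illustrative sketch: why tax-exempt municipal bonds are unattractive
# to pension plans, whose earnings are already tax exempt.
# All rates below are hypothetical examples, not figures from the report.

def after_tax_yield(taxable_yield: float, marginal_tax_rate: float) -> float:
    """Return a taxable investor keeps, after tax, on a taxable bond."""
    return taxable_yield * (1.0 - marginal_tax_rate)

def breakeven_tax_rate(taxable_yield: float, tax_exempt_yield: float) -> float:
    """Marginal tax rate at which an investor is indifferent between a
    taxable and a tax-exempt bond of comparable risk and maturity."""
    return 1.0 - tax_exempt_yield / taxable_yield

taxable = 0.080      # hypothetical taxable bond yield (8.0%)
tax_exempt = 0.066   # hypothetical municipal yield, 17.5% lower

# A taxable investor in the 28% bracket keeps about 5.76% on the taxable
# bond, so the 6.6% tax-exempt bond is competitive for that investor.
print(round(after_tax_yield(taxable, 0.28), 4))           # 0.0576

# Any investor taxed above the breakeven rate prefers the muni;
# a pension plan's effective rate is zero, so it compares the muni's
# 6.6% directly against the full 8.0% and chooses the taxable bond.
print(round(breakeven_tax_rate(taxable, tax_exempt), 3))  # 0.175
```

This is the disincentive the report describes: the subsidy embedded in the tax-exempt yield is worthless to an investor who pays no tax to begin with.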
We reviewed several recent initiatives that used federal funds to help infrastructure developers obtain financing from traditional sources, such as tax-exempt bonds. Actions taken to increase private investment in infrastructure have included establishing the State Water Pollution Control Revolving Fund Program under the 1987 amendments to the Clean Water Act to leverage, and eventually replace, federal capitalization grants; permitting states to lend federal funds to toll road projects under section 1012(a) of ISTEA in 1991; and issuing Executive Order 12893, Principles for Federal Infrastructure Investments, on January 26, 1994, which directed federal agencies to seek private sector participation in their infrastructure programs. These initiatives generally use the tax-exempt market and do not target pension plan investment. The tax exemptions for private pension plans and for bonds that finance infrastructure projects involve large, indirect federal subsidies in the form of foregone federal revenue, referred to as a “tax expenditure.” Recent estimates show that exempting pension plan earnings from taxation resulted in foregone revenue of about $48.8 billion in fiscal year 1994. Nearly $12 billion in revenue was foregone by subsidizing interest on tax-exempt bonds. Although current policies are costly and have the effect of discouraging pension plans from investing in public projects, the Infrastructure Commission did not propose changing these basic federal laws substantially. However, research suggests that it might be less costly to pay interest subsidies directly to state and local governments than it is to exempt state and local bonds from taxes. Paying interest subsidies directly to state and local governments might also eliminate the disincentive for pension plans to invest in state and local government bonds because the bonds would presumably pay investors competitive interest rates. 
The Infrastructure Commission recognized, though, that both the tax exemption for pension plan contributions and the tax exemption for municipal bonds are long-standing federal laws. The Congress tasked the Infrastructure Commission with conducting a study on the “feasibility and desirability of creating a type of infrastructure security to permit the investment of pension assets in funds used to design, plan and construct infrastructure facilities in the United States,” including examining other methods of encouraging public and private investment in infrastructure facilities. In short, the focus of the Infrastructure Commission’s inquiry was on developing a new investment instrument that could attract private money, with a particular concentration on pension plans. The Congress has not acted on the proposals contained in the Infrastructure Commission’s 1993 report, although a bill based mainly on the proposals was introduced in the 103rd Congress. The Infrastructure Commission found that there is a significant need to facilitate investment in the repair, renewal, and development of domestic infrastructure. The Infrastructure Commission’s report argued that budgetary constraints will prevent federal, state, and local governments from increasing either grant expenditures or tax subsidies sufficiently to eliminate the nation’s projected shortfall in infrastructure investment. The Infrastructure Commission’s Counsel told us that there is a “limited appetite” for state and local tax increases to pay for roads, environmental facilities, and other infrastructure projects. Furthermore, the Infrastructure Commission’s report noted that the legal limits on federal tax subsidies for municipal bonds, issued to finance projects that involve private sector participation, constrain the availability of financing and increase the cost of financing such projects. 
As grant funds from the federal government decrease, states and localities need to find new ways to leverage their limited resources. The Infrastructure Commission’s 1993 report recommended that the Congress establish two new corporations to provide credit assistance and insurance to state and local issuers of debt to finance infrastructure. The Infrastructure Commission recommended the establishment of a National Infrastructure Corporation (NIC) that would purchase and bear the credit risk of obligations issued to finance transportation and environmental facilities, including both governmental and public-private sponsored projects. NIC would also insure project sponsors against a portion of the risk of developing new facilities. Also, the Infrastructure Commission recommended the establishment of an Infrastructure Insurance Corporation (IIC), initially an NIC subsidiary, that would insure infrastructure bonds. In general, the Infrastructure Commission’s proposals are aimed at providing credit assistance to public and private sponsors seeking financing for infrastructure projects. Borrowers may have difficulty securing financing because their projects may be judged as too risky given the rate of return they promise to investors. When an entity assumes some of this risk (that is, by providing credit assistance or bond insurance), the investment becomes more attractive to the investor. At the same time, the state or locality seeking to obtain financing may do so on more favorable terms. The ability to bring borrowers and investors together by having an entity assume risk may involve the provision of a subsidy on the part of the federal government. The Infrastructure Commission proposed that NIC and IIC provide three forms of credit assistance. First, NIC would purchase “subordinated” bonds sold by state and local governments to finance new infrastructure projects. 
The payment on these bonds would be legally subordinated to, or not due before, payments on the remainder of the debt, called “senior” debt. The subordinated debt purchased by NIC typically would be for projects that are not eligible for investment-grade credit ratings (ratings BBB and above) in the marketplace. Second, NIC would insure private firms against a portion of the risk associated with developing new facilities, such as the risk of environmental lawsuits and voter disapproval of the issuance of bonds to provide long-term financing. NIC would be legally obligated to cover up to 70 percent of any losses incurred by developers if the projects were never completed. Third, IIC would bear a portion of a project’s credit risk by insuring or reinsuring senior infrastructure bonds. IIC would insure or reinsure only those bonds that private municipal bond insurers would not insure or that could not obtain other forms of credit enhancement, such as a bank letter of credit. It was also proposed that in the long run, NIC would purchase senior infrastructure bonds, including bonds insured by IIC. The Infrastructure Commission proposed that the federal government initially capitalize NIC and IIC through a grant of $1 billion per year over 5 years. Later on, NIC would raise additional funds by issuing debt to the public and creating and selling securities backed by the infrastructure bonds that it had purchased (that is, providing securitization). The Infrastructure Commission’s report did not specify the legal and organizational status of NIC. It noted that NIC’s ability to borrow from the public would benefit from a “limited line of credit” from the U.S. Treasury, but it did not foresee a need for a “full faith and credit guarantee” from the federal government. 
IIC would be established initially as an NIC subsidiary and would operate as a private corporation similar to the College Construction Loan Insurance Association (Connie Lee)—a private, for-profit municipal bond insurer that insures bonds for construction at institutions of higher learning and teaching hospitals. The Infrastructure Commission noted that pension plans historically have not participated in financing infrastructure in the municipal bond market because of their tax-exempt status as well as the relative complexity of infrastructure credit. The Infrastructure Commission identified three options to encourage pension plans to participate:

Pension plans could invest in the equity of the proposed bond insurer, IIC.

Pension plans could buy taxable project debt insured by IIC or purchase securities directly issued by NIC.

Pension plans could act as lenders directly funding taxable project debt through purchasing public benefit bonds.

The first option would involve pension plans by having them provide capital to start up IIC. Since it is assumed IIC would generate a revenue stream of its own through fees paid by those seeking insurance for their bonds, the pension plans could earn a return. The size of such a return is not clear. Furthermore, the experience with Connie Lee suggests that those providing capital typically would invest only modest amounts. Thus, the potential equity participation in IIC by pension plans is likely to be of limited magnitude. The second option involves two parts. First, pension plans could directly purchase taxable project debt insured by IIC. Second, NIC could use its capital to purchase taxable project debt, some of which may have been insured by IIC. When a large volume of debt has been acquired, NIC could create a new security backed by the project debt that would then be sold to the market with NIC’s guarantee. 
It is thought that this security would create a secondary market for project debt and reduce the risks of investing in specific project debt. Since it is presumed that this security would offer a competitive, taxable, market rate and be more liquid than specific project debt, pension plans might be attracted to it. Pension plans might, for example, support pollution control projects that are not eligible for tax-exempt financing because they benefit private businesses. However, this could occur only after NIC had developed a quality portfolio of loans, a precondition to issuing its own debt or securitizing its loans. In its third option for increasing pension plan investment in infrastructure, the Infrastructure Commission recommended modifying federal tax law to allow all or part of the earnings on a municipal or “public benefit” bond to be distributed tax free upon retirement to workers who participated in defined contribution pension plans, such as 401(k) plans and individual retirement accounts (IRA). Defined contribution plan participants would be willing to invest in such a bond because its after-tax rate of return would be comparable to a taxable market return. This would allow the localities issuing the bonds to finance projects at rates comparable to those in the municipal bond market while attracting direct investment from pension plans. The Infrastructure Commission’s Secretary told us that he considers the public benefit bond proposal to be the “cornerstone of the Infrastructure Commission’s proposals related to pension plans.” However, the Infrastructure Commission’s Executive Director said that much of the investment in infrastructure would come from public, and perhaps union, pension plans—not from corporate pension plans. He estimated that about 1 percent of U.S. pension plan assets might ultimately be invested in infrastructure. 
One approach to evaluating the proposals is to examine the economic justification for an expanded federal role in establishing entities and incentives to entice pension plan investment into infrastructure. We reviewed a number of economic analyses related to the justification for a federal role, and a discussion is contained in appendix II. Here, we summarize that discussion, focusing on a recent CBO analysis that specifically addressed the Infrastructure Commission’s proposals. CBO made the following key points regarding the Infrastructure Commission’s proposals: The premise that greater investment in infrastructure will increase overall economic output is questionable, and only a few projects that would be supported through NIC would have returns higher than alternative private investments. The proposed new federal incentives may result in further distortion of investment choices by displacing other investments, which in turn could result in economic inefficiency. The municipal bond market already receives a large federal subsidy and is generally considered to be functioning well. The market imperfections that affect the municipal bond market were not addressed by the Infrastructure Commission’s proposals. How the impact of new federal financial entities can be measured and controlled depends importantly on the organizational form of the proposed corporations. Establishing NIC as a government-sponsored enterprise carries high risk (through contingent liability) to the federal government. Concerning the specific incentives for pension plan involvement mentioned earlier, CBO made several additional points. With regard to pension plans investing in IIC’s equity, CBO noted that the proposed IIC has little justification on efficiency grounds since the municipal bond insurance industry appears to be competitive. Hence, this first avenue for pension plans may not be necessary or attractive. 
The second avenue—having pension plans buy debt securities issued directly by NIC—is problematic because infrastructure projects are heterogeneous and, thus, are not likely to be easy to pool into securities to create a secondary market. The third avenue for pension plan investment—having pension plans invest directly in funding infrastructure project debt through public benefit bonds—also was questioned by CBO. It noted that the public benefit bonds might subsidize projects that are not really public in nature. This provision might circumvent the legal restrictions put in place during the 1980s to prevent the excessive use of tax-exempt municipal debt to finance private activities that were crowding out state and local spending and raising costs for public projects. CBO also noted that administrative costs may be associated with implementing a public benefit bond that gives a tax break to pension plan participants. Internal Revenue Service regulations would have to be put in place to require individuals to separate income from investing in infrastructure securities from other asset income. The CBO analysis concluded that the interaction of existing federal tax subsidies for pension plans and municipal bonds is the main cause of the low level of direct investment in infrastructure. It noted that existing subsidies for municipal debt could be reduced and that this could induce pension plans to invest in infrastructure. A similar effect could result from taking away the tax exemption for pension plans. However, the Infrastructure Commission did not advocate either approach. In reviewing other analysis and commentary on the Infrastructure Commission’s proposals, we found substantial skepticism among economists as well (see app. II). The basic view was that there seems to be little reason to put new incentives in place to reallocate capital from its existing uses. 
Moving beyond the economic analysis framework implies that the issue becomes one of competing political values about how to allocate resources. In our discussions with Infrastructure Commission officials, it was noted that the rationale for the proposals was based more on “policy” considerations than on a strictly economic justification. In this regard, the Infrastructure Commission’s rationale seems more rooted in the view that capital should be reallocated to public investment (in infrastructure), and the proposed entities and incentives are justified as an effort to implement that objective. This means that the justification for an expanded government role to encourage investment in infrastructure can depend on the values expressed by the voters. In other words, if individuals perceive a problem with infrastructure and want it to be addressed, then government may be chosen as the means to meet this demand. The institutions and public processes for making infrastructure investment decisions should then be the focus of analysis and debate. Private pension plan managers and financial market experts we spoke with confirmed that private pension plans are not active investors in the domestic infrastructure finance market for many of the reasons that the Infrastructure Commission and others cited. For example, they noted the lack of available investment opportunities at competitive rates of return. Some market participants suggested a role for defined contribution pension plans and the desirability of finding “niches” for pension plan investment in infrastructure. Their points seem broadly in line with some of the proposals that the Infrastructure Commission made. Other market participants expressed concern about efforts to induce a reallocation of pension capital. Some pension plan managers were concerned that pressures to invest in infrastructure or other ETIs would ultimately affect their ability to comply with their fiduciary responsibilities under ERISA. 
They believed that even DOL’s Interpretive Bulletin 94-1 (which states that the selection of an ETI will not violate ERISA rules if the general fiduciary standards are met) did not provide any new information and that DOL’s interpretation would simply subject the private pension plans to a higher level of government scrutiny. Other experts noted there are alternative mechanisms that do not involve pension plans but may help increase infrastructure investment. Earlier, we discussed the fundamental disincentive for pension plan investment in infrastructure that results from the tax-exempt status of the plans and the use of tax-exempt municipal bonds as a common vehicle to finance infrastructure. The financial return to the pension plan is simply too low, which leads to concerns, as the Counsel to the Infrastructure Commission noted, in evaluating whether these investments meet fiduciary standards. In our discussions with market participants, other disincentives for pension plan involvement were noted. In many respects, the Infrastructure Commission’s proposals attempt to respond to these concerns. While pension plan managers told us that they are unlikely to invest in infrastructure projects, one investment manager noted that he could consider including infrastructure projects in an “alternative investment portfolio” if the projects provide a rate of return that is competitive with taxable securities. However, he believed that managers of alternative portfolios are still unlikely to invest pension plan assets in infrastructure projects because alternative investments, such as venture capital and foreign securities, are available. At any rate, alternative investment portfolios are small. Project finance experts told us that infrastructure projects are often large and complex, with long development and construction phases. These types of projects are not standardized and, thus, are difficult to make into securities, in contrast to home mortgages. 
Unlike the pooling of home mortgages into mortgage-backed securities, the pooling of infrastructure projects has no track record that investors can assess. That situation is not advantageous to pension plans, particularly in light of their fiduciary requirements concerning safety, liquidity, and yield. It is difficult for investors to estimate the rate of return or risk of a proposed infrastructure project. The chief investment officer of a communications firm that has a large pension plan told us that the pricing of public infrastructure projects is driven not by the cost of capital but by political considerations. Predicting long-term cash flows is, therefore, difficult. In addition, a project developer who served as Secretary of the Infrastructure Commission said that he had been unable to attract pension plans to proposed projects because of the difficulty in estimating the rate of return. Proposed projects have not yet demonstrated competitive returns. Our discussions with market participants highlighted several key points regarding pension plan involvement in infrastructure. One is that pension plans seek diversification in their portfolios but must have investments that provide competitive returns within fiduciary standards. In addition, liquidity and standardization of investments are important. Securities backed by infrastructure projects would not possess these characteristics, and information about them would be difficult to use in evaluating securities for future projects. Even if these problems could be overcome, the potential amount of pension capital that could be invested in infrastructure is probably significantly smaller than the vast pool of available capital sometimes suggested. Defined contribution pension plans (primarily 401(k) plans) represent the fastest-growing portion of pension plan assets. These plans, managed by the mutual fund and life insurance industries, represent a potential source for financing infrastructure. 
According to data that an investment firm provided to the Infrastructure Commission, private defined contribution plan assets represented almost 47 percent of all privately sponsored pension plan assets in 1991. According to the chief economist at a private bond-rating agency, the managers or the participants of these plans might not want to invest in infrastructure bonds. However, if the same bonds were sold as part of a “government bond fund,” they might buy them, he said, because the public thinks of government bonds as safe investments. Also, investing a small portion of their assets in infrastructure might help diversify risks in a defined contribution pension plan’s portfolio if the value of infrastructure assets rises when the value of other assets is falling. One mutual fund industry lawyer told us that defined contribution pension plans cannot be expected to finance infrastructure projects since they do not “pass through” the tax advantage to beneficiaries. Also, infrastructure investments may lack the liquidity that defined contribution plans require in order to repay beneficiaries. These views suggest a possible role for the Infrastructure Commission’s proposed public benefit bond, which would pass through tax benefits to retirees. An advantage of tapping defined contribution assets is that a portion of these funds is self-directed by workers and, hence, workers can voluntarily choose to invest these funds in infrastructure securities. Since pension plans, by virtue of their tax-exempt status, would prefer fully taxable bonds over tax-exempt bonds, the way to entice them into funding public projects, some experts suggested, is to find niches where their capital can be put to fruitful use at competitive rates of return. Projects that meet this criterion might include “stand-alone” toll roads constructed by private developers for which there is adequate demand and that can generate identifiable revenue streams. 
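The yield disadvantage behind this preference can be sketched with a simple calculation. The taxable yield and the investor tax bracket below are hypothetical illustrations, not figures from this report:

```python
# Hypothetical figures for illustration only (not from the report):
TAXABLE_YIELD = 0.08      # assumed yield on a comparable taxable bond
MARGINAL_TAX_RATE = 0.30  # assumed tax bracket of a taxable investor

# A taxable investor keeps only the after-tax yield, so a municipality can
# sell tax-exempt bonds at roughly that lower rate and still find buyers.
muni_yield = TAXABLE_YIELD * (1 - MARGINAL_TAX_RATE)

# A tax-exempt pension plan keeps the gross yield either way, so holding
# the municipal bond simply forgoes the difference.
forgone = TAXABLE_YIELD - muni_yield

print(f"Tax-exempt municipal yield: {muni_yield:.1%}")    # 5.6%
print(f"Taxable bond yield:         {TAXABLE_YIELD:.1%}")  # 8.0%
print(f"Yield a pension plan forgoes on the muni: {forgone:.1%}")  # 2.4%
```

Under these assumed rates, a tax-exempt plan gives up about 2.4 percentage points by holding the municipal bond, which is the disincentive the public benefit bond proposal attempts to offset by passing the tax break through to plan participants.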
Furthermore, according to the Infrastructure Commission’s Secretary, toll roads that lacked a track record when they were initially financed could attract pension plans when they are refinanced based on a history of generating revenue. Economic research suggests that user fees could provide a revenue stream to finance a large share of many public works facilities such as transportation, water supply, wastewater treatment, and solid and hazardous waste systems. Because these facilities largely serve identifiable consumers, their use can be measured and priced, and the beneficiaries can be charged directly for the cost of services. If financing is linked to use, revenue can become steadier and more predictable, encouraging better maintenance, rehabilitation, and replacement. Unless these niches are found, pension plans will not be significant investors in America’s infrastructure, according to a managing director of a municipal bond-rating firm. Pension plans are not investing in infrastructure because “the deals aren’t out there.” One way to identify a niche, the managing director suggested, would be to establish a pilot program on one kind of project, such as highways or mass transit, so that pension plan managers would “learn to walk before they run.” Once these niches are found, then government incentives—such as the development of industrywide standards for evaluating projects, and tax credits—might help attract private capital. A transportation consultant said that any governmental actions to encourage investment in infrastructure must move in the direction of assisting the private sector’s efforts in infrastructure investment, establishing standards for project evaluation, and pooling and securitizing private sector projects. We were also told that while government guarantees, by reducing risk, enable state and local governments to obtain lower interest rates, they also lower the return to investors. 
The view that niches must be found suggests the need for exploring alternative financing schemes but also seems to be broadly consistent with the notion of a government corporation that would work with localities to find projects and help make them attractive to pension plans. Creating NIC and IIC to attract pension plan capital to finance public projects may not be necessary, according to the pension plan representatives and experts we met with. They pointed to the vast market for privately insured tax-exempt municipal bonds that already exists for financing such projects. Other concerns included the potential for increased federal direction or scrutiny of pension plan investments. Pension plan managers are concerned about the possibility that the federal government will mandate certain investments, or classes of investments, said a managing director of a financial services company that manages assets for large corporate pension plan clients. In addition, partners in a global investment management firm that advises pension plans told us that they believed DOL’s interpretive bulletin will subject private sector pension plans to a higher level of government scrutiny than they receive now. They believed that pension plan managers will become more circumspect about the possibility of investment losses in ETIs. State and local officials are unlikely to seek investment from private pension plans because it would increase their cost of borrowing, the chief economist of a bond-rating agency told us. An official of a private bond insurance company noted that NIC’s insurance proposals would also be more costly than private insurance. Instead, she recommended that projects obtain low-interest loans and grants from the federal government. In financing infrastructure projects, pension plans would require a competitive rate of return as well as a government guarantee backed by the full faith and credit of the federal government. 
Thus, pension plan capital would be expensive for states and localities to borrow. Municipalities could finance their projects more efficiently by improving their bond credit ratings instead of relying on government guarantees or pooling of assets, according to a managing director responsible for municipal bond ratings. In this view of the capital markets, plenty of capital is available to finance creditworthy infrastructure projects. Moreover, the bond insurance function envisioned for IIC would compete with the functions currently being performed by private insurance companies. One insurance industry official stated that IIC could not do any more than the private insurance industry does. The managing director of a bond-rating company told us that IIC may not significantly increase infrastructure investment by offering bond insurance because municipal bonds are already privately insured. A project finance expert at an investment bank told us that by offering insurance to projects with more risks than the private sector would normally accept, IIC would encourage the development of financially infeasible projects. We found that some market participants and experts were skeptical about the need for the government to intervene by creating NIC to reallocate capital. They were not sure whether the complex NIC mechanisms would work in the marketplace and questioned whether specific incentives to attract pension plans are the best way to spur infrastructure investment. They pointed out that other mechanisms for infrastructure financing already exist, such as municipal bonds, state revolving funds, user fees, and private bond insurance for creditworthy projects. In transportation finance, for example, some pointed out that ISTEA could be amended to allow states to create state revolving fund loans or to provide credit enhancement (such as guaranteeing local government bonds) with federal highway money. 
Some state officials and industry experts remain skeptical about the viability of state transportation revolving funds. One concern, for example, is whether even densely populated areas will generate many revenue-bearing projects with the capacity to repay loans. Despite these concerns, however, state transportation revolving funds could serve as an alternative to NIC and IIC, and may help expand infrastructure investment if key barriers to their effectiveness can be overcome. There has been long-established general agreement on the need for infrastructure to be funded by tax dollars and on a federal role in supporting infrastructure projects. Nevertheless, debate continues on the amount of infrastructure investment that is needed, the role of infrastructure in fostering future economic growth and higher productivity, and the appropriate degree of federal involvement. There is also a continuing effort to explore innovative and efficient ways to finance projects. The idea of attracting a portion of pension plan capital to infrastructure investment has become popular, and the proposals of the Infrastructure Commission offer an ambitious attempt to put that idea into practice. Our review of the Infrastructure Commission’s proposals suggests that they could play a role in encouraging more infrastructure investment by pension plans. The government can foster marketplace innovations and has done so in the past. But this comes at a cost to the federal government. We found strong reservations among economists and market participants about the need for new federal entities and subsidies to encourage a reallocation of pension capital when significant existing tax subsidies discourage pension plans from investing in infrastructure projects. If the primary goal is increasing infrastructure investment, then there are many ways, including initiatives currently under way, to address this goal. 
Encouraging localities to make projects more creditworthy through the provision of adequate revenue streams could foster investment that is more in line with the demand from the public. Techniques to leverage grants at the state level seem promising. Establishing NIC and IIC might aid these efforts, but the evidence is insufficient to conclude that such entities are required to attain an adequate level of infrastructure investment. The goal of attracting pension capital to infrastructure is problematic. Advocates for new federal entities to encourage direct pension capital flows to infrastructure recognize that the subsidies accorded to pension plans and municipal bonds are well established and serve important policy objectives. Advocates also need to recognize that fostering significant pension plan investment in infrastructure would probably require a reevaluation of existing tax policy.
Pursuant to a congressional request, GAO provided information on the role that pension plans play in expanding public investment in infrastructure projects. GAO found that: (1) pension plans have not been investing in domestic public infrastructure because of the combined effects of federal law, which requires plans to seek the highest rate of return on investments and encourages growth by exempting earnings from taxation; (2) to encourage public investment in infrastructure, federal law provides a tax exemption on interest income to those who invest in municipal bonds; (3) pension plans have no incentive to invest in lower-interest municipal bonds, since plan earnings are already tax exempt; (4) although the Infrastructure Commission recommended creating two federal financing entities to attract pension plans to invest, the share of plan assets that might go to infrastructure would likely be small; and (5) the federal capitalization of state revolving funds may be an option to expand infrastructure investment without relying on pension plans.
Charter schools are public schools established under contracts that grant them greater levels of autonomy from certain state and local laws and regulations in exchange for agreeing to meet certain student performance goals. Charter schools are often exempt from certain state and school district education laws and in some states may receive waivers or exemptions from other laws; however, charter schools must comply with select laws, including those pertaining to special education, civil rights, and health and safety conditions. While charter schools are free from many educational regulations, they are accountable for their educational and financial performance, including the testing requirements under the No Child Left Behind Act. A wide range of individuals or groups, including parents, educators, nonprofit organizations and universities, may apply to create a charter school. Charter schools are typically nonprofit organizations and, like other nonprofits, are governed by a board of trustees. The board of trustees, which is initially selected by the school founders, oversees legal compliance, financial management, contracts with external parties, and other school policies. School trustees are also responsible for identifying existing and potential risks facing the charter school and taking steps to reduce or eliminate these risks. Charters to operate a school are authorized by various bodies, depending on the state’s laws, but may include local school districts, municipal governments, or special chartering boards. According to a GAO survey, about half of the charter school states and the District of Columbia allowed more than one authorizer, providing charter school founders an opportunity to find support for a wider range of instructional approaches or educational philosophies than might be possible with a single authorizer. 
Many charter school authorizing bodies have formal procedures to monitor charter school performance in areas such as student performance, compliance with regulations, financial record keeping, and the provision of special education services. If charter schools do not meet expected performance measures, authorizers may revoke a school’s charter or decide not to renew the charter when it expires, resulting in the charter school’s closure. Since the first charter school opened in Minnesota in 1992, about 350 charter schools—of the approximately 3,700 that opened—have closed as of April 2005. The D.C. School Reform Act, a federal law that applies only to D.C., designated two charter school authorizers—the D.C. Board of Education (BOE) and the D.C. Public Charter School Board (PCSB). Both authorizers have similar responsibilities but are structured differently. While the PCSB was created as an independent board with the sole purpose of approving and overseeing charter schools, the BOE oversees charter schools in addition to the 167 traditional public schools, which enrolled about 59,000 students in the 2004-2005 school year. To manage its oversight responsibilities for both traditional public schools and charter schools effectively, the BOE created an internal Office of Charter Schools to carry out its functions as an authorizer. In fiscal years 2003 and 2004, the BOE generally determined how much local funding to allocate to each of the Board’s functions, including charter schools, while Congress determined the level of PCSB’s local funds through its D.C. Appropriations Act. In addition to the two authorizers, several D.C. offices have responsibilities related to the District’s charter schools, including the D.C. Inspector General, the D.C. Auditor, the D.C. Chief Financial Officer, the Mayor’s State Education Office, and the State Education Agency, which is part of the D.C. Public School system (see fig. 1). The D.C. 
School Reform Act allows the BOE and the PCSB to grant up to 10 charters each per year. Each charter authorizes a school for 15 years, at the end of which the charter may be renewed if the authorizer approves. To date, no school has reached the end of its 15-year term. After granting charters to schools, each authorizer is responsible for monitoring those schools. Under the D.C. School Reform Act, the BOE and PCSB are required to monitor charter schools’ academic achievement, operations, and compliance with applicable laws. Both authorizers conduct pre-opening visits to new schools and subsequently conduct annual monitoring visits and data reviews to meet this requirement (see fig. 2). All schools granted a charter in D.C. must create an accountability plan that outlines the school’s 5-year academic goals. Accountability plans become part of each school’s charter and are used as guides for the authorizers to monitor academic progress. Additionally, under the D.C. School Reform Act, the D.C. authorizers must conduct more comprehensive reviews of charter schools every 5 years to determine if the schools should be allowed to continue operating. Charter schools that are not meeting academic performance goals may be closed following a 5-year review. Charter schools may be closed for other reasons, such as financial mismanagement or legal noncompliance, at any time. As we noted in our May 2005 report, the BOE first began chartering schools in 1996, and the PCSB chartered its first schools in 1997. As of the 2004-05 school year, 23 BOE and 27 PCSB charter schools had opened. However, between 1998 and September 2005, nine charter schools closed. The BOE revoked seven charters, and two PCSB charter schools closed; one voluntarily relinquished its charter, and the other had its charter revoked at the end of the 2004-2005 school year. Financial reasons contributed to the closing of most of the schools that had their charters revoked. 
During the 2004-05 school year, 16 BOE schools and 26 PCSB schools were in operation. As of January 2005, BOE charter schools enrolled 3,945 students, and PCSB charter schools enrolled 11,555 students. The two D.C. authorizers monitored a diverse set of schools (see table 1). These schools enrolled students at all grade levels, from pre-kindergarten to high school, and offered varied instructional and academic models. For example, some schools had a particular curricular emphasis, such as math and science, art, or foreign language, while other charter schools focused on specific populations, such as students with learning disabilities, students who have dropped out or are at risk of doing so, youth who have been involved in the criminal justice system, and adults. Additionally, the charter schools pursued a variety of school-specific goals that were aligned with their missions or the student populations they served. The D.C. School Reform Act requires the authorizers to monitor schools’ annual and 5-year progress toward these goals. Some examples of goals included in school charters are improved attendance rates and increased parental satisfaction. Other goals varied widely. For example, Maya Angelou, a high school serving at-risk youth, included among its 5-year goals both an 85 percent graduation rate and a significant reduction in violent behavior by students. JOS-ARZ, a high school serving students with emotional and behavioral problems, included as a goal that at least half of its students would acquire skills that would allow them to function independently and would earn a high school diploma or the equivalent. D.C. charter schools receive funding from a wide range of sources. Charter schools in D.C. receive funding on a per-pupil basis using the same allocation formula for operating expenses that is applied to traditional D.C. public schools. In the 2004-2005 school year, charter and traditional public schools in D.C. 
received $6,904 to $8,077 for a regular education student, depending on grade level. D.C. charter schools received an additional allotment—equal to $2,380 per non-residential student and $6,426 per residential student—to help cover the cost of school facilities. In addition to the per-pupil allotments, charter schools in D.C., like all public schools that meet federal criteria, are eligible for other federal funds, such as funding under the No Child Left Behind Act and the Individuals with Disabilities Education Improvement Act. A few charter schools also receive additional funding from foundation grants and other fundraising efforts. The two D.C. authorizers’ revenue, staff, and use of available D.C. services differed, but the authorizers spent their funds on similar activities. The BOE Office of Charter Schools had less revenue, oversaw fewer schools, and had fewer staff than the PCSB. To help fulfill its oversight responsibilities, the BOE Office of Charter Schools occasionally called upon D.C. agencies for financial operations reviews and used some D.C. Public Schools services. The PCSB, which oversaw more schools, had total revenue that was more than twice that of the BOE Office of Charter Schools and employed a larger staff. The PCSB also received more revenue per school than the BOE Office of Charter Schools. Unlike the BOE Office of Charter Schools, the PCSB did not use D.C. Public Schools services, which were available to it, but did use D.C. government services on one occasion. Despite these differences, both authorizers used their financial resources similarly. Both spent most of their fiscal year 2004 funds on board operations, including personnel costs, and their remaining funds on consultants to help with monitoring, application review, and school closures. 
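As a rough sketch of how the per-pupil allotments described earlier combine for a single school, the following uses the report's 2004-2005 figures; the enrollment mix is hypothetical, and the two operating rates stand in for the full grade-level schedule:

```python
# Per-pupil figures from the report (2004-2005 school year). The enrollment
# mix below is hypothetical, and only the two endpoints of the operating
# range ($6,904-$8,077) are used in place of the full grade-level schedule.
OPERATING_PER_PUPIL = {"elementary": 6_904, "high_school": 8_077}
FACILITIES_NONRESIDENTIAL = 2_380  # per non-residential student
FACILITIES_RESIDENTIAL = 6_426     # per residential student

def school_allotment(enrollment: dict[str, int], residential: bool = False) -> int:
    """Operating plus facilities allotment for one school."""
    facilities = FACILITIES_RESIDENTIAL if residential else FACILITIES_NONRESIDENTIAL
    return sum(count * (OPERATING_PER_PUPIL[band] + facilities)
               for band, count in enrollment.items())

# Hypothetical non-residential school: 150 elementary and 100 high school students.
total = school_allotment({"elementary": 150, "high_school": 100})
print(f"${total:,}")  # → $2,438,300
```

This illustrates why the administrative fees the authorizers charge, being student-based, scale with enrollment: each additional student carries both an operating and a facilities allotment.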
The BOE Office of Charter Schools received less funding from its two main sources of revenue—the local funds allocated to it by the BOE and the administrative fees it was permitted to charge the schools it oversaw—than the corresponding amounts received by the PCSB (see table 2). The BOE Office of Charter Schools received $307,340 from the BOE in fiscal year 2004, less than half the amount of local funds the PCSB received in accordance with congressional directives, and collected $251,623 in fees from the schools it oversaw. The BOE Office of Charter Schools collected less in fees from schools than the PCSB, because these fees are based on the number of students per school, and the PCSB oversaw more schools with more students. In total revenue, the BOE Office of Charter Schools received approximately $38,000 per school in fiscal year 2004. In fiscal years 2003 and 2004, the BOE Office of Charter Schools had three staff, including its Executive Director, to oversee its schools, less than a third of the PCSB’s staff. In October 2005, the new budget for the BOE made possible the hiring of three new staff for its Office of Charter Schools. The BOE Office of Charter Schools supplemented its staff by using consultants to help oversee charter schools. In fiscal year 2004, the Office of Charter Schools spent $121,502 on consultants in areas such as reviewing applications, conducting annual monitoring visits, and assisting with school closings. Consultants with issue area expertise also provided specialized assistance associated with monitoring schools’ financial condition and special education compliance. The BOE Office of Charter Schools spent $28,589 per school on both in-house personnel and consultants in fiscal year 2004, which was 16 percent less than the PCSB spent. In addition to using consultants, the BOE Office of Charter Schools augmented its financial and staff resources by leveraging services provided by other D.C. government agencies. 
On four occasions, the BOE Office of Charter Schools referred schools to the D.C. Auditor or the Office of the Chief Financial Officer for financial operations reviews. For example, in 2002, the BOE Office of Charter Schools asked the Office of the Chief Financial Officer to review the internal controls of one school. Additionally, the BOE Office of Charter Schools referred one school to a special interagency team that included the D.C. Department of Mental Health and the D.C. Child and Family Services Agency, to review the level of services provided to children. For the past 2 years, D.C. Public Schools has provided the BOE Office of Charter Schools with some test score analysis and school performance data, which the office used to determine if its charter schools were in compliance with No Child Left Behind Act requirements. In fiscal years 2003 and 2004, the PCSB received more revenue, had a larger staff and oversaw more schools than the BOE Office of Charter Schools. As an agency independent from the D.C. Board of Education, the PCSB received $660,000 in local funds in fiscal year 2004, as required by Congress. Additionally, the PCSB collected over $500,000 in fees from schools that year and in total received $61,360 per school. The PCSB used its revenue, which was more than double that of the BOE Office of Charter Schools, in part to employ nine people to oversee 22 charter schools in fiscal year 2004. The PCSB also had other revenue sources that the BOE Office of Charter Schools did not, such as grants and interest income. Like the BOE Office of Charter Schools, the PCSB supplemented its staff by using consultants. In fiscal year 2004, the PCSB spent $134,756 on consultants to help with charter school oversight, such as reviewing applications, conducting annual monitoring visits, and assisting with school closings. PCSB consultants also reviewed schools’ financial conditions and special education compliance. 
The PCSB spent more than its counterpart—$33,897 per school—on both in-house personnel and consultants. The PCSB used fewer of the D.C. government services available to the authorizers than the BOE Office of Charter Schools did. Unlike the BOE Office of Charter Schools, the PCSB did not use D.C. Public Schools test score analysis to determine if its schools were meeting No Child Left Behind Act standards. However, in April 2005, upon the recommendation of its financial consultants, the PCSB referred one school to the D.C. Inspector General for investigation of certain questionable financial practices. Although the PCSB could refer more cases to D.C. agencies, a PCSB official stated that the PCSB instead tries to resolve all school issues by itself in order to help maintain the organization’s independence as an authorizing board. Although the two authorizers differed in terms of revenue available to them, they both used their financial resources to support oversight activities. Both authorizers used their staff and consultants to perform functions such as reviewing applications, monitoring schools, and overseeing school closures, although the costs associated with in-house staff were not separately tracked by the authorizers. For both authorizers, the majority of their expenses supported salaries and benefits for authorizer personnel involved in these activities, as well as other operational costs, such as conferences and technology. In fiscal year 2004, about three-quarters of the BOE Office of Charter Schools’ expenses and 88 percent of the PCSB’s expenses were used in this way (see fig. 4). However, a smaller percentage of the PCSB’s expenses went to personnel and a larger percentage to other operational costs, such as technology, conferences, and books. The PCSB also had to pay for office space, an expense that the BOE Office of Charter Schools did not incur because its offices were provided by the BOE. 
The BOE Office of Charter Schools used a larger percentage of its total expenses for consultants than the PCSB. Consultant fees represented one-quarter of the BOE Office of Charter Schools’ total expenses and one-eighth of the PCSB’s in fiscal year 2004. Both used consultants primarily to monitor schools, but both also hired consultants to review applications and help with school closings. For example, both the BOE and PCSB hired consultants to conduct site visits and review schools’ academic programs. Expenses for both authorizers in fiscal year 2003 were similar to 2004 expenses, as both spent most of their funds on personnel and other operational costs. See table 3 for detailed expense information for both authorizers. Both D.C. authorizers provided technical assistance to charter schools and oversaw them by tracking the schools’ academic achievement and financial condition. Both authorizers had similar oversight practices, such as visiting each school at least once annually to assess performance and school operations. However, their approaches to oversight differed. The BOE Office of Charter Schools, staffed with only three employees, provided the same level of oversight to all of its 16 schools and in doing so limited its ability to provide additional assistance to those schools needing more help. Moreover, the BOE, which was also responsible for 167 traditional public schools, did not regularly review information collected by its Office of Charter Schools. BOE board members we interviewed acknowledged that problems were sometimes allowed to go unresolved for too long. By contrast, the PCSB targeted additional oversight on new charter schools and those where problems had been identified. Both authorizers provided charter schools technical assistance in several areas. 
They often integrated technical assistance and monitoring to help schools improve academic and financial programs, identify potential facilities, and apply for facility funding. For example, the BOE Office of Charter Schools helped a school improve its financial condition after its 2003 audit raised questions about the school’s financial viability. Specifically, its staff helped the school end a disadvantageous relationship with a school management company and negotiate a lower rent and security deposit for new school facilities. In another case, the BOE Office of Charter Schools obtained a financial operations review from the D.C. Chief Financial Officer in 2003 and a multi-agency school review in 2004 to help a school that was identified as having enrollment and funding problems. The PCSB also integrated technical assistance and monitoring. For example, the PCSB referred a school to several local organizations for help, after the PCSB’s 2005 review concluded that the school needed to improve teacher professional development. PCSB has also established a governance project to develop pools of candidates for schools’ boards of trustees, created a financial policy manual for charter schools, and provided guidance to help schools address transitional issues as the schools increase enrollment and add grade levels. To help schools address academic deficiencies identified through monitoring, both authorizers have helped schools develop their academic accountability plans. The BOE Office of Charter Schools and PCSB also helped schools develop school improvement plans to meet No Child Left Behind Act requirements. For example, BOE Office of Charter Schools officials told us that they worked with one school to help it create an academic improvement plan and get relevant training for school leadership after the school did not make Adequate Yearly Progress in reading and math under the No Child Left Behind Act in 2005. 
They also said that they worked with the seven schools identified as needing improvement as a result of not achieving Adequate Yearly Progress in the 2004-2005 school year by identifying actions each school must take to comply with the law, such as developing school improvement plans outlining corrective strategies and offering services such as tutoring. The BOE Office of Charter Schools plans to use its annual monitoring visits to track these schools’ progress. For the 13 PCSB schools identified in 2004-2005 as needing improvement as a result of not making Adequate Yearly Progress, the PCSB closed one and has made plans to track schools’ improvement efforts through its annual monitoring visits. For example, it has incorporated key questions into its visit protocols in order to measure schools’ progress. The BOE Office of Charter Schools provided oversight to charter schools by using a uniform process to collect academic and financial data from all its schools, as well as other information about schools’ governance structure and compliance with laws. Specifically, it required each school to submit the same information, such as monthly financial statements, annual student test scores, and teacher information. It also visited each school at least twice a year—once before the school year began to check the school’s facilities and operational systems and again during the school year to monitor school performance. At the beginning of the school year, the BOE Office of Charter Schools focused on compliance and governance issues, by checking areas such as school board membership, student record storage, and adequacy of school facilities. In subsequent visits during the school year, the BOE Office of Charter Schools monitored mainly performance information, such as tracking schools’ progress in achieving their own academic goals and reviewing school budgets and annual audits. 
Although the BOE Office of Charter Schools’ monitoring approach has enabled it to compile comprehensive data on every school, this approach has limited its ability to focus attention on those schools most in need of monitoring: new schools and schools considered to be at risk. Because the office has collected the same data from all schools regardless of their years of operation or performance history, the small staff has had to spend considerable time sifting through large amounts of data on all schools, while problems at some schools went unaddressed. For example, by visiting all schools at the beginning of every school year—rather than only visiting new schools or schools in new locations—the BOE Office of Charter Schools has dedicated staff resources that could have been focused on higher risk schools. Moreover, according to a BOE Office of Charter Schools official, in the past collecting and reviewing monthly financial statements from all of its schools had been nearly a full-time responsibility. Furthermore, while the BOE Office of Charter Schools has relied on the schools’ annual financial statement audits for key information related to financial oversight, it has not developed a system to assign priority to schools whose audits documented ongoing problems. For example, one school’s 2002 audit identified weaknesses that made the school vulnerable to embezzlement. Although this particular weakness was corrected, the 2003 audit showed additional evidence of weak internal controls. When we asked BOE Office of Charter Schools officials whether it was possible to focus oversight on schools like this one, they told us that their practice was to review all schools during the annual monitoring visit to determine whether the issues had been resolved and to require a corrective action plan if they had not. 
Additionally, the BOE Office of Charter Schools generally has not reviewed schools’ annual financial statement audits when they were submitted to the office, and instead waited until after the authorizers’ financial monitors completed their reviews of the schools. This approach may not allow the BOE Office of Charter Schools to respond in a timely manner to schools with immediate problems and may have contributed to the number of BOE charter schools that closed for financial reasons. Even after BOE Office of Charter Schools staff identified problems, resolution was sometimes prolonged. For example, two Board members we interviewed said that timely action was not taken with regard to the Village Learning Center that eventually closed in 2004. According to the BOE Office of Charter Schools’ own monitoring reports, this school experienced numerous problems over a period of years, including noncompliance with special education requirements, and failure to pay federal taxes and submit required federal grant performance reports. This school was open for 6 years and was granted three probationary periods totaling 180 days. Of the seven BOE schools whose charters were revoked, four with long-standing problems were allowed to remain open 4 years or more. In two of these cases, the BOE allowed the schools to stay open to give them time to correct their deficiencies. According to two BOE Board members we interviewed, the Board did not regularly review information collected by the BOE Office of Charter Schools and has not always acted upon information it received. They also stated that the BOE has not provided adequate oversight of its charter schools; as one Board member explained, it is easy to think of charter school oversight as a secondary concern to overseeing the public school system as a whole. 
Furthermore, during our review of the BOE Office of Charter Schools, the BOE had not held regular meetings devoted to charter schools and did not have a committee dedicated to charter school oversight. In October 2005, the BOE approved the creation of a charter school committee. Additionally, our review of BOE minutes showed that charter schools were infrequently discussed at BOE’s meetings. According to the BOE Office of Charter Schools Director, BOE Board members devoted some working sessions to charter school oversight issues; however, no minutes were taken of these meetings. The PCSB also monitored schools’ financial condition and academic performance but targeted additional monitoring on schools that needed more oversight. To monitor schools’ financial performance, the PCSB collected and reviewed school budgets, monthly financial statements, and annual financial statement audits. To monitor academics, the PCSB visited schools annually and collected and analyzed test scores and other data to track schools’ outcomes as measured against their own academic goals and D.C. performance standards. Additionally, the PCSB created an annual compliance review process to track compliance with the No Child Left Behind Act, special education requirements, provisions of the D.C. School Reform Act and other laws. When schools have been identified as being out of compliance, the PCSB has used this process to identify specific actions each school must take to comply with the law, such as developing school improvement plans, offering services such as tutoring, and notifying parents. The PCSB has also monitored issues related to the schools’ governance, such as the composition and operation of school boards of trustees, through its annual review of schools’ compliance with applicable laws and regulations. 
To ensure that new schools and schools identified as at risk received sufficient oversight, the PCSB targeted its monitoring so that higher-risk schools received more attention, while lower-risk schools that were operating smoothly received less scrutiny. For example, at the beginning of the school year, the PCSB conducted pre-opening assessments of only new schools and schools opening in new facilities, thereby freeing up staff resources for higher risk schools. The PCSB also provided additional oversight to new schools by conducting a special financial management review of the internal controls of schools in their first year of operation. According to the PCSB, the purpose of this early review was both to assess school compliance and to help the schools address issues early in their implementation. Additionally, the PCSB required schools in their first year to prepare a self-study, providing another opportunity for the school to identify challenges that require attention. The PCSB also applied targeted monitoring to its financial reviews. Schools with a record of sound financial performance were allowed by the PCSB to submit their financial statements on a quarterly—rather than monthly—basis. The PCSB required schools to return to monthly submissions of financial statements when financial concerns emerged. For example, in October 2004, the PCSB required one school to resume monthly reporting after it failed to submit a financial statement audit that was due in November 2003. Additionally, the PCSB has conducted interim financial reviews of schools where financial problems have been detected through regular monitoring. After identifying a potential budgetary shortfall at one school, the PCSB conducted an interim financial review of the school to assess its financial condition. 
According to a PCSB official, this review provided the PCSB with the opportunity to make recommendations to the school to reduce its expenses, which helped restore the school’s financial condition. The PCSB also modified its annual and 5-year review processes to highlight or prioritize schools considered at risk. For example, PCSB’s annual program review, which has focused on academic performance, labeled some trouble areas as “mission-critical problems,” signaling to the school that inadequate improvement in these areas could threaten the school’s viability. In one case, the PCSB highlighted a school leader’s extended absences as a mission-critical problem that needed to be addressed. Following the PCSB’s monitoring report, the school leader resigned, and the school hired a new principal. Furthermore, the PCSB has applied criteria to schools during their 5th year of operation to determine whether each school has met the majority of its academic and nonacademic goals. Any school that has not met both its academic and nonacademic goals can be placed on PCSB’s “Priority Review List.” In May 2004, the PCSB placed one such school on this list—the SouthEast Academy for Academic Excellence—and subsequently revoked its charter. One PCSB official stated that the rationale for using a targeted monitoring approach is to free up more resources for technical assistance. Although some of its schools have experienced problems, the PCSB’s targeted monitoring approach allowed it to identify and provide technical assistance to schools in need of attention, which helped PCSB schools focus on their deficiencies. For example, the PCSB’s monitoring reports highlighted one school’s need for retaining qualified special education staff as a “mission-critical” problem for 3 consecutive years. 
In response, the PCSB targeted additional monitoring on this school and worked with the school to develop plans for hiring and training teachers to mitigate the school’s special education skills shortfall. The PCSB’s targeted monitoring approach also highlighted problems at both of the PCSB’s schools that later closed. For example, PCSB monitoring reports show that the authorizer identified problems, such as noncompliance with special education requirements, noncompliance with its charter, and late financial statement audits, at Associates for Renewal in Education Public Charter School in the school’s second year of operation. The PCSB also identified problems at SouthEast Academy, including the school’s inability to implement its own academic improvement plans. In both cases, the PCSB responded by intensifying monitoring and placing the schools on probation, and in the case of SouthEast Academy, it revoked the school’s charter after the school failed to correct identified deficiencies. When charter schools closed, both authorizers undertook a wide range of activities to ensure that student records and public assets were safeguarded, parents were informed of their children’s school options, and closing schools received the assistance they needed; however, issues arose during closings that both found difficult to address readily. Officials from both authorizers stated that closing charter schools was costly, particularly when the closed schools were financially insolvent, and that managing student records was the most challenging aspect of school closures. The authorizers’ processes for closing all nine schools varied in every instance. Authorizers used their staff and financial resources to oversee school closings as well as handle closing logistics, such as distributing student records, inventorying assets, and communicating with parents. Authorizer staff inventoried school property, returned assets bought with federal dollars to the U.S. 
government, and disbursed remaining assets to other non-profit organizations, including existing charter schools. For example, when the BOE Office of Charter Schools closed one of its schools, D.C. Public Schools staff helped the office inventory and dispose of its property. In one case, when the closing school was financially insolvent, BOE Office of Charter Schools staff referred its creditors to appropriate parties for repayment. When one of the PCSB’s schools closed, the PCSB hired a contractor to inventory the school’s records. Additionally, both authorizers communicated closing procedures to parents and students. BOE Office of Charter Schools officials stated that when charter schools closed, they sent letters to students and spoke with parents about the closure process. When the PCSB closed SouthEast Academy in 2005, GAO staff observed a town hall meeting where the PCSB discussed the closure and students’ options for moving to another school. However, parents at the meeting still expressed confusion about the closure process, and their questions about whether the closed school would reopen under new management could not be definitively answered at that time. Both the BOE and PCSB incurred costs when closing charter schools, particularly for the five schools that were financially insolvent. For example, a PCSB official stated that when the PCSB closed the Associates for Renewal in Education Public Charter School, the PCSB spent over $15,000 from its budget. The authorizers often had to hire additional temporary staff to help with closing logistics. For example, in some instances, the authorizers hired administrators from the closed schools to help transfer records and dispose of inventory, such as textbooks, computers, and desks. 
BOE Office of Charter Schools and PCSB staff stated that hiring these school administrators helped make the closing process more efficient because the administrators were knowledgeable about the schools’ financial and student record keeping systems, as well as the school staff, parents, and students. Four of the nine schools that were closed were financially solvent. In these cases, the authorizers used school resources to help defray the cost of closing. For example, the PCSB used the remaining assets from one closed school to hire a records management company to help collect, transfer, and store student records. Both authorizers also devoted staff resources to school closings. Staff spent time inventorying and disbursing assets, speaking with parents, and dealing with creditors when schools closed. Both authorizers reported that managing and safeguarding student records was the most challenging aspect of closing schools. Authorizers have assumed responsibility for reviewing student records for completeness, collecting records from closing schools, and distributing records to new schools. BOE Office of Charter Schools officials stated that this process can be delayed when student records are missing or incomplete. In some instances, student records may be on teachers’ desks or in classrooms rather than in central files, according to BOE Office of Charter Schools officials. Authorizer staff must then find and collect missing records. Student records can also be missing information or contain incorrect data. For example, BOE Office of Charter Schools officials told us that some student records have not included information about the most recent school quarter, while other records contained grade or class information that the students have stated is not correct. 
BOE Office of Charter Schools officials stated that updating and correcting these records can be difficult, as the administrators or teachers with the pertinent information may no longer be available to provide assistance. PCSB officials also cited problems with managing student records. For example, PCSB officials stated that transferring records to new schools has been complicated when parents and students were unsure which school they would attend in the fall. In instances where students were no longer continuing their education, both authorizers had to find a place to store student records. Although D.C. law requires that student records become the property of D.C. Public Schools when a charter school closes, D.C. Public Schools officials stated that they thought responsibility for charter school records belonged to the authorizers. D.C. Public Schools officials nevertheless told the PCSB in September 2005 that they would be able to transfer these records to the D.C. Public Schools central administrative office once this office was ready to receive them. According to PCSB officials, as of November 2005, D.C. Public Schools officials had not notified them that they were ready to receive these records, and therefore the records remain at the PCSB office. Similarly, BOE Office of Charter Schools staff told us that they also kept the records of students who did not continue their education in their office. Authorizer staff expressed concerns that parents and students could have difficulty locating these records, which they would need if they wished to continue their education or join the military. For example, students—who may have only had contact with school administrators, rather than authorizer staff—might not know which authorizer was responsible for their charter school or how to contact the authorizer when they need to obtain records from a closed charter school. 
In all nine instances where schools were closed, neither authorizer followed a consistent closure process, and each dealt with issues as they arose on a case-by-case basis. For example, the PCSB recently had to address a new closure problem when, for the first time, a charter school was closed that owned rather than leased its facility. PCSB officials expressed concerns that the facility might not remain available to D.C. charter schools, a particular concern as new charter schools have often had difficulty finding adequate space and existing ones have had difficulty acquiring space to expand. In this case, the PCSB established guidance outlining a process by which a closed school could receive petitions from existing charter schools to use its facility, and it subsequently approved the arrangement reached between schools. BOE Office of Charter Schools and PCSB officials stated that they varied their school closing process based partly on the size and type of school and its financial condition at the time of closing, but such a varied process and the reasons for it may not be evident to parents as they work out what they need to do to transition their children to other schools. For example, when the BOE has closed schools, its Office of Charter Schools notified parents of the closure through letters and phone calls, while the PCSB also held a town hall meeting when it revoked a charter. The handling of student records has also varied across school closings and could be confusing for parents. When the PCSB closed Associates for Renewal in Education Public Charter School, the PCSB hand delivered student records or sent them by certified mail to the new schools. In a few instances, students also collected their records from the PCSB office. However, when SouthEast Academy had its charter revoked, the PCSB hired a records management company to collect and transfer all records using the funds from the closed school. 
PCSB staff and board members stated that they hope to use the SouthEast Academy closure procedures as a model for future school closings. While both authorizers have gained considerable experience with what is required when schools close, neither has put in place a plan to guide school closings and make them more efficient and clearer to parents. With over a fifth of its students in charter schools, the District of Columbia has made a significant investment in charter schools. To protect this investment, the authorizers have a responsibility to provide timely oversight that ensures that students’ interests are served. However, the two authorizers conducted their monitoring differently, with the PCSB targeting its resources and the BOE Office of Charter Schools generally providing the same level of oversight to all its schools regardless of risk. While both approaches comply with the law’s requirements for authorizers, the BOE could focus its resources more effectively to oversee charter schools, particularly given its limited staff. Without such targeting, this authorizer may not be well positioned to ensure that its schools receive the assistance they need when they are most at risk. Additionally, the Board of Education did not have a routine structure or process to ensure that its members regularly reviewed monitoring data for the charter schools under its purview. As a result, the Board has not always reviewed the information before it in time to react effectively. Although the Board has begun to address this issue by taking steps to create a charter school committee, it is not yet clear that this action will be sufficient to ensure that charter schools receive appropriate attention. Finally, when charter schools close, it is critical that a transparent process exist to ensure that schools, parents, and students understand their options. 
While each authorizer undertook a wide range of activities when schools closed, no process existed to guide the authorizers, schools, and students through the closing. Lacking such a process, school administrators and parents may not understand what is required and expected of them to ensure a smooth educational transition for students. The absence of such a process may also result in student records being misplaced or difficult for students to locate in the future, particularly if they do not know which entity authorized their school. Moreover, closing charter schools without a systematic approach may not allow the authorizers to build on previous experience or learn from each other. As a result, an opportunity may be lost to develop a uniform, transparent, and efficient process that protects the interests of all parties. To ensure that D.C. charter schools authorized by the BOE receive appropriate oversight, we recommend that the BOE Office of Charter Schools implement a risk-based oversight system that targets additional monitoring resources to new charter schools and those identified as at risk. Additionally, we recommend that the BOE create a routine and timely process to review the monitoring information, including audit reports, collected by its Office of Charter Schools. To help alleviate confusion among parents, students, and school administrators following the closure of a charter school and to help the D.C. authorizers close schools efficiently, we recommend that the BOE Office of Charter Schools, the PCSB, and D.C. Public Schools establish a routine process when schools close, including, among other things, a system for the secure transfer and maintenance of student records. We provided a draft of this report to the BOE Office of Charter Schools. In her response, the Executive Director of the BOE Office of Charter Schools noted that the BOE was taking actions that would address the recommendations in this report.
For example, in response to the first recommendation, the Executive Director stated that the hiring of three new staff members will help focus greater oversight on schools in need. Similarly, in response to the second recommendation, the Executive Director stated that the BOE’s newly established committee on charter schools has begun reviewing monitoring data. BOE Board members also provided technical clarifications, which we incorporated as appropriate in this report. In response to Board member comments, we did not change enrollment or school data, because the audited enrollment count and D.C. Public Schools confirmed our initial information was accurate. We also provided a draft of this report to the PCSB. The comments of the PCSB Executive Director supported our recommendation that the BOE Office of Charter Schools, the PCSB and D.C. Public Schools establish a routine process for the secure transfer and maintenance of records when schools close. PCSB officials also provided technical clarifications, which we incorporated as appropriate in this report. We are sending copies of this report to relevant District of Columbia officials, relevant congressional committees, the Secretary of the Department of Education, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix IV. As required by the D.C. Appropriations Act of 2005, we conducted a review of D.C.’s two charter school authorizers, the Board of Education (BOE) and the Public Charter School Board (PCSB). In conducting our analysis, we reviewed the D.C. 
School Reform Act, as amended, and other applicable federal and District laws and regulations to determine the authorizers' legal responsibilities. To determine how the authorizers used their resources (financial and otherwise), we analyzed the authorizers' budgets and expenses for fiscal years 2003 and 2004. We also examined the authorizers' use of staff resources and D.C. government services. Specifically, we identified the types of services that D.C. agencies made available to the authorizers and determined whether and how the authorizers used these services. To analyze the authorizers' provision of oversight, we examined monitoring reports, audits, and related documentation from 8 of the 42 D.C. charter schools operating in school year 2004-2005. We selected these schools using nonprobability procedures. In nonprobability sampling, staff select a sample based on their knowledge of the population's characteristics. We selected schools to ensure that our report was able to address a variety of the issues that the oversight boards faced in their monitoring efforts. Results from this nonprobability sample cannot be generalized to the entire population of schools. We also convened two focus groups of charter school administrators (one focus group per authorizer) to substantiate and augment information provided by the authorizers. Finally, we examined the actions the authorizers took to address issues arising from the closure of the nine charter schools that have lost their charters to date. (See below for more information about our budget, monitoring, and closure document analysis and use of focus groups.) We interviewed authorizer staff and board members and officials of District agencies, including D.C. Public Schools, the Office of the Chief Financial Officer, the Office of the D.C. Auditor, and the D.C. Office of the Inspector General. We also interviewed representatives from the D.C. Public Charter School Association and Friends of Choice in Urban Schools, a D.C. 
charter school advocacy group. We conducted our work between January and November 2005 in accordance with generally accepted government auditing standards. To analyze the authorizers' resources and to learn how they have used them, we examined the PCSB's and the BOE Office of Charter Schools' income and expense statements for fiscal years 2003 and 2004. We analyzed the income statements to determine the proportion of income each board derived from various sources. Additionally, through interviews with authorizer staff, we identified income that was carried over from the previous fiscal year, which was not specifically labeled as such. We analyzed expenses and categorized them into similar groupings for comparability purposes. We compared these data with projected budgets for corresponding fiscal years to identify differences. Finally, we reviewed the PCSB's financial statement audits for these fiscal years. While the D.C. School Reform Act explicitly requires the PCSB to obtain an annual financial statement audit, the Act contains no such requirement for the BOE Office of Charter Schools. As a result, we used unaudited financial information from the BOE Office of Charter Schools. To obtain information about the processes both authorizers used to monitor charter schools after they had opened, we examined monitoring documentation for eight charter schools—four from each authorizer. These eight schools were selected for variation in the date the schools opened, grades served, and the schools' history of probation or sanctions. Additionally, in selecting the eight schools, we considered whether the charter school had gone through the 5-year review process, targeted a special needs population, and had achieved Adequate Yearly Progress in math or reading as required by the No Child Left Behind Act, and chose schools for variation in these areas. While our nonprobability selection of 8 of the 42 D.C. 
charter schools does not allow us to generalize results to all 42 charter schools, our sampling procedures helped ensure that we were able to address the full range of issues that the oversight boards faced in their monitoring efforts. Once the schools were selected, we requested that the authorizers provide us with all monitoring documents, including documentation of pre-opening visits, annual monitoring site visits, annual audited financial statements, and documentation of any sanctions placed on the school. We examined these documents to learn how the BOE and PCSB monitored academics, finances, school governance, and compliance with applicable laws and regulations. For each of these areas, we examined the types of deficiencies the authorizers identified at the schools and how the authorizers ensured deficiencies were corrected. We analyzed this information on a year-by-year basis to identify trends in how the BOE and PCSB monitored schools and addressed problems. We also reviewed Board minutes from monthly meetings held between January 2003 and April 2005 to learn how the BOE and PCSB addressed charter school problems at board meetings. We also used this information to learn more about specific issues at D.C.'s charter schools. In addition, we used focus groups to obtain the opinions and insights of D.C. charter school principals and other school officials regarding PCSB and BOE oversight efforts. Focus groups are a form of qualitative research in which a specially trained leader, the moderator, meets with a small group of people who have similar characteristics and are knowledgeable about the specific issue. The results from the discussion groups are descriptive, showing the range of opinions and ideas among participants. 
However, the results cannot serve as a basis for statistical inference because discussion groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus for an agreed-upon plan of action, or (3) provide statistically representative samples with reliable quantitative estimates. Nevertheless, the opinions of many group participants showed a great deal of consensus, and the recurring themes provided a degree of validation. We conducted two focus groups—one with school leaders from schools overseen by the PCSB and one with school leaders from schools overseen by the BOE. We invited all eight of the schools whose monitoring documents we had assessed to attend their respective focus groups. In addition, we invited representatives from a random selection of the remaining charter schools (those whose monitoring documents were not assessed) to gather information from additional schools. Attendance on the part of invited participants was voluntary. We had three participants from three different schools at each of our two focus groups, for a total of six schools participating. A trained focus group moderator led the discussions. We developed a discussion group guide to assist the moderator in leading the discussions. A transcription service recorded and then transcribed the conversations. To determine the actions the authorizers have taken when D.C. charter schools closed, we examined documentation from the seven charter schools closed by the BOE and the two closed PCSB charter schools—one voluntarily and one through charter revocation. In each instance, we reviewed the monitoring documentation for each of the closed schools to determine how the authorizers had identified and reacted to problems at the schools. We reviewed Board minutes to determine if and when sanctions were placed on the schools and how the schools responded to these disciplinary actions. 
We reviewed revocation documentation, including school and authorizer correspondence, school appeals, and minutes from Board meetings when revocations were discussed. Additionally, we analyzed the authorizers’ budget documents to determine how the authorizers used their financial resources to close schools. We reviewed D.C. Public School policies for closing schools and compared these policies with the authorizers’ school closure procedures. We attended a town hall meeting organized by the PCSB for parents of a charter school that was being closed to observe how the authorizer’s staff and school administrators communicated closure information to students and parents. Finally, we interviewed one school administrator from a school that had its charter revoked to learn about the closure process from the school’s perspective. Sherri Doughty, Assistant Director, and Tamara Fucile, Analyst in Charge, managed this assignment and made significant contributions to all aspects of this report. Christopher Morehouse also made significant contributions, and Carlos Hazera, Walter Vance, and Shannon VanCleave aided in this assignment. In addition, Richard Burkard and Shelia McCoy assisted in the legal analysis, Julie Phillips assisted with the financial analysis, and Tovah Rom and Rachael Chamberlin assisted in the message and report development. Charter Schools: Oversight Practices in the District of Columbia. GAO-05-490. Washington, D.C.: May 19, 2005. Charter Schools: To Enhance Education’s Monitoring and Research, More Charter School-Level Data Are Needed. GAO-05-5. Washington, D.C.: January 12, 2005. No Child Left Behind Act: Education Needs to Provide Additional Technical Assistance and Conduct Implementation Studies for School Choice Provision. GAO-05-7. Washington, D.C.: December 10, 2004. District of Columbia’s Performance Accountability Report. GAO-04-940R. Washington, D.C.: July 7, 2004. 
Charter Schools: New Charter Schools across the Country and in the District of Columbia Face Similar Start-Up Challenges. GAO-03-899. Washington, D.C.: September 3, 2003. Public Schools: Insufficient Research to Determine Effectiveness of Selected Private Education Companies. GAO-03-11. Washington, D.C.: October 29, 2002. DCPS: Attorneys’ Fees for Access to Special Education Opportunities. GAO-02-559R. Washington, D.C.: May 22, 2002. District of Columbia: Performance Report Reflects Progress and Opportunities for Improvement. GAO-02-588. Washington, D.C.: April 15, 2002. Title I Funding: Poor Children Benefit though Funding per Poor Child Differs. GAO-02-242. Washington, D.C.: January 29, 2002. District of Columbia: Comments on Fiscal Year 2000 Performance Report. GAO-01-804. Washington, D.C.: June 8, 2001. Charter Schools: Limited Access to Facility Financing. GAO/HEHS-00-163. Washington, D.C.: September 12, 2000. Charter Schools: Federal Funding Available but Barriers Exist. GAO/HEHS-98-84. Washington, D.C.: April 30, 1998. Charter Schools: Issues Affecting Access to Federal Funds. GAO/T-HEHS-97-216. Washington, D.C.: September 16, 1997.
D.C. has a larger percentage of students in charter schools than any state. To help oversee D.C. charter schools, Congress established two authorizers--the Board of Education (BOE), which has an Office of Charter Schools responsible for oversight, and the independent Public Charter School Board (PCSB). Congress required GAO to conduct a study of the authorizers. This report--which completes GAO's May 2005 study--examines the (1) authorizers' resources, (2) oversight practices, and (3) actions taken once charter schools close. GAO examined BOE and PCSB monitoring reports, revenue and expenditure documents, and closure procedures. The two D.C. charter school authorizers differed in revenue, number of staff overseeing schools, and use of D.C. services, but both spent their funds to support oversight activities. The BOE Office of Charter Schools had less revenue and fewer staff overseeing fewer schools than the PCSB. It fulfilled its oversight responsibilities by using some D.C. Public Schools services and also occasionally calling upon D.C. agencies for financial operations reviews. The PCSB had a larger staff that oversaw more schools and had more than twice the revenue of the BOE Office of Charter Schools. The PCSB did not use any D.C. Public Schools services, but did refer one school to a D.C. agency for further examination. Despite these differences, both authorizers incurred most of their fiscal year 2004 expenses for in-house board operations, such as personnel, and also hired consultants to help monitor charter schools. Both D.C. authorizers provided technical assistance to schools and had similar oversight practices, such as tracking school academics and finances, but took different approaches. The BOE Office of Charter Schools, with only 3 staff, provided the same level of oversight to all of its 16 schools and thereby limited its ability to target additional resources to schools requiring more assistance. 
Moreover, when the BOE Office of Charter Schools gave its Board monitoring information on its charter schools, the Board--also responsible for the city's 167 traditional schools--did not regularly review that information. In contrast, the PCSB targeted additional oversight at new charter schools and those where problems had been identified. The PCSB also granted more flexibility to well-managed schools. Although problems persisted at some schools, the PCSB's targeted system enabled it to focus more attention on these schools. Once D.C. charter schools closed, both authorizers took a number of actions to safeguard student records and public assets and inform parents of their children's educational options; however, issues arose that both authorizers found difficult to adequately address, particularly when the closed school was insolvent. Managing and safeguarding student records was the most expensive and challenging aspect of closing schools, authorizers reported. Moreover, the authorizers' closure processes differed each of the 9 times charter schools closed, which limited opportunities to build on past experiences.
NQF is a nonprofit organization established in 1999 to foster agreement, or consensus, on national standards for measuring and publicly reporting health care performance data. Its membership includes more than 400 organizations that represent multiple sectors of the health care system, including providers, consumers, and researchers. NQF's mission focuses on three core areas: (1) building consensus on national priorities and goals for performance improvement and working in partnership to achieve them, (2) endorsing national consensus standards for measuring and publicly reporting on performance, and (3) promoting the attainment of national goals through education and outreach programs. Prior to its contract with HHS, NQF established a consensus development process (CDP) to evaluate available health care quality measures to determine which ones are qualified to be endorsed—that is, recognized—as national standards. Under this process, organizations that develop quality measures submit them to NQF for consideration, in response to specific solicitations by NQF. NQF forms a committee of experts from its member organizations as well as other organizations and agencies to conduct an objective and transparent review of these quality measures against four standardized criteria established by NQF, such as whether the measures are scientifically acceptable. After this committee evaluates the measures against these criteria, NQF's process allows for a period during which its member organizations and the public may comment on the committee's recommendation for each measure. The process also provides for a period for its member organizations to vote on whether the measures should be endorsed by NQF as a national standard. Ultimately, NQF's board of directors makes a final decision on whether NQF should formally endorse the measures. As of October 2011, NQF has endorsed over 600 health care quality measures in 27 areas, such as cancer and diabetes. 
HHS uses NQF-endorsed measures in its programs and initiatives to promote quality measurement, and NQF continues to endorse quality measures separate from its contract with HHS. NQF's work under the contract includes endorsement of quality measures and other activities that are expected to support HHS's quality measurement efforts, such as through value-based purchasing programs. Specifically, NQF's work under the contract consists of various projects under the nine contract activities related to health care quality measurement. The work plans developed annually to respond to MIPPA and NQF's technical proposal to respond to PPACA delineate the projects NQF is required to conduct under the nine contract activities, as well as expected time frames and cost estimates for the projects for each year. Table 1 provides more detailed information on the nine contract activities. Some of these activities are required by either MIPPA or PPACA, while others are quality measurement activities established by HHS or administrative activities. To help determine the activities and the projects under the nine contract activities that NQF is expected to perform during each contract year, HHS has established an interagency workgroup that comprises officials from multiple divisions within HHS, including the Agency for Healthcare Research and Quality, the Centers for Medicare & Medicaid Services (CMS), and the Office of the National Coordinator for Health Information Technology. The workgroup is responsible for prioritizing and selecting the activities and projects under each activity that NQF is expected to perform during each contract year. HHS officials told us that the representatives from these various HHS agencies provide input on the work NQF is expected to perform, including determining quality measures requested from NQF for their respective programs. The activities and projects selected by the interagency workgroup become part of NQF's scope of work under the contract. 
Some of the projects under the contract activities that NQF is expected to perform during the year will be ongoing from the previous contract year while new work will be incorporated into the work plan as necessary. For the NQF contract, HHS selected a cost-plus-fixed-fee contract, under which HHS reimburses NQF for actual costs incurred under the contract in addition to a fixed fee that is unrelated to costs. Cost-plus-fixed-fee contracts are used for efforts such as research, design, or study efforts where costs and technical uncertainties exist and it is desirable to retain as much flexibility as possible in order to accommodate change. However, this type of contract provides only a minimum incentive to the contractor to control costs. As we reported in 2009, these contracts are suitable when the cost of work to be done is difficult to estimate and the level of effort required is unknown. This cost-plus-fixed-fee contract is NQF’s first cost-reimbursement contract. For cost-reimbursement contracts, the Federal Acquisition Regulation (FAR) requires appropriate government surveillance during performance to provide reasonable assurance that efficient methods and effective cost controls are used. Under the FAR, contracts are to contain a provision for agency approval of a contractor’s subcontracts. HHS’s contract with NQF contains this provision and also requires the approval of consultants engaged under the contract. The review and approval of NQF’s use of subcontractors and consultants require appropriate support documentation provided by NQF to HHS, including a description of the services, the proposed price, and a negotiation memo that reflects the principal elements of the price negotiations between NQF and the subcontractor or consultant. Under its contract with HHS, NQF has utilized 31 subcontractors and 16 consultants since January 14, 2010, to provide support to NQF on many of the contract activities and associated projects. 
Two HHS components are principally responsible for administering the NQF contract: the office of the Assistant Secretary for Planning and Evaluation (ASPE) and CMS—an agency within HHS. Specifically, the project officer for the NQF contract is a representative of ASPE. This individual is responsible for program management and works with the contracting officer to oversee the contract. The contracting officer for the NQF contract, responsible for administering the contract, is a representative of CMS. The contracting officer is required to perform certain tasks, such as conducting an annual evaluation of the contractor's performance. From January 14, 2010, through August 31, 2011, NQF made progress on projects under its contract activities. However, our review of NQF documents found that NQF had not met or did not expect to meet time frames on more than half of the projects, and it exceeded its cost estimates for projects under three of the contract activities. HHS did not use all tools for monitoring that are required under the contract. From January 14, 2010, through August 31, 2011, NQF made progress on 60 of the 63 projects under the activities required under its contract with HHS. Specifically, NQF had completed 26 projects and was continuing to work on the remaining 34 projects. (App. III provides the status of all contract activities and the projects under each activity NQF was expected to perform during our reporting period.) Examples of projects under the contract activities include both completed and continuing projects: Endorsement of Measures Activity. NQF has endorsed 101 measures since the beginning of the second contract year by conducting work on endorsement projects on different topic areas. 
Specifically, NQF completed two projects to endorse 38 outcome measures related to 20 high-priority conditions identified by CMS that account for the majority of Medicare's costs, and mental health and child health conditions; and 21 performance measures for chronic and postacute care nursing facilities. NQF also worked on two projects related to child health quality and patient safety. As of August 2011, NQF had endorsed 41 child health quality measures and 1 patient safety measure under these projects. NQF expected to complete the child health quality project in September 2011 and the patient safety project in December 2011. In addition, NQF completed a contractually required review of its endorsement process, subcontracting with Mathematica Policy Research, Inc. (Mathematica). The review focused on the timeliness and effectiveness of the endorsement process; identified inefficiencies, including those that may contribute to delays; and recommended, among other steps, that NQF create a schedule for its endorsement process for measure developers and develop feasible timelines that include clear goals for each endorsement project. HHS officials stated that Mathematica's recommendations were valuable because much of the work under the NQF contract needs to be completed on an accelerated timeline to help fill critical measurement gaps associated with HHS's health care quality programs and initiatives. For more information about this review, see appendix IV. Maintenance of Endorsed Quality Measures Activity. NQF maintained—that is, updated or retired—124 measures under the contract since the beginning of the second contract year. These included 41 measures reviewed under NQF's 3-year review cycle related to diabetes, mental health, and musculoskeletal conditions. In addition, 83 measures were maintained under NQF's other maintenance review processes. 
NQF was also continuing to work on maintenance projects it initiated in 2010 for cardiovascular and surgery measures. As of August 2011, the two projects were expected to be completed by December 2011 and January 2012, respectively. Promotion of the Development and Use of Electronic Health Records Activity. NQF has made progress on three projects related to retooling—that is, converting previously endorsed quality measures to an electronic format that is compatible with electronic health records (EHR). First, NQF completed initial retooling of 113 measures. This work is intended to allow data from EHRs to be used for quality measurement, which is part of HHS's long-term goal to use health information technology to exchange information and improve quality of care. Second, as of August 2011, NQF had convened an expert review panel to review the retooled measures to ensure that each retooled measure is properly formatted, the logic is correctly stated, and the intent of the measures is maintained in the electronic format that will use data obtained from EHRs, instead of from claims as originally formatted. Third, as of August 2011, NQF was expected to complete another project to provide an updated list of the 113 retooled measures to HHS by December 2011, which would incorporate any revisions identified by the expert review panel and others involved in the retooling process. After these updated measures are completed, HHS officials told us that they will contract with other entities to conduct testing of some of the 113 retooled measures to assess the feasibility of implementing the measures in the electronic format. Although NQF's endorsement process requires that measure developers submit data on validity and reliability testing of measures they submit for endorsement, this testing does not include feasibility testing for implementing the measures in an electronic format for performance measurement. 
As of December 2011, HHS officials had not provided an expected date of completion for this feasibility testing but told us that they have awarded two contracts that include this testing in their scope of work. In addition to the retooling projects, NQF is developing a software tool—the Measure Authoring Tool—to allow measure developers to create standardized electronic measures that help capture information in EHRs so that less retooling would be needed in the future. As of August 2011, NQF was completing final testing of the beta, or initial, version of this tool. NQF expected to complete testing and publish an updated version for public use by January 2012. Multistakeholder Input into HHS's National Quality Strategy Activity. NQF convened the National Priorities Partnership (NPP), a multistakeholder group expected to provide annual input on national priorities, among other things, to be considered in the National Quality Strategy. As of August 2011, the NPP was completing a report on this input, which was then published in September 2011. The report noted the need for a national comprehensive strategy that identifies core sets of standardized measures to meet each of the national priorities HHS identified in the 2011 National Quality Strategy, among other things. The NPP noted in the report that a common data platform, core measure set, and public reporting mechanism are key components of the infrastructure for performance measurement. It also highlighted that a strategic plan, road map, and timeline for establishing an infrastructure should be accelerated to allow for rapid implementation over the next 5 years. Additionally, the NPP reported that it was critical that all federal programs drive toward the establishment of a common platform for measurement and reporting. Multistakeholder Input on the Selection of Quality Measures Activity. NQF has convened the Measure Applications Partnership (MAP). 
The MAP is a multistakeholder group that is expected to conduct work in two areas. First, the MAP is expected to provide input to the Secretary of HHS on the selection of quality measures for use in payment programs and value-based purchasing programs required by PPACA, among others. The MAP will review a list of measures published by the Secretary of HHS on December 1 of each year and develop a report that contains a framework to help guide measure selection. The MAP will provide its annual input beginning February 1, 2012, for measures used in the following 11 programs: hospice, hospital inpatient, hospital outpatient, physician offices, cancer hospitals, end-stage renal disease (ESRD) facilities, inpatient rehabilitation facilities, long-term care hospitals, hospital value-based purchasing, psychiatric hospitals, and home health care. Second, the MAP is expected to publish reports that provide input on the selection of measures for use in various quality reporting programs, including those for physicians. As of August 2011, the MAP had held meetings and initiated its work for reports due October 1, 2011. Other Health Care Quality Measurement Activity. NQF completed a project to endorse six imaging efficiency measures. NQF was also continuing to work on a project to identify existing quality measures and gap areas related to measurement of regionalized emergency care services. Our review of NQF documents found that NQF had not met or did not expect to meet time frames on more than half of the projects under the contract activities that were completed or ongoing, as of August 2011. Specifically, our review of documents found that NQF had not met expected time frames on 18 of the 26 projects it completed under the nine contract activities. Further, NQF did not expect to meet time frames on 14 of the 34 projects on which it was continuing to work. The delays on these projects ranged from about 1 to 12 months. 
HHS officials told us they approved all changes to the time frames, which were established by HHS and NQF in NQF’s 2010 and 2011 final annual MIPPA work plans and the PPACA technical proposal. Appendix III provides the status for all projects related to each of the nine contract activities, including information on their expected and actual time frames for completion during our reporting period. Examples of projects under the contract activities for which NQF did not meet or did not expect to meet expected time frames include the following: Endorsement of Measures Activity. NQF did not meet or did not expect to meet time frames for all five endorsement projects under the endorsement contract activity. (See app. III for details on the five projects.) For example, NQF was expected to complete an endorsement project for nursing home quality measures in July 2010; however, the measures were not endorsed until February 2011. (See fig. 1 for estimated time frames and actual completion dates for all projects related to the endorsement contract activity.) NQF officials stated that several factors contributed to NQF exceeding the expected time frames for the five endorsement projects, including the high volume of measures submitted for review, the amount of time it took to harmonize measures between measure developers, and a need for additional technical expertise on review panels. Promotion of the Development and Use of Electronic Health Records Activity. NQF did not meet or did not expect to meet expected time frames for five out of eight projects related to the EHR contract activity. For example, NQF was expected to complete its initial retooling of 113 endorsed quality measures into electronic formats by September 2010, but this effort was not completed until December 2010. (See fig. 2 for estimated time frames and actual completion dates for all projects related to the EHR contract activity.) 
In addition, NQF was expected to complete the project to convene an expert panel to review the 113 retooled measures by January 2011. However, the panel did not complete its review of the 113 measures until June 2011. According to HHS and NQF officials, several factors contributed to NQF exceeding expected time frames for the retooling project under the EHR contract activity. HHS officials stated that the first set of 44 retooled measures submitted had errors that required correction. For example, HHS officials stated that they found errors in the electronic coding of these 44 retooled measures, requiring NQF and the subcontractors who retooled the measures to make corrections. In addition, HHS and NQF officials stated that after starting the retooling project, they quickly learned that the estimated time frames for the retooling project, as well as other projects related to the EHR activity, were overly ambitious, given the scope and complexity of the work. For example, HHS officials noted that retooling of quality measures into electronic format had never been attempted before and the technical complexity and labor required to complete the project were greater than anticipated. NQF officials also told us that HHS's requests to modify the scope of work for this project often required changing the time frame for completing the retooled measures. These factors resulted in an extension of the project, which delayed the final delivery of the 113 retooled measures and contributed to the need for additional staff at NQF. Other Health Care Quality Measurement Activity. NQF was expected to complete two projects under the other health care quality measurement activity related to efficiency and resource use—one white paper on resource use and another on geographic-level efficiency—by July 2010. These white papers were intended to provide information for an endorsement project on resource-use measures that began in January 2011.
However, as of August 2011, the resource-use paper was still under review by HHS, and NQF officials stated they expected to receive comments in September 2011. The geographic-level efficiency paper was canceled in June 2010 at the request of HHS. NQF initially intended to subcontract the work on these two projects, but officials told us that they were unable to identify a subcontractor at the level of funding approved for these projects. As a result, HHS approved NQF's proposal to complete this work internally. HHS officials stated that the drafts NQF submitted on both topics were of poor quality and did not meet its needs, resulting in HHS requesting additional revisions to the resource-use white paper, which delayed its completion, and requesting cancellation of the geographic-level efficiency white paper. Administrative Activity. NQF did not meet the expected time frames for completing one of the required projects under the administrative activity—finalizing its annual work plan. Specifically, the NQF contract requires NQF to develop an annual work plan and to receive final approval from HHS within the first 4 weeks of each contract year; however, NQF did not meet this requirement in 2010 or 2011. For example, the final 2011 MIPPA annual work plan was not developed by NQF and approved by HHS until April 1, 2011. According to NQF and HHS officials, the 2011 MIPPA work plan was not developed and approved on time due to extended discussions on the scope and cost estimates of NQF's EHR activities. HHS officials told us that the primary reason for the extended discussions was that they expected the costs to reflect all the work needed to complete the Measure Authoring Tool (MAT) by the end of the second contract year. However, they said that NQF only submitted a beta version of the tool by the end of the second contract year, which was not the version expected by HHS.
NQF officials told us that the version was never intended to be final but rather a beta version, consistent with their understanding of HHS's expectations. As a result, HHS and NQF officials needed to evaluate the scope of work and cost estimates for this and other projects. Further, NQF officials told us the delay in completing the 2011 MIPPA annual work plan resulted in the interruption of NQF's ongoing work related to the MAT under the EHR contract activity. The delay also postponed NQF's receipt of funding for some new or ongoing work under the contract. In some instances, NQF chose to start new or continue ongoing work with its own funding. For example, NQF officials stated that NQF began work related to the MAP using its own funds until HHS authorized the work. In addition, the delay in completing the 2011 MIPPA work plan meant that some of the projects under the maintenance activity had to be scheduled to start in fall 2011 rather than earlier in the contract year. NQF also exceeded its cost estimates for projects under three of the contract activities. HHS officials told us they approved the changes to the cost estimates and in some cases modified NQF's scope of work to help ensure that NQF's costs did not exceed the amount HHS had obligated for the contract activities. NQF officials stated that in certain cases, not meeting expected time frames contributed to NQF exceeding these cost estimates. For example, the delays in projects related to the EHR contract activity, including expanding the scope of the retooling project, contributed to NQF exceeding its cost estimate of about $3.8 million for the entire EHR contract activity by about $560,000 in the second contract year. In another example, the delays in finalizing the 2010 and 2011 MIPPA work plans contributed in part to NQF exceeding its cost estimate for developing and finalizing these plans, which is a project under the administrative contract activity.
Specifically, while NQF estimated that completion of the annual work plan would cost approximately $77,000, NQF reported an actual cost of $176,590. In addition, NQF exceeded its cost estimate for the endorsement contract activity during the second contract year for various reasons, including a need for additional technical experts for review panels. Specifically, NQF exceeded estimated costs of about $3.1 million for the entire endorsement activity by about $146,000 in the second contract year. While HHS officials told us they approved all changes to the cost estimates, in certain cases they reduced the scope of NQF's work in 2011 to ensure that total available funding for the contract year was not exceeded and that sufficient funding was available for ongoing projects. For example, HHS officials told us that they had hoped to start several new endorsement projects beginning in 2011; however, these projects were not included in the 2011 final annual MIPPA work plan so that funding would be available for NQF to complete its ongoing projects, including work that was delayed under the EHR contract activity. In addition, HHS requested that NQF discontinue its work on one project related to the development of a public website for 2011, which is associated with the administrative contract activity. HHS officials told us that to help monitor NQF's performance on the projects under the contract activities, they rely on NQF to report any issues, including those related to time frames or cost estimates, in the monthly progress reports that NQF is required to submit to HHS or during phone calls held at least monthly. While HHS monitored NQF's progress and approved changes to the time frames and cost estimates for the projects under the contract activities, HHS did not use available tools for monitoring that are required under NQF's contract. These tools could have helped to provide an opportunity for HHS to make any appropriate changes to NQF's projects.
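For readers tracking the figures, the overruns reported above can be restated relative to their initial estimates. The sketch below derives those ratios from the published dollar amounts; the percentages themselves are our illustration, not figures from the report.

```python
# Restating the reported cost overruns relative to their initial estimates.
# Dollar figures are as published; the derived percentages are illustrative.

overruns = {
    # activity: (initial estimate, amount over estimate), in dollars
    "EHR activity (second contract year)": (3_800_000, 560_000),
    "Endorsement activity (second contract year)": (3_100_000, 146_000),
}

for activity, (estimate, over) in overruns.items():
    pct_over = 100 * over / estimate
    print(f"{activity}: about {pct_over:.0f}% over the estimate")

# The annual work plan project ($77,000 estimated vs. $176,590 actual)
# came in at roughly 2.3 times its estimate.
ratio = 176_590 / 77_000
print(f"Annual work plan: {ratio:.1f}x the estimated cost")
```

The overruns are modest in percentage terms for the two large activities (roughly 15 percent and 5 percent) but large for the small administrative project, which more than doubled its estimate.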
For example, HHS did not conduct an annual performance evaluation required by the contract that would assess timeliness and cost control issues, among other things, for the previous contract year. The results of such an evaluation could help HHS officials to consider potential timeliness and cost issues when determining NQF's scope of work for the next year. Further, while monthly progress reports and invoices include information on NQF's costs, these documents do not compare reported costs to initial cost estimates. HHS officials told us that, prior to August 2011, they had not enforced a contractual requirement for NQF to submit—nor had they received from NQF—a financial graph in its monthly progress reports that provides information comparing NQF's monthly incurred costs for each of the contract activities with initial cost estimates. Instead, HHS officials informally requested that NQF provide them with the financial status of the contract activities in midyear 2010, which helped them to plan for NQF's work under the contract for 2011. Having a financial graph in the monthly progress report could have helped HHS officials to identify instances where any contract activity was approaching or exceeding NQF's initial cost estimates prior to HHS's midyear review. This, in turn, could have provided HHS and NQF an opportunity to adjust estimates of future costs for these or related activities earlier in the contract year. HHS officials asked NQF to begin including such a financial graph in its monthly progress reports in August 2011. From January 14, 2010, through August 31, 2011, NQF reported a total of approximately $22.4 million in costs and fixed fees on monthly invoices submitted to HHS for projects under activities conducted in response to MIPPA and PPACA. Specifically, NQF reported about $12.8 million in total costs and fixed fees for the contract activities it performed during the second contract year—January 14, 2010, through January 13, 2011.
From January 14, 2011, through August 31, 2011, part of the third contract year, NQF reported an additional $9.6 million in total costs and fixed fees. During the second contract year, the majority of NQF's reported costs were related to the promotion of the development and use of EHRs (36 percent, or $4.6 million) and endorsement of health care quality measures (26 percent, or $3.3 million). Figure 3 illustrates the costs and fixed fees NQF reported for eight of the nine contract activities we reviewed that occurred during the second contract year. The ninth contract activity relates to multistakeholder input on the selection of quality and efficiency measures, as directed by PPACA. This contract activity did not begin until after January 14, 2011, which is the start of the third contract year. For the part of the third contract year covered in our review—January 14, 2011, through August 31, 2011—almost one-half of NQF's reported costs were for the activity to promote the development and use of EHRs and for the activity to provide multistakeholder input on the selection of quality and efficiency measures. Each of these activities accounted for 22 percent, or about $2.1 million, of NQF's reported costs. Other costs reported by NQF include those for the activity related to providing multistakeholder input into HHS's annual National Quality Strategy (16 percent, or $1.55 million) and those for the activity related to the maintenance of endorsed quality measures (13 percent, or $1.29 million). Figure 4 illustrates the costs and fixed fees NQF reported for the part of the third contract year covered in our review for each of the nine contract activities we reviewed. According to HHS, as of August 2011, about $55.2 million remains available for the NQF contract. About $15.1 million in MIPPA funding remains available for work to be conducted through January 13, 2013.
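As a quick consistency check (an illustrative sketch, not part of the audit analysis), the cost shares reported above can be reproduced from the period totals:

```python
# Quick arithmetic check of the reported cost shares (figures in millions
# of dollars, as published; percentages rounded to whole numbers).

def share(part, whole):
    """Percentage share of the whole, rounded to the nearest whole percent."""
    return round(100 * part / whole)

year2_total = 12.8  # second contract year: Jan. 14, 2010 - Jan. 13, 2011
year3_total = 9.6   # partial third year:   Jan. 14, 2011 - Aug. 31, 2011

# Second contract year: the two largest activities.
assert share(4.6, year2_total) == 36   # promotion of EHRs
assert share(3.3, year2_total) == 26   # endorsement of quality measures

# Partial third contract year.
assert share(2.1, year3_total) == 22    # EHRs; multistakeholder input is also 22%
assert share(1.55, year3_total) == 16   # National Quality Strategy input
assert share(1.29, year3_total) == 13   # maintenance of endorsed measures

# The two periods sum to the reported $22.4 million total.
assert round(year2_total + year3_total, 1) == 22.4
```

Each reported percentage matches the corresponding dollar amount divided by the period total, and the two period totals sum to the overall $22.4 million figure.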
In addition, HHS plans to obligate approximately $40.1 million of its PPACA funding through 2014 for NQF's activities related to health care quality measurement in response to PPACA. For its various programs or initiatives, HHS has used or planned to use about one-half of the quality measures that NQF has endorsed, maintained, or retooled under the contract, as of August 31, 2011, and HHS officials expect to evaluate if and how the remaining measures will be used. However, HHS has not comprehensively determined how it will use NQF's work under the contract to implement PPACA requirements related to quality measures. According to HHS officials, HHS has used or planned to use about one-half (164) of the 344 health care quality measures it has received from NQF through various endorsement, maintenance, and retooling projects under the contract, as of August 31, 2011. For example, of the 164 measures used or planned for use, 44 were used in CMS's Medicare and Medicaid EHR Incentive Program after being retooled—that is, converted to an electronic format that is compatible with EHRs—under the NQF contract. Although these 44 retooled measures were used in the EHR Incentive Program, HHS officials stated that NQF and HHS detected coding and other errors in the versions of the 44 retooled measures published in the program's final rule in July 2010, which required NQF to make corrections after the rule was published. NQF did not submit the revised versions of these 44 retooled measures to HHS until December 2010. HHS officials stated that because the final rule had already been published before HHS received the final formatted measures, CMS listed general guidance on its website to address the errors. HHS officials told us that these 44 measures are being used but have not yet been tested to assess the feasibility of implementing them in the electronic format.
Until the testing is complete, HHS runs the risk that some of these measures may not work as intended when implemented in electronic format for performance measurement. As a result, the agency does not have reasonable assurance that the retooled versions of the measures will correctly capture information from EHRs. In addition to the 44 retooled measures used in the EHR Incentive Program, HHS also has used or planned to use 120 measures that it received from endorsement and maintenance projects under the NQF contract for various HHS programs and initiatives. (See table 2 for details on specific programs in which HHS has used or planned to use health care quality measures received from NQF under the contract.) HHS officials told us that they expect to evaluate if and how they could use all of the remaining 180 of the 344 quality measures that were endorsed, maintained, or retooled under the NQF contract that are not currently in use or planned for use in HHS programs or initiatives. According to HHS officials, any measure developer can submit a measure to be considered for NQF endorsement. Therefore, all the measures received under the contract may not be applicable to a particular HHS health care quality program or initiative. HHS officials told us that they will review the remaining 180 measures to determine if they are applicable to their health care quality programs or initiatives. The officials expect that many of these measures will be used in HHS programs or initiatives required by PPACA. For example, HHS officials told us that they will consider implementation of most of the retooled measures in future stages of the EHR Incentive Program. In addition, PPACA directed HHS to establish a hospital value-based purchasing program, as well as to make plans or begin pilot programs for value-based purchasing in other settings of care. 
The hospital value-based purchasing program will use various quality measures and depend on the information collected on them to determine payments to providers. PPACA also required the development of no less than 10 provider-level outcome measures for hospitals and physicians by March 2012. Further, PPACA directed HHS to identify quality measures that could be used to evaluate hospice programs and publish these measures by October 1, 2012. HHS officials told us that they are in the process of determining whether or to what extent the remaining 180 measures HHS has received under the NQF contract can be used to address the new measurement needs and priorities established by PPACA. HHS officials told us that they prefer to use NQF-endorsed measures to meet HHS’s measurement needs because these quality measures are nationally recognized standards and in some cases HHS is required to use them. Although HHS has taken steps to determine how it can use the measures received under the contract with NQF, the agency does not have a comprehensive plan for determining how it will use the remainder of the work conducted under NQF’s contract to implement PPACA requirements, including plans for additional quality measures that need to be endorsed during the remaining contract years. HHS officials told us that HHS determines on an annual basis which activities—including work on quality measures—NQF is to perform under the contract through the interagency workgroup. The workgroup is comprised of representatives from various HHS agencies and allows them to provide input on their needs, including quality measures that need endorsement from NQF, for their respective programs. However, HHS officials told us that each HHS program assesses its quality measurement needs separately and provides varying levels of detail about its needs. 
Therefore, the extent to which all programs consistently incorporate PPACA's quality measurement requirements and deadlines into these assessments is unclear. The NPP's September 2011 report noted the importance of greater alignment of national quality measurement efforts, including the establishment of a comprehensive measurement strategy that identifies core measure sets, among other things. In addition, the report noted that all federal programs should work toward the establishment of a common platform for measurement and reporting. Without a comprehensive plan that delineates HHS's quality measurement needs, and given that each program assesses its quality measurement needs separately, the interagency workgroup may not be able to systematically ensure that all of HHS's quality measurement needs that implement PPACA requirements align with the selection and prioritization of activities for NQF to complete under the contract. While HHS has begun various efforts to assess its quality measurement needs, the lack of a plan that comprehensively determines the impact of PPACA on its needs could affect the agency's progress on its quality measurement efforts as well as how it selects and prioritizes NQF's contract activities. Officials told us that, prior to PPACA's enactment, CMS maintained a 5-year plan that listed its measurement needs based on agency priorities and the priorities established by the NPP for some of its programs; however, this plan had not been updated to reflect the requirements related to quality measurement and time frames established by PPACA. In March 2011, HHS published the National Quality Strategy as required by PPACA, which included six priority areas of focus. The report was required by PPACA to include agency-specific plans, goals, benchmarks, and standardized quality metrics for each priority area, but did not do so. HHS officials stated that this document describes HHS's initial plan for these elements and that they may be included in future versions of the strategy.
In June 2011, HHS officials told us that they plan to convene a Quality Measurement Task Force within CMS with a goal to comprehensively align, coordinate, and approve the development, maintenance, and implementation of health care quality measures for use in various CMS programs. As of August 2011, the task force was in an early stage of development, and therefore it is too early to determine whether it will accomplish its goal. Although these various HHS efforts are key steps toward helping the agency meet its quality measurement needs, they are not guided by a comprehensive plan that synthesizes key priority areas identified in various sources, such as those reported by the NPP or in the National Quality Strategy, for which measures may be needed. Without such a plan, HHS may be limited in its efforts to prioritize which specific measures it needs to develop and have endorsed by NQF for its health care quality programs and initiatives established by PPACA. As a result, HHS may be unable to ensure that the agency receives the quality measures needed to meet PPACA requirements and specified time frames related to quality measurement. Health care quality measures are increasingly important to HHS as it uses and will continue to use them in its existing and forthcoming programs and initiatives to evaluate health care delivery. For example, HHS's value-based purchasing programs are pay-for-performance programs that will require providers to collect and report information on health care quality measures and adjust payment levels based on providers' performance against the measures. PPACA has increased HHS's quality measurement needs, and the time frames specified in the law have also increased the urgency of obtaining endorsed quality measures—which are nationally recognized standards and in some cases are required by statute—to meet these needs.
Given that NQF is the entity in the United States with lead responsibility for endorsing health care quality measures, NQF's endorsement activities under the contract are of key importance to help meet HHS's quality measurement needs. However, NQF's endorsement process takes time. For more than half of the projects, including all five projects in the endorsement activity, NQF did not meet or did not expect to meet the initial time frames approved by HHS. In addition, projects under three of the contract activities have exceeded initial cost estimates, which resulted in HHS's modification of NQF's scope of work in some instances to help ensure that NQF's costs did not exceed the funding allocated for the contract activities. While HHS received information in monthly progress reports to help monitor NQF's performance under the contract, the agency did not use all of the monitoring tools required under the contract to help address issues related to time frames and cost estimates. These monitoring tools included an annual performance evaluation that could help HHS officials consider potential issues related to NQF's time frames and cost estimates when planning work for the next year, and a financial graph to be included in NQF's monthly progress reports. The graph would have compared reported costs to initial cost estimates, which is something that monthly progress reports do not do. Although HHS officials reported that they began enforcing the contractual requirement for NQF to submit the graph in August 2011, they have not implemented the required annual performance evaluation. By not taking advantage of these tools, HHS runs the risk of not having detailed and timely information that could help identify instances in which NQF might be at risk of not meeting time frames or exceeding estimated costs.
Identifying such instances could provide an opportunity for HHS to make any appropriate changes to NQF's scope of work, including setting priorities to ensure that HHS receives the quality measures it needs in a timely manner. With the time remaining under the contract, HHS has an opportunity to ensure that the work performed under NQF's contract better meets the agency's needs for its programs and initiatives. However, HHS has not developed a plan that comprehensively identifies its quality measurement needs for its programs and initiatives in light of PPACA's requirements or determines how it will use the work conducted during the remaining years of the NQF contract to help it meet these needs. In addition, critical tasks may need to be completed outside of the NQF contract. For example, HHS requested that NQF retool 113 measures under the contract and used 44 of the 113 measures, which included errors, in its EHR Incentive Program. As of November 2011, feasibility testing related to implementation of the retooled measures had not been completed, and HHS expected to perform this work outside of the NQF contract. Until the testing is completed, HHS runs the risk that some of the retooled measures may not work as intended when implemented in electronic format for performance measurement, which is a concern because use of these measures is an important component of HHS's long-term goal for providers to use health information technology (IT) to exchange information and improve the quality of care. Without a comprehensive plan, HHS lacks assurance that its selection of the work to be performed by NQF—and the approximately $55.2 million that the agency expects to spend for remaining work under the NQF contract—will be prioritized in the most effective way possible. Given that PPACA includes time frames for the implementation of quality measurement programs, NQF's pace in completing some of the work under the contract—particularly the endorsement activity—raises concerns.
If the endorsement projects continue to require extended completion times, HHS runs the risk of not having all the endorsed measures it needs for implementing its programs and initiatives. Should this occur, HHS may need to select, for its programs and initiatives, nonendorsed measures that have not undergone an objective and transparent review by NQF. To help ensure that HHS receives the quality measures it needs to effectively implement its quality measurement programs and initiatives within required time frames, we recommend that the Secretary of HHS take the following three actions:
- use monitoring tools required under the NQF contract to obtain detailed and timely information on NQF's performance, and use that information to inform any appropriate changes to time frames, projects, and cost estimates for the remaining contract years;
- ensure that testing of the electronic versions of the measures retooled by NQF that are being used or are planned for use in the Medicare and Medicaid EHR Incentive programs is completed in a timely manner to help identify potential errors and address issues of implementation; and
- develop a comprehensive plan that identifies the quality measurement needs of HHS programs and initiatives, including PPACA requirements, and provides a strategy for using the work NQF performs under the contract to help meet these needs.
We provided a draft of this report to HHS and NQF for review and comment. HHS neither agreed nor disagreed with our recommendations and provided general comments. NQF concurred with many of the findings in the report and provided clarification and additional context on the findings and recommendations. HHS and NQF's letters conveying their comments are reproduced in appendixes V and VI, respectively. In addition to the overall comments discussed below, we received technical comments from HHS and NQF, which we incorporated into our report as appropriate.
HHS’s comments included separate general comments from CMS and ASPE that provided context on aspects of our findings and recommendations. CMS’s comments stated that the draft report suggests that CMS must use all of the measures endorsed by NQF, and noted that not all NQF-endorsed measures are suitable for HHS quality reporting and public reporting programs. Although our draft report did not state that CMS must use all of the measures endorsed by NQF, we modified it to note specifically, among other things, that all measures received under the contract may not be applicable to a particular HHS health care quality program or initiative. CMS also stated that the report suggests that CMS has not developed measurement plans for various provisions of PPACA related to quality reporting, public reporting, and value-based purchasing programs. CMS provided additional context for current planning efforts to address these requirements, including its Quality Measurement Task Force. The draft report acknowledged this and other CMS planning efforts to address the health care quality requirements contained in PPACA and noted that, as of August 2011, this initiative was just beginning. Further, while various efforts are underway and CMS’s comments state that it has documented how quality measures will be used to address all relevant provisions of PPACA, CMS has not provided documentation of comprehensive plans to address PPACA requirements that include alignment across programs, detailed time frames to meet PPACA deadlines, or how it will use the NQF contract to help ensure that it receives the endorsed measures it needs to meet these requirements. ASPE’s comments noted, with respect to our first recommendation, that HHS used all except two of the monitoring tools called for in the contract. As noted in the draft report, HHS began receiving the monthly financial graph—one of the two monitoring tools—from NQF in August 2011. 
Also, ASPE noted its plans to update its performance evaluation system with NQF performance information for the first 2 contract years—the period January 14, 2009, through January 13, 2011—and to complete a final performance evaluation at the end of the contract in January 2013, which is the end of the fourth contract year. It did not indicate any plans to conduct the annual performance evaluation for the third contract year—January 14, 2011, through January 13, 2012—which would be consistent with the contract's requirements. With respect to our second recommendation, ASPE provided technical comments and also told us that CMS issued a contract solicitation to test the retooled measures, but CMS did not receive any bids. Instead, ASPE noted in its comments that two of CMS's current contractors will conduct feasibility testing on 69 of the 113 retooled measures that are planned for use in HHS's EHR Incentive programs. CMS does not plan to issue a solicitation for a new contract to test the feasibility of the remaining 44 retooled measures, which are currently being used in HHS's EHR Incentive Program. We noted these comments in the report. Regarding our third recommendation, ASPE stated that the measures that are not currently in "use" are being evaluated by HHS and that any conclusions that they will not be used are not accurate. Our draft report provided information on which measures were used or planned for use as of August 2011, and indicated that the remaining measures may be used in the future. Specifically, the report noted that HHS officials expect that many of these measures will be used in HHS programs or initiatives, and that HHS officials told us that they will review all the measures received under the contract to determine if they are applicable to their health care quality programs or initiatives. ASPE's comments also noted that our draft report did not include information on all NQF-endorsed measures used by the various agencies within HHS.
As noted in the draft report, we relied on HHS to identify programs and initiatives across HHS that use or plan to use these health care quality measures and recognize that those included in our report may not represent a comprehensive list of all health care quality programs and initiatives. As we recommended in our report, having a comprehensive plan could help HHS identify programs or initiatives that use or plan to use health care quality measures, including those endorsed by NQF. NQF’s comments state that it is providing its services to HHS under a cost reimbursement contract, which is used in circumstances where aspects of performance, such as time frames, cost estimates, and scope of work, cannot be reasonably estimated at the outset, and that precise adherence to initial estimates therefore should not be expected. As noted in the draft report, the contract type used for this work is used for efforts such as research, design, or study efforts where costs and technical uncertainties exist and it is desirable to retain as much flexibility as possible in order to accommodate change. However, the draft report also noted that this type of contract provides only a minimum incentive to the contractor to control costs. Given the risk associated with this type of contract, the fact that NQF has not met expected time frames on about half of its projects as of August 2011, and that NQF exceeded its initial cost estimates for some of its projects under its contract activities, it is especially important that HHS obtain detailed and timely information on NQF’s performance and use that information to inform any appropriate changes to time frames, projects, and cost estimates for the remaining contract years, as noted in our recommendations. 
NQF’s comments also state that time frames and costs for the work performed under the contract were initial estimates based on an early understanding of the work, that HHS and NQF understood that there would likely be changes to them as a result of the complexity and novelty of the work, and that they have worked collaboratively throughout the contract period to address these and other factors. As noted in the draft report, the final work plans, the technical proposal, and other documents that we reviewed included initial time frames for all projects and costs for the work performed during the contract year that were approved by HHS in collaboration with NQF. The draft report also notes several examples of reasons why the time frames and costs were modified over time. Contributing factors include the high volume of measures submitted, changes to the scope of work, and the novelty and complexity of the work. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix VII. Figure 5 illustrates a health care quality measurement framework of the various stages that a quality measure will go through, as described by the Department of Health and Human Services (HHS) and National Quality Forum (NQF) officials and others. These stages include measure development, endorsement, selection, and use, among others. This framework also shows examples of which entities, including HHS and NQF, are involved in each of the stages. 
As an example of actions taken during the second stage of this quality measurement framework, HHS officials described two different processes used for planning and identifying gap areas. The Centers for Medicare & Medicaid Services (CMS) Office of Clinical Standards and Quality has developed a standardized approach to identify quality measures that it uses in its health quality initiatives and programs using CMS’s Measures Management System. The Measures Management System requires the convening of a technical expert panel in the initial planning stage. Once convened, the technical expert panel is expected to work with measure developers who will gather information that will help the panel determine whether measures need to be developed for a program or initiative. During this stage, measure developers may conduct environmental scans or literature reviews, to determine the existence of measures that could be used for a program or initiative. If a measure does not exist, then the developer will work with CMS to develop the needed measures for the program or initiative, including measure testing. Upon development of the measures, the technical expert panel will evaluate them based on (1) importance to making significant gains in health care quality and improving health outcomes, (2) scientific acceptability of the measure properties including tests of reliability and validity, (3) usability, and (4) feasibility. Measures recommended by the panel are generally submitted for NQF endorsement. In contrast, CMS’s Center for Medicaid, Children’s Health Insurance Program (CHIP), and Survey & Certification Office—the CMS center which implements CHIP—uses a measure identification process that relies on existing measures rather than development of new measures, according to officials. This office worked with a technical advisory group, the Subcommittee on Children’s Healthcare Quality Measures for Medicaid and CHIP, to recommend an initial core set of measures for the CHIP. 
With assistance from CMS, the subcommittee evaluated measures based on importance, validity, and feasibility. CMS officials told us that they considered existing NQF-endorsed and non-NQF-endorsed measures based on the measurement needs of the program, and relied on measure testing conducted by the measure developers. Officials stated that they have also relied on the subcommittee to evaluate candidate measures for Medicaid child health programs. Officials said that they are not required to submit measures that will be used for Medicaid programs for NQF endorsement. From January 14, 2010, through August 31, 2011, the National Quality Forum’s (NQF) contract with the Department of Health and Human Services (HHS) included 16 tasks that NQF is required to perform. For purposes of our work, we categorized these tasks into nine contract activities. Specifically, in certain cases, we grouped activities that covered related areas of work into a single contract activity. For example, we consolidated the six administrative activities NQF is required to perform into a single contract activity. (See table 3 that shows how we consolidated these contract activities.) NQF was required to perform specific projects under the nine contract activities we identified. For example, under the endorsement contract activity, NQF was required to complete an endorsement project related to patient outcome measures. For purposes of our work, we identified and reviewed 63 projects NQF is required to perform under the nine contract activities, as shown in appendix III. The tables below provide a status update on the projects that the National Quality Forum (NQF) is required to complete under the nine contract activities we identified (see app. II). The contract activities and the projects under the activities NQF is expected to perform are determined on an annual basis by the Department of Health and Human Services (HHS) and NQF. 
As a result, the number of projects under the contract activities varies by contract year. For our reporting period—January 2010 through August 2011—we determined that NQF was required to conduct work on 63 projects under the contract activities we reviewed. To determine initial time frames for each project, we calculated the approximate time between expected start and end dates established in NQF’s 2009, 2010, and 2011 final annual Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) work plans, the 2011 Patient Protection and Affordable Care Act (PPACA) technical proposal, and other NQF documents. Actual time frames were determined by calculating the approximate time between the actual start date and the actual date of completion. For projects that were not yet complete as of August 2011, we included an expected time frame based on the approximate difference between the actual start date and the expected date of completion. NQF and HHS officials stated that any changes to the initial time frames were approved by HHS. As part of a project under its contract with the Department of Health and Human Services (HHS), NQF was required to review its endorsement process. To complete this project, the National Quality Forum (NQF) subcontracted with Mathematica Policy Research, Inc. (Mathematica), to conduct a review of NQF’s endorsement process, as requested by HHS. HHS officials stated that, given the importance of the endorsement process as part of the health care quality measurement framework, they requested an objective and thorough review of NQF’s endorsement process focused on timeliness, efficiency, and effectiveness. For example, they stated that they were interested in whether there were any efficiencies that could be implemented to shorten the process while maintaining an objective review of the health care quality measures that were evaluated under the process. 
Mathematica initiated its review of NQF’s endorsement process in October 2009 and completed the work in December 2010. In December 2010, Mathematica submitted a final report to NQF and recommended eight areas where improvements could be made and inefficiencies could be addressed in the endorsement process. In the final report, Mathematica noted that the current process is lengthy and the timeliness of the endorsement projects varies substantially. The report further noted that the length of the endorsement process affects the availability of endorsed measures for end users, such as HHS. To help reduce the time required to complete projects, Mathematica recommended that NQF create a schedule for its endorsement process for measure developers and develop feasible time lines that include clear goals for each endorsement project. As of May 2011, NQF officials stated that NQF has taken steps or plans to take steps in its future projects to address the eight areas for improvement Mathematica identified. For example, as of May 2011, NQF has solicited measures earlier based on a tentative annual project schedule to reduce the time lines of its endorsement process and reduced the period for voting by NQF member organizations from 30 to 15 days. NQF officials stated that they believe their efforts to implement the recommendations will shorten the time lines for the endorsement projects by 3 to 4 months without compromising the integrity of the endorsement process and measures to be evaluated under the process. HHS officials stated Mathematica’s recommendations were valuable because much of the work under the NQF contract needs to be completed in an accelerated time line to help fill critical measurement gaps associated with HHS’s health care quality programs and initiatives. 
They noted that it is too soon to tell the effects of these changes on the endorsement process, but they plan to monitor implementation of the changes in NQF’s 2011 endorsement projects under the contract. In addition, as of September 2011, HHS approved a new project under the contract to identify how the endorsement process can best align with HHS’s time frame for needed measures. As part of this project, NQF is expected to work with a consulting group to identify key performance metrics and define milestones and time lines to help streamline its endorsement process. In addition to the contact named above, Will Simerl, Assistant Director; La Sherri Bush; Krister Friday; Amy Leone; Carla Lewis; John Lopez; Elizabeth Martinez; Lisa Motley; Teresa Tucker; Carla Willis; and William T. Woods made key contributions to this report.
The Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) directed the Department of Health and Human Services (HHS) to enter into a 4-year contract with an entity to perform various activities related to health care quality measurement. In January 2009, HHS awarded a contract to the National Quality Forum (NQF), a nonprofit organization that endorses health care quality measures—that is, recognizes certain ones as national standards. In 2010, the Patient Protection and Affordable Care Act (PPACA) established additional duties for NQF. This is the second of two reports MIPPA required GAO to submit on NQF’s contract with HHS. In this report—which covers NQF’s performance under the contract from January 14, 2010, through August 31, 2011—GAO examines (1) the status of projects under NQF’s required contract activities and (2) the extent to which HHS used or planned to use the measures it has received from NQF under the contract to meet its quality measurement needs, as of August 2011. GAO interviewed NQF and HHS officials, reviewed relevant laws, and reviewed HHS and NQF documents. NQF has made progress on projects under its contract activities, as of August 2011. Specifically, NQF has completed or made progress on 60 of 63 projects. For example, NQF has completed projects to endorse measures related to various topics, including nursing homes. However, for more than half of the projects, NQF did not meet or did not expect to meet the initial time frames approved by HHS. For example, NQF completed one project to retool measures—that is, convert previously endorsed quality measures to an electronic format. While the retooling project was expected to be completed by September 2010, its completion was delayed by 3 months. NQF and HHS officials identified various reasons that contributed to this delay, including an expansion of the project’s scope and complexity. 
As a result of the delay, HHS did not have all the retooled measures it expected to include in its Electronic Health Records (EHR) Incentive Program. The delay of this project was also a contributing factor to NQF exceeding its estimated cost for its entire contract activity related to EHR by about $560,000 in the second contract year— January 14, 2010, through January 13, 2011. While HHS monitored NQF’s progress through monthly progress reports and approved changes to time frames and costs, HHS did not use all of the tools for monitoring that are required under the contract. Specifically, HHS did not conduct an annual performance evaluation to assess timeliness and cost issues that could have helped to inform NQF’s future scope of work. Until August 2011, HHS did not enforce the provision for NQF to submit a financial graph to compare monthly costs for each contract activity with cost estimates, which is information not included in monthly progress reports. These tools could have provided additional, more detailed information to help identify instances in which NQF might have been at risk of not meeting time frames or exceeding cost estimates, which could have provided HHS an opportunity to make any appropriate changes to NQF’s activities. HHS had used or planned to use about half of the measures—164 of 344—that it received from NQF under the contract, as of August 2011. For example, HHS used 44 measures that NQF retooled under the contract in its EHR Incentive Program. HHS officials stated that the 44 measures used in the program contained errors, which required corrections. HHS officials also have not yet tested the retooled measures to assess the feasibility of implementing them in the electronic format; therefore, HHS runs the risk that some of these measures may not work as intended when implemented. HHS officials told GAO they expect to evaluate if and how they could use all of the remaining measures HHS received under the contract. 
However, HHS has not determined how PPACA requirements for quality measurement may have changed its needs for endorsed quality measures. As a result, HHS has not established a comprehensive plan that identifies its measurement needs and time frames for obtaining endorsed measures and that accounts for relevant PPACA requirements. Without such a plan, HHS may be limited in its efforts to prioritize which specific measures it needs to develop and to have endorsed by NQF during the remainder of the NQF contract. As a result, HHS may be unable to ensure that the agency receives the quality measures needed to meet PPACA requirements, including time frames for implementing quality measurement programs. GAO recommends HHS: (1) use all monitoring tools required under the contract to help address NQF’s performance, (2) complete testing of retooled measures, and (3) comprehensively plan for its quality measurement needs. HHS neither agreed nor disagreed with these recommendations. NQF concurred with many of the findings in the report and provided additional context.
The Air Force depot maintenance activity group is part of the Air Force Working Capital Fund, a revolving fund that relies on sales revenue rather than direct congressional appropriations to finance its operations. DOD policy requires working capital fund activity groups to (1) establish sales prices that allow them to recover their expected costs from their customers and (2) operate on a break-even basis over time—that is, neither make a profit nor incur a loss. DOD policy also requires the activity group to establish its sales prices prior to the start of each fiscal year and to apply these predetermined or “stabilized” prices to most orders received during the year—regardless of when the work is actually accomplished or what costs are actually incurred. For the depot maintenance activity group, DOD policy also requires the group to recoup unbudgeted losses of $10 million or more in the year in which they occurred. In the case of losses that occur in the fourth quarter, the losses are to be recovered in the first quarter of the next fiscal year. Developing accurate prices is challenging since the process to determine the prices begins about 2 years in advance of when the work is actually received and performed. In essence, the activity group’s budget development has to coincide with the development of its customers’ budgets so that they both use the same set of assumptions. To develop prices, the activity group estimates (1) labor, material, overhead, and other costs based on anticipated demand for work as projected by customers, (2) total direct labor hours for each type of work performed, such as aircraft, engines, and repairable inventory items, (3) the workforce’s productivity, and (4) savings due to productivity and other cost avoidance initiatives. In order for an activity group to operate on a break-even basis, it is extremely important that the activity group accurately estimate the work it will perform and the costs of performing the work. 
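The break-even pricing mechanism described above can be sketched in a few lines. This is an illustrative simplification, not DOD's actual pricing model, and the function name and dollar figures below are hypothetical: under break-even pricing, the composite hourly sales price is simply total estimated costs divided by total estimated direct labor hours.

```python
# Illustrative sketch of break-even "stabilized" pricing (not DOD's actual
# model): the composite hourly sales price is the ratio of total estimated
# costs to total estimated direct labor hours. All figures are hypothetical.

def composite_hourly_price(estimated_costs: float,
                           estimated_direct_labor_hours: float) -> float:
    """Stabilized price, set before the fiscal year begins and applied
    to most orders regardless of when the work is actually performed."""
    return estimated_costs / estimated_direct_labor_hours

# Hypothetical budget: $1.2 billion in labor, material, overhead, and other
# costs spread over 10 million direct labor hours of anticipated work.
price = composite_hourly_price(1_200_000_000, 10_000_000)
print(f"${price:.2f} per direct labor hour")  # $120.00 per direct labor hour
```

Because the price is fixed up front, any gap between these estimates and actual costs or demand flows directly into the profits and losses the next two paragraphs describe.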
Higher-than-expected costs or lower-than-expected customer demand for goods and services can cause the activity group to incur losses. Conversely, lower-than-expected costs or higher-than-expected customer demand for goods and services can result in profits. With sales prices based on assumptions that are made as long as 2 years before the prices go into effect, some variance between expected and actual costs is inevitable. We have previously reported that DOD has had long-standing problems in preparing accurate working capital fund financial reports. The DOD Inspector General and/or the Air Force Audit Agency have not been able to express an opinion on the reliability of the working capital fund’s financial statements for fiscal years 1993 through 2003. The auditors reported that the financial information was unreliable and financial systems and processes, as well as associated internal control structures, were inadequate to produce reliable financial information. The Air Force recognized that the existing legacy depot maintenance accounting systems that were designed in the 1960s and 1970s did not produce usable, complete, reliable, timely, consistent, and auditable information. According to the Air Force, among other things, these systems (1) were not transaction driven, (2) did not capture costs at the task level, and (3) did not produce accurate financial statements. To help improve the depot maintenance activity group’s financial management operations, in January 1998, the Assistant Secretary of the Air Force for Financial Management approved the implementation of the Depot Maintenance Accounting and Production System—which includes an accounting system called the Defense Industrial Financial Management System (DIFMS) that originally belonged to the Navy—at the depots located at the air logistics centers. 
According to the Air Force, this system is designed to provide the accurate task-level cost data that are needed to support (1) financial analysis and cost management and (2) the development of prices that more accurately reflect the cost of providing goods and services to customers. The Air Force is in the process of implementing this system and plans to complete the implementation during fiscal year 2004. We identified five factors that accounted for about 95 percent of the sales price increase from $119.99 per direct labor hour in fiscal year 2000 to $237.84 per hour in fiscal year 2004. By far the most significant of these factors was material costs, which accounted for about 67 percent of the total increase. Air Force depot maintenance officials have yet to complete an effective and comprehensive analysis to determine the underlying causes of the material cost increases. Our analysis of the other four factors identified a variety of underlying causes, some of which were beyond the activity group’s control, such as rising health care costs and maintenance and modernization of equipment and facilities. However, our analysis of the two factors that involved surcharges determined that the carryover surcharge (based on anticipated losses on work carried over from the previous fiscal year) was probably too high for fiscal year 2004 and may have been unnecessary, while the fiscal year 2004 cash surcharge was unnecessary and should not have been added to the depot’s composite hourly sales price. Details on the five factors follow. Higher budgeted material costs accounted for about 67 percent of the total increase in the composite hourly sales price. 
Air Force depot maintenance officials provided anecdotal evidence to show that the higher material costs were caused at least partly by (1) the need to replace component parts more frequently because of both safety concerns and the aging of aircraft and engines and (2) increases in the prices that the depot maintenance activity group must pay its suppliers for component parts. However, because Air Force depot maintenance officials have not completed a comprehensive analysis to determine the underlying causes of why their material costs have increased, they cannot quantify the impact of the identified causes and are unsure if they have identified all of the major causes. Higher budgeted labor costs accounted for about 10 percent of the increase in the activity group’s composite hourly sales price. Our analysis showed that the higher labor costs were caused largely by factors beyond the activity group’s control, such as annual salary increases for federal employees and rising health care costs. Higher non-labor, non-material overhead costs, which the Air Force calls business operations costs, accounted for about 8 percent of the total increase. Our analysis showed that the primary causes were (1) costs related to the implementation of a new accounting system and (2) the fact that the fiscal year 2004 budget provided significant increases in several areas where expenditures had been constrained for several years, such as the maintenance and modernization of equipment and facilities. An increase in the surcharge included in the composite hourly sales prices to recoup anticipated losses on work carried over from the previous fiscal year (carryover surcharge) accounted for about 7 percent of the total increase in the sales price. Our analysis showed that the fiscal year 2004 carryover surcharge was probably too high and may have been unnecessary. 
An increase in the surcharge included in the composite hourly sales prices to generate additional cash (cash surcharge) accounted for about 3 percent of the total increase in the sales price. Our analysis also showed that the fiscal year 2004 cash surcharge was unnecessary because the Air Force Working Capital Fund’s $2.5 billion cash balance as of January 31, 2004, was already more than $1.3 billion higher than the maximum level allowed by DOD policy. Either the Office of the Secretary of Defense or the Congress could use this unneeded cash to satisfy other requirements. Table 1 shows the impact these factors had on the group’s composite sales price. As table 1 also shows, about 5 percent of the price increase was due to factors we either did not identify or could not quantify. As shown in table 1, although many factors contributed to the increase that occurred in the composite hourly sales price for fiscal years 2000 through 2004, higher budgeted material costs were, by far, the most significant. Further, our analysis showed that higher budgeted material costs had an even greater impact on some workloads. For example, the sales price for the repair of E-3 airborne warning and control system (AWACS) aircraft increased from $119.69 per hour in fiscal year 2000 to $330.06 per hour in fiscal year 2004, about 176 percent, and the price for the repair of F108-100 engines used in the KC-135 aircraft increased from $183,240 per engine in fiscal year 2000 to $1,214,124 per engine in fiscal year 2004, about 563 percent. Figure 1 shows the activity group’s budgeted and reported actual material costs per direct labor hour of work accomplished (material expense rate) for fiscal years 2000 through 2004. While Air Force depot maintenance officials can provide anecdotal evidence on why the activity group’s overall material costs have increased, they have yet to complete an effective and comprehensive analysis to determine why material costs have increased. 
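The percentage increases and factor shares cited above can be reproduced with a quick arithmetic check. The dollar figures come from the report; the calculation itself is ours.

```python
# Quick check of the price-increase arithmetic reported above.

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Composite hourly sales price, FY2000 -> FY2004: the price roughly doubled.
print(round(pct_increase(119.99, 237.84)))  # 98

# E-3 AWACS repair price per hour: about 176 percent.
print(round(pct_increase(119.69, 330.06)))  # 176

# F108-100 engine repair price: about 563 percent.
print(round(pct_increase(183_240, 1_214_124)))  # 563

# The five factors' shares of the increase (material 67, labor 10,
# business operations 8, carryover surcharge 7, cash surcharge 3)
# sum to the roughly 95 percent the report attributes to them.
print(67 + 10 + 8 + 7 + 3)  # 95
```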
Air Force depot maintenance officials believe the activity group’s higher material costs can be attributed, to a large extent, to increased material usage that has been caused by (1) the aging of the Air Force’s aircraft and engine inventory and (2) safety concerns. Further, they can provide anecdotal evidence to support their views. For example: Material costs related to the F-15 aircraft, which is more than 30 years old, have increased significantly over the past several years, in part, due to its age. For example, the Air Force is in the process of replacing the aircraft’s structural surfaces with a material—called gridlock—that is more expensive than the material that was used in the past to make structural repairs. According to Air Force officials at the air logistics center making the repairs, the new material is not prone to the problems that plagued the old material. As a result, the new material should allow for longer intervals between structural surface repairs and reduce structural repair costs in the future. For fiscal year 2003, the gridlock material added $24.5 million to the estimated cost of material to be used to repair the F-15 and $20.47 to the hourly rate charged customers for maintenance work on the aircraft. Due primarily to actions taken in response to safety concerns, the depot maintenance activity group raised the sales price for the repair of F101- GE-102 high-pressure turbine rotor assemblies from $144,464 in fiscal year 2003 to $261,872 in fiscal year 2004, an increase of $117,408 or 81 percent. From 1999 through 2002, three F-16 aircraft crashed due to engine failures caused by metal fatigue on the engine’s high-pressure turbine rotor. To address this safety problem, the Oklahoma City air logistics center began replacing the rotors on similar aircraft engines more frequently and started using more expensive rotors that were made of a stronger, more heat resistant metal alloy. 
Depot maintenance officials have also determined that another major cause of their higher material costs is price growth. The activity group pays various suppliers for component parts that it uses to repair aircraft, engines, and other items. Depot maintenance officials stated that their analysis showed that the amount they had to pay for repairable component parts in fiscal year 2003 was about 9 percent higher than the price they had to pay for the same component parts in fiscal year 2002. Similarly, the activity group’s fiscal year 2004 budget and, in turn, its fiscal year 2004 prices were based on the assumption that the prices it would have to pay its suppliers for repairable component parts would increase an additional 14 percent. Another major cause of the activity group’s higher material costs relates to the workloads that were transferred from two closing air logistics centers (Sacramento and San Antonio) to the three remaining centers (Ogden, Oklahoma City, and Warner Robins) in the late 1990s. Depot maintenance officials acknowledged that when workloads were moved from the closing air logistics centers to the remaining centers in the late 1990s, millions of dollars of material were also transferred. Officials at one center acknowledged that this material was never recorded in the center’s accounting records. When the maintenance shops needed component parts to accomplish the transferred workloads, they used the transferred material and did not record an expense in their financial records. This caused their reported material expenses to be understated in fiscal year 2000. However, since most of the transferred material has now been consumed, they now have to record the new material being purchased as expenses in their financial records. Consequently, part of what appears to be higher material costs is a more accurate reflection of actual costs. 
In August 2000, we reported that the Air Force depot maintenance activity group did not have an effective, systematic process for identifying and analyzing variances between planned and actual material costs. The report noted that such an analysis is frequently used for manufacturing processes to determine if material usage has increased and, if so, to determine the impact on material costs. The report also pointed out that such an analysis could be used to validate Air Force officials’ view that increased material usage is caused by external factors beyond the Air Force Materiel Command’s control, such as the aging of the Air Force’s aircraft and engine inventory. The report recommended that the Secretary of the Air Force direct the Commander, Air Force Materiel Command, to develop a systematic process to identify and analyze variances between depot maintenance activities’ planned and actual material usage. In its comments on our report, the Department of Defense concurred with our recommendation and stated, among other things, that the Air Force Materiel Command planned to develop a database that could be used to analyze material usage. As summarized below, the Air Force Materiel Command has subsequently taken numerous actions to gain a better understanding of its material cost and usage increases. From September through November 2000, material analysis teams were established at Air Force Materiel Command headquarters and at each air logistics center. In November 2000, Air Force Materiel Command headquarters developed a material analysis plan that (1) identified some of the material problems that would be addressed by the material analysis teams and (2) indicated that one of the key functions of the material analysis teams would be to link ongoing and planned material studies— thereby helping to reduce duplication of effort and increase coordination on ongoing studies. 
From January 2001 through February 2002, Air Force Materiel Command conducted a comprehensive analysis of the material cost and usage increases that occurred between fiscal years 1999 and 2000. However, the Command concluded that its analysis of these data was inadequate for a variety of reasons; for example, it did not include all work. Consequently, from March 2002 through November 2003, the Command developed a database to facilitate its material analyses. In November 2003, the Air Force Materiel Command initiated an analysis of the material cost and usage increases that occurred between fiscal years 2002 and 2003. Air Force Materiel Command officials believe, and we agree, that the revised methodology for analyzing material cost variances should provide more reliable results than the one they used to analyze the depot maintenance activity group’s fiscal year 1999 and fiscal year 2000 material cost and usage data. However, when they completed their preliminary analysis, they determined that additional work was needed on their methodology. According to the activity group’s fiscal year 2005 budget estimate, the revised model should be fully functional in November 2004. As shown in table 1, higher budgeted labor costs per direct labor hour of work accomplished (labor expense rate) accounted for $11.86, or about 10 percent, of the total increase that occurred in the activity group’s composite sales price between fiscal years 2000 and 2004. This increase, which is shown in table 2, was due to both an increase in the budgeted average cost of civilian labor and a decline in budgeted productivity. Although the increase in the labor expense rate was the second most significant reason for the composite sales price’s increase during this period, the labor costs’ relative impact on the overall composite sales price declined significantly during this 5-year period. 
Specifically, in fiscal year 2000, the budgeted labor expense rate ($53.32) was $5.49 higher than the budgeted material expense rate ($47.83), but by fiscal year 2004, it was more than $60 per hour less. Further, our analysis showed that about 61 percent of the higher labor cost was due to factors that are largely beyond the activity group’s control, such as annual cost-of-living increases and increased costs for health benefits for federal employees. Specifically, our analysis showed that about $7.25 of the $11.86 increase in the budgeted labor expense rate was due to an increase in the average cost of civilian labor from about $57,434 per work year per employee in fiscal year 2000 to about $65,132 in fiscal year 2004. This increase, in turn, was due to two factors: (1) budget estimates for the average annual cost of employee compensation (for basic salary and such variables as holiday and overtime pay) increased by $5,649 per work year per employee, or about 3 percent per year, and (2) budget estimates for the average annual cost of employee benefits (employer contributions for such things as health and life insurance) increased by about $2,049, or about 5 percent per year. The rest of the increase in the budgeted labor expense rate—about $4.61 per hour—was the result of a 7 percent decline in budget estimates for worker productivity. Our analysis showed that this decline was not the result of an actual decline in reported worker productivity, but rather was due to overly optimistic productivity assumptions for fiscal years 2000 through 2003 and what appears to be an overly pessimistic productivity assumption for fiscal year 2004. 
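The decomposition described above can be approximated with the following sketch. The annual labor costs and hourly rates are the report’s figures; the implied productive hours per work year are our back-calculation, and the split between the two effects depends on the allocation convention chosen, so the components differ slightly from the report’s $7.25 and $4.61.

```python
# Hedged sketch of the labor expense rate decomposition. The annual costs per
# work year and the hourly rates are reported figures; the implied productive
# hours per work year are back-calculated from them, not reported.
fy00_cost, fy04_cost = 57_434.0, 65_132.0      # average cost per work year
fy00_rate, fy04_rate = 53.32, 53.32 + 11.86    # labor expense rate per hour

fy00_hours = fy00_cost / fy00_rate             # implied productive hours (~1,077)
fy04_hours = fy04_cost / fy04_rate             # implied productive hours (~999)

# Productive hours per work year fell by about 7 percent, matching the
# report's stated decline in budgeted productivity.
productivity_decline = 1 - fy04_hours / fy00_hours

# One standard variance split: price the pay increase at base-year hours and
# price the lost hours at current-year cost. The two effects sum to the
# $11.86 total increase in the labor expense rate.
cost_effect = (fy04_cost - fy00_cost) / fy00_hours
productivity_effect = fy04_cost / fy04_hours - fy04_cost / fy00_hours
total_increase = cost_effect + productivity_effect
```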
Specifically, our analysis showed that (1) the activity group’s reported actual productivity increased about 4 percent between fiscal year 2000 and fiscal year 2003, but was consistently less than the budget estimate and (2) the fiscal year 2004 budget estimate was based on the assumption that the fiscal year 2004 productivity would be 3 percent less than the reported actual level for fiscal year 2003. When we asked Air Force Materiel Command officials why the activity group’s fiscal year 2004 budget estimate was based on the assumption that the workforce’s productivity would decline, they acknowledged that the budget assumption was probably too pessimistic. However, they also stated that they still believe that several initiatives that the Command is implementing will cause some decline in reported actual productivity during fiscal year 2004. For example, they indicated that the workforce’s overall productivity is likely to decline, at least in the short term, because they plan to add about 167 overhead positions in order to implement a more effective process improvement strategy, improve the activity group’s management of its infrastructure, and develop a methodology and tool to improve financial forecasting. We attempted to review reported actual productivity data for the first part of fiscal year 2004 to determine if the fiscal year 2004 budget estimate was based on an overly pessimistic productivity assumption. However, we were unable to do so because, as of February 2004, problems related to the implementation of a new accounting system prevented the activity group from producing reliable productivity data. This data reliability problem is discussed later in this report. Business operations costs are non-labor, non-material overhead costs for such things as the repair and modernization of equipment and facilities and for accounting and automated data processing services. 
An increase in business operations costs accounted for $9.64, or about 8 percent, of the total sales price increase as shown in table 1. Most ($7.99, or 83 percent) of this increase occurred between the fiscal year 2003 and fiscal year 2004 budget estimates. An Air Force Materiel Command official stated that the large increase in business operations funding for fiscal year 2004 was due largely to the Air Force’s realization that infrastructure support and other essential support services had been budgeted too low for several years and needed to be a higher priority in fiscal year 2004. For example, Air Force Headquarters reduced business operations cost projections that were included in the activity group’s initial fiscal year 2003 budget estimate by about $92 million because of concern about the projected large price increase and a desire to hold down costs, if possible. This reduction, in turn, forced the activity group to cut back on certain requirements, such as the repair and modernization of facilities and equipment. The Air Force Materiel Command official stated that the fiscal year 2004 budget estimate considered the years of deferred maintenance and modernization and allowed for significant increases in these areas. Another major cause of the large increase in business operations costs from fiscal year 2000 to 2004 is the activity group’s ongoing conversion of its legacy accounting systems to the Depot Maintenance Accounting and Production System. According to Air Force depot maintenance officials, this conversion is the primary reason why budgeted costs for automated data processing and software support increased from about $63.6 million in 2000 to about $115.5 million in 2004. 
Similarly, Air Force depot maintenance officials stated that the decision to phase out their old legacy systems is the primary reason why depreciation costs for automated data systems and equipment increased from about $92.2 million in fiscal year 2000 to about $122.7 million in fiscal year 2004. During periods of increasing costs, the depot maintenance activity group generally incurs financial losses on work that is carried over from one fiscal year to the next. The reason for this is that DOD’s stabilized price policy requires working capital fund activities to establish sales prices prior to the start of each fiscal year and to apply these predetermined or “stabilized” prices to most orders received during the year—regardless of when the work is accomplished or what costs are actually incurred. In other words, the activity group generally incurs financial losses on its “carryover” work because (1) the cost of doing the work generally goes up from one year to the next and (2) the stabilized price policy prevents the activity group from increasing its prices to cover the higher costs. If losses are expected on carryover work, the activity group adds a surcharge to the price of its new work in order to recoup the losses that are anticipated on its carryover work. Conversely, in the rare instance where costs are expected to decrease from one year to the next, a negative surcharge can be added. As shown in table 1, about $8.00, or 7 percent, of the increase in the depot maintenance activity group’s composite hourly sales price can be attributed to an increase in the carryover surcharge. Our analysis showed that the fiscal year 2004 carryover surcharge added about $164 million to activity group customers’ fiscal year 2004 depot maintenance costs. Our analysis also indicated that the fiscal year 2004 carryover surcharge was probably too high and may have been unnecessary. 
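The surcharge mechanics described above amount to spreading the anticipated carryover loss over the hours of new work. In the sketch below, the $164 million recouped and the roughly $8.00-per-hour surcharge are the report’s figures; the approximately 20.5 million direct labor hours of new work is our back-calculation, not a reported number.

```python
# Hedged sketch of the carryover surcharge arithmetic. The $164 million added
# to customers' fiscal year 2004 costs and the roughly $8.00-per-hour surcharge
# are reported figures; the 20.5 million hours of new work is implied by them
# and is an assumption, not a figure from the report.
def carryover_surcharge(expected_carryover_loss, new_work_hours):
    """Per-hour surcharge needed to recoup losses anticipated on carryover work."""
    return expected_carryover_loss / new_work_hours

surcharge = carryover_surcharge(164_000_000, 20_500_000)  # about $8.00 per hour
```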
Specifically, since most of the work carried over from fiscal year 2003 should have been accomplished during the first quarter of fiscal year 2004 and most of the work on fiscal year 2004 orders (which had the carryover surcharge) should have been accomplished in subsequent quarters, carryover losses should have occurred during the first quarter of fiscal year 2004. However, the activity group reported a profit of about $80 million for the first quarter. When we discussed this inconsistency with depot maintenance officials, they agreed that most of the carryover losses should have occurred in the first quarter, but they also indicated that they did not know if the reported profit (1) indicated that the carryover surcharge was unnecessary or (2) was unreliable due to problems related to the implementation of a new accounting system which, as discussed later in this report, has adversely affected the reliability of the activity group’s reported accounting data. Working capital funds are required to maintain cash balances that are sufficient to finance the operations of their activity groups but not so large that they unnecessarily tie up resources. DOD policy requires working capital funds to maintain cash balances at sufficient levels to cover 7 to 10 days of operational costs and 6 months of capital disbursements. For the Air Force Working Capital Fund, which includes several activity groups including depot maintenance and the U.S. Transportation Command, this equates to a cash balance of between $924 million and $1,221 million. It is important to note that (1) this cash requirement applies to the total working capital fund and (2) there is no requirement for individual activity groups to maintain a specific cash balance (for example, a cash surplus in one activity group can offset a deficit in another). 
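The cash requirement arithmetic can be sketched as follows. The $99 million daily operational cost and $231 million of 6-month capital disbursements are our back-calculations from the $924 million to $1,221 million range cited above; they are not figures from the report.

```python
# Hedged sketch of DOD's working capital fund cash requirement: 7 to 10 days
# of operational costs plus 6 months of capital disbursements. The inputs
# (in millions of dollars) are back-calculated so the result matches the
# $924 million to $1,221 million range; they are assumptions, not reported.
def required_cash_range(daily_ops_cost, six_month_capital_disbursements):
    """Return the (minimum, maximum) required cash balance."""
    minimum = 7 * daily_ops_cost + six_month_capital_disbursements
    maximum = 10 * daily_ops_cost + six_month_capital_disbursements
    return minimum, maximum

low, high = required_cash_range(99, 231)  # in millions of dollars
```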
If projections of cash disbursements and collections indicate that cash balances will drop below prescribed levels, the Air Force Working Capital Fund can generate additional cash by adding a surcharge to one or more of its activity groups’ composite sales prices. Additionally, if for some reason the cash balance becomes too low and there is a possibility of an Antideficiency Act violation, the working capital funds are required to generate additional cash. One way to raise cash is by advance billing customers for work not yet performed. Conversely, if the cash balances are too high, customer prices can be reduced, or the Office of the Secretary of Defense or the Congress can transfer the unneeded funds to other appropriations to either reduce budget requests or finance additional requirements. About $3.55, or 3 percent, of the increase in the depot maintenance activity group’s composite hourly sales price can be attributed to an increase in the cash surcharge. Our work showed that Air Force Headquarters decided to include a cash surcharge in the depot maintenance activity group’s fiscal year 2004 sales price. Our work also showed that the depot maintenance activity group’s fiscal year 2004 cash surcharge is unnecessary and, more importantly, that the Air Force Working Capital Fund will have a substantial amount of excess cash on hand at the end of fiscal year 2004 unless either the Office of the Secretary of Defense or the Congress uses this unneeded cash to satisfy other requirements. As noted previously, DOD policy guidance requires the Air Force Working Capital Fund to maintain a cash balance of 7 to 10 days of operational costs and 6 months of capital disbursements, which equates to between $924 million and about $1.2 billion. Our analysis showed that the Air Force Working Capital Fund’s end-of-month cash balance was at least $2.2 billion for each of the first 4 months of fiscal year 2004 and was more than $2.5 billion as of January 31, 2004. 
The $2.5 billion amount was more than $1.3 billion higher than the maximum allowed by DOD policy. Most of the excess cash was generated by the work performed by the U.S. Transportation Command, whose cash is included in the Air Force Working Capital Fund. When we contacted Office of the Secretary of Defense officials in March 2004 about the Air Force Working Capital Fund’s excess cash, in general, and the depot maintenance activity group’s fiscal year 2004 cash surcharge, in particular, they stated that they allowed the Air Force to include a cash surcharge in its depot maintenance activity group’s fiscal year 2004 sales prices because (1) problems related to ongoing efforts to implement a new accounting system made the reliability of the activity group’s accounting data questionable and (2) uncertainty related to ongoing actions to remove contract depot maintenance operations from the Air Force Working Capital Fund made it difficult to reliably project future cash collections and disbursements. However, they also acknowledged that the Air Force Working Capital Fund has had a substantial amount of excess cash throughout fiscal year 2004 and stated that they would be exploring possible uses for the excess cash over the next few months. Prices that the depot maintenance activity group charged customers were not set high enough to recover the group’s reported costs of performing the work. Air Force officials at the three air logistics centers and the Air Force Materiel Command informed us that the activity group’s prices were not set high enough because the Air Force artificially constrained the activity group’s prices for fiscal years 2000 through 2003 by not including all anticipated costs in the prices. In part, because the sales prices were set too low during this period, the activity group reported losing about $1.1 billion, as shown in table 3. 
To help recoup the losses, the activity group billed and collected more than $1 billion from customers outside the pricing structure. As a result, the effective prices actually paid by customers were significantly higher than the established sales prices during fiscal years 2000, 2001, and 2002. The Air Force changed its sales price development philosophy in 2002 in an effort to bring prices charged customers in fiscal year 2004 more in line with operating costs. In addition, the Air Force allowed out-of-cycle price increases in fiscal years 2002 and 2003 to alleviate projected losses. Even though the activity group made out-of-cycle price increases, it still reported losses for those two fiscal years. Air Force officials told us the prices were constrained to help ensure that the activity group’s customers would be able to get needed work done with the amount of funds provided them through the budget process. Our work at the air logistics centers showed that customer sales prices were in fact constrained. For example, at one center we found that sales prices for work on inventory items performed by the avionics shop were constrained by not including all estimated costs of materials to be used in accomplishing the work. In developing its fiscal year 2003 customer prices, this shop estimated that its material costs would be about $160 million. However, because of the pricing constraints levied by the Air Force, the avionics shop was only allowed to include about $123 million of material costs in its prices, a difference of about $37 million. Constraining prices, however, is contrary to DOD policy (DOD Financial Management Regulation, 7000.14-R, Volume 2B, Chapter 9) that requires activity groups to set prices to recover the full cost of providing goods and services to customers so that the working capital fund activity group would operate on a break-even basis—that is, not make a profit or incur a loss. 
During fiscal year 2002, when it was developing the depot maintenance activity group’s fiscal year 2004 prices, Air Force headquarters reversed its philosophy of constraining customer sales prices in order to reduce the risk of future financial losses. In addition, the Air Force allowed the activity group to impose out-of-cycle customer price increases in fiscal years 2002 and 2003 to lessen projected losses resulting, in part, from its price-constraining philosophy that had been in place when these fiscal years’ prices were developed for the budget. Specifically, in June 2002, the Office of the Assistant Secretary of the Air Force directed the Air Force Materiel Command to have each air logistics center increase the sales (repair) prices on the 20 inventory items that it estimated would lose the most money. These price increases were effective beginning July 1, 2002—the last quarter of fiscal year 2002. Our analysis of data provided by the three air logistics centers showed that this action increased the activity group’s revenue by about $23 million, thus avoiding additional losses by this same amount. By authority of the same June 2002 directive, the three air logistics centers were also directed to increase their fiscal year 2003 sales prices to avoid an estimated $443 million loss that was being projected for fiscal year 2003 at that time. This out-of-cycle increase resulted in the prices charged customers increasing from $179.42 an hour to $199.66 an hour, an increase of approximately $20 per hour. The air logistics centers were not provided guidance regarding how the price increase was to be applied to their individual workloads. One center applied the increase “across the board” to all workloads. Another center applied the increase primarily to its aircraft workload. The third center applied the increase primarily to its aircraft workload and also increased the sales price for one of its engines. 
As shown in table 4, how this increase was implemented had a profound impact on some of the fiscal year 2003 prices charged customers, resulting in price increases significantly higher than the average $20 per hour. In some cases the prices increased by more than 50 percent. The $20 per hour average sales price increase for fiscal year 2003 was intended to make the activity group break even at the end of fiscal year 2003 based on projected losses at the time the decision was made to increase prices. Even though sales prices were increased—significantly in some cases as shown in table 4—the activity group still reported a financial loss at the end of the fiscal year. According to an Air Force Materiel Command official, when the estimated price increase was developed, they did not consider that some of the revenue from the fiscal year 2003 price increase would be realized in fiscal year 2004 because of work started and/or accepted in fiscal year 2003 that had to be carried over and completed in fiscal year 2004. Further, we found that the amount of the reported loss at the end of fiscal year 2003 was questionable. Based on our analysis of the financial data and discussions with activity group officials, the Air Force’s implementation of the new accounting system, the Defense Industrial Financial Management System (DIFMS), resulted in wide swings in the group’s reported net operating results during fiscal year 2003. For example, one air logistics center’s net operating results went from a $1 million loss to a $94 million loss over a period of 1 month due to the implementation of DIFMS. Another center reported a profit throughout most of fiscal year 2003, including a reported profit of $137 million at the end of August 2003. However, the center ended the fiscal year with a reported loss of $17 million—a $154 million shift in 1 month—due to the implementation of the new accounting system. 
Air Force officials told us that implementing DIFMS was a major effort, that they were aware of system implementation problems, and that they were working to resolve them. The Air Force lacks systematic and effective processes for controlling costs. In an effort to better control cost growth, the Air Force Materiel Command has (1) been trying since 2000 to develop a systematic methodology to better understand the reasons for the rapidly increasing material costs and (2) implemented a depot maintenance process improvement program. Although these efforts represent a positive step in trying to better understand and control its depot maintenance costs, the Command has not (1) completed a successful methodology for analyzing the reasons for the rapid material cost increases and (2) ensured that data are entered into the data repository that is to be used to share cost-saving ideas on process improvements among the three air logistics centers and to track the costs and savings for these improvements. These actions are necessary in order for management to control the increasing depot maintenance costs. In August 2000, we reported that the Air Force depot maintenance activity group did not have an effective, systematic process for identifying and analyzing variances between planned and actual material costs. In its comments on our report, DOD concurred with our recommendation and stated, among other things, that the Air Force Materiel Command planned to develop a database that could be used to analyze material usage. However, as discussed earlier in this report, the Command still has not completed the methodology for analyzing material cost increases. It is imperative that the Air Force Materiel Command complete this methodology for analyzing material costs since material costs have increased significantly over the past few years. 
Specifically, budgeted material costs for fiscal year 2004 are about $2.8 billion and are expected to account for about 57 percent of the activity group’s total fiscal year 2004 costs. A second reason is that, as discussed previously, higher material costs account for about 67 percent of the total sales rate increase that occurred between fiscal years 2000 and 2004. The Air Force has recognized the need to make its depot maintenance activities more effective and efficient by incorporating best business practices that commercial companies use. The three air logistics centers undertook various process improvement initiatives designed to improve the efficiency and effectiveness of their operations. However, as discussed in the next section, the activity group does not have an effective mechanism for tracking costs and documenting savings that may have resulted from these initiatives. According to Air Force depot maintenance documentation, these initiatives are intended to eliminate waste or non-value-added processes for selected business lines, thereby reducing the number of flow days, improving the usage of available workspace, and reducing the overtime worked. In implementing these initiatives, Air Force officials visited over 35 private industry companies to gather information to improve their processes. For example, officials at the Oklahoma City Air Logistics Center consulted with Standard Aero (San Antonio), Inc. to reengineer its constant speed drive repair process. According to the center’s documentation, this initiative, to date, has reduced flow days by 20, reduced the part rejection rate by 25 percent, and resulted in an additional $2.9 million in revenue over pre-2002 levels. When we visited Standard Aero (San Antonio), Inc., we found that these efficiencies were obtained by applying a cellular approach to depot maintenance repair work that differed significantly from the traditional functional approach. 
Other process improvement initiatives included the following. According to center documentation and officials, the Oklahoma City Air Logistics Center’s initiative for the KC-135 aircraft cut in half the number of aircraft awaiting scheduled depot maintenance. Further, the center reported that this effort reduced the number of flow days from 421 days in fiscal year 2000 to 221 days in fiscal year 2003 with a goal to have it down to 178 days by fiscal year 2005. This initiative (1) included the renovation of nine depot maintenance docks and the associated support areas and (2) implemented the “continuous flow” concept that consists of having as many aspects of the job in one area as possible and arranged so that the work flows from one step to the next without unnecessary movement to create more effective cells of productivity. Project officials noted that these changes have enabled the center to become much more efficient and put the needed aircraft back into the warfighter’s hands more quickly. The Ogden Air Logistics Center reported that its central gearbox initiative—which is one of six projects initiated to improve the processes it uses to repair brakes, gearboxes, pylons, struts, actuators, and wheels—has increased both the efficiency and effectiveness of the gearbox repair process. Specifically, according to the center’s process improvement manager, the gearbox project has allowed the center to (1) reduce the gearbox’s average shop flow days from 90 days to 52 days, (2) reduce the average number of gearbox assemblies in work at any given time from 46 to 21, and (3) reduce the gearbox’s average labor standard from 236 hours per gearbox to 68 hours. The initiative is also expected to reduce annual direct labor costs by about $5 million, beginning in fiscal year 2005. 
The process improvement manager stated that the Ogden Air Logistics Center achieved the reduction in labor costs by streamlining processes under the cellular repair concept, which eliminated bottlenecks in staging areas and cut out wasteful, unneeded repair steps. The Air Force Materiel Command has not effectively implemented its data repository, which is a key part of its Process Improvement Program. Because the air logistics centers did not enter all the process improvement initiative data into the data repository, the Command (1) has been unable to properly document and implement a shared, standard process improvement program to continuously measure, analyze, and improve its depot maintenance processes and (2) does not have an effective mechanism for tracking costs and documenting savings that could have resulted from these initiatives. Recognizing the need for better oversight of its process improvement efforts, the Air Force Materiel Command issued Instruction 21-137 on August 20, 2003, which established the policies and procedures for process improvements within all maintenance divisions at the centers. The instruction points out that process improvement within the Command is vital to becoming “World Class Depots providing the world’s best warfighter support.” It goes on to add that leveraging process improvement initiatives across the command requires standardized guidance, integration, and tracking. Accordingly, the instruction established a standard methodology by which the three centers would accomplish process improvement and become “World Class.” This was to be done by documenting and implementing a shared, standard process improvement program to continuously measure, analyze, and improve the Command depot maintenance processes. A key component was the establishment of the Command data repository to enable the Command to track process improvement results and to share the lessons learned among the centers. 
As of October 2003, the data repository contained 108 process improvement initiatives. We found three problems with the implementation of this instruction and the creation of the Command data repository. First, we found that several large process improvement initiatives were not included in the data repository. For example, the process improvement projects that make up the Oklahoma City Air Logistics Center’s initiative to transform the largest industrial facility in the DOD—its building 3001—into a world class depot maintenance facility were not included in the Command data repository. According to Command Depot Maintenance Transformation officials, this initiative is beyond what they were targeting to document and capture in the data repository, but they agreed that the individual projects resulting from this initiative should be included. These officials also acknowledged that the major projects that currently make up their F-15 Trailblazer initiative—to evaluate, test, and redefine business processes for repairing the Air Force’s F-15 aircraft—were not in the data repository. Air Force Materiel Command officials stated that the projects from these two large initiatives need to be included in the data repository in order for the Command to oversee the process improvement initiatives at each of the centers. The officials added that they plan to add these initiatives to the data repository as they become better defined. The Command officials also stated that the data repository has not been as fully used as envisioned and that not all process improvement initiatives have been entered as required by the Air Force Materiel Command Instruction 21-137. Second, while the Air Force Materiel Command created a data repository of ongoing initiatives to provide needed oversight of its improvement initiatives, the information in the data repository has not proved useful because in many cases the centers failed to fill in the data fields for each initiative. 
As a result, we found that some of the required data fields were missing important information needed to centrally manage the process improvement initiatives. For example, 51 of the 108 initiatives had no title clearly describing the initiative. Another important required data field, which identifies the root causes of the problem to be corrected or improved, was not completed for 54 of the 108 initiatives. Command officials agreed that the data repository has not been as useful as envisioned because many of the initiatives entered have not been fully documented since the centers have not completed the needed or required data fields. Third, the Air Force Materiel Command Instruction 21-137 also requires that the process improvement results be recorded and tracked in the Command data repository including the costs and benefits associated with each initiative. However, the Command’s input guidance to record process improvements in the data repository does not require that the data fields for costs, return on investment, and quantifiable results be completed. This contradicts Command Instruction 21-137, which requires this information. As a result, we found the following: Cost information to implement the initiative was not recorded in the data repository for 89 of the 108 initiatives. Of the 19 initiatives containing some cost information, only 10 initiatives had recorded costs totaling $6,328,000. The remaining 9 initiatives had recorded costs as “minimal” or “not applicable.” Return on investment information—such as dollar savings—was not recorded in the data repository for 93 of the 108 initiatives. Of the 15 initiatives containing some return on investment information, only two initiatives had recorded a return on investment totaling $828,000. The remaining 13 initiatives had recorded return on investment information with no dollar savings identified or as not applicable. 
Quantifiable results information—such as flow days reduced—was not recorded in the data repository for 64 of the 108 initiatives. We analyzed the recorded information for the remaining initiatives containing quantifiable results and found that they did report improvements such as reducing the number of flow days and man days and improving the usage of available workspace. An official at one air logistics center pointed out that in addition to reporting their improvement initiatives in the Command data repository, they maintain their projects on two additional local databases. Since none of these databases can communicate with one another, each database is separately maintained and updated by the program managers and the process improvement office. This is difficult to do in a timely manner and leads to differences among the databases. The center has approved a process improvement initiative to standardize these databases. Additionally, a Command depot maintenance transformation official stated that in preparing for a presentation to the Command’s depot maintenance management team he had to contact the three air logistics centers directly to obtain complete project information for his presentation. He emphasized that this would not have been necessary if the three centers had been updating the data repository with complete and useful information as required. Without complete and useful information, the data repository cannot serve as an effective tool for management to oversee these initiatives and the Command runs the risk of the centers duplicating efforts and developing stovepipe processes that hinder the Command’s efforts to provide world class depot maintenance services. 
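The gaps described above are the kind that a simple completeness check over the repository would surface. The sketch below is illustrative only: the field names and sample records are assumptions, not the repository’s actual schema.

```python
# Illustrative data-completeness check of the kind the findings above describe.
# The field names and sample records are hypothetical; the repository's actual
# schema is not documented in this report.
def count_missing(initiatives, field):
    """Count records with no usable value in a required data field."""
    unusable = (None, "", "minimal", "not applicable")
    return sum(1 for record in initiatives if record.get(field) in unusable)

repository = [
    {"title": "Constant speed drive cell", "cost": 2_900_000,
     "root_cause": "functional shop layout"},
    {"title": "", "cost": "not applicable",
     "root_cause": "staging-area bottlenecks"},
    {"title": None, "cost": None, "root_cause": None},
]

missing_costs = count_missing(repository, "cost")  # 2 of 3 records lack cost data
```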
The Air Force depot maintenance activity group has not always operated like a business entity and thus has not achieved the goals envisioned under the working capital fund concept—that is, to operate like a business by developing and using effective methods to control operating costs, charging customers prices that recover operating costs, and ensuring that established management tools to measure the results of operational improvement efforts are used as intended. Specifically, the group has been unable to develop an analytical methodology to effectively identify the causes of its continuously rising material costs and take corrective actions as appropriate. Further, working capital fund activities are to establish sales prices that allow them to recover their expected costs from their customers. However, the activity group intentionally set its sales prices lower than what was required to recover its operating costs and, as a result, incurred operating losses. Although several promising improvement initiatives are underway at the three centers, these efforts are stovepiped, and management has been unable to clearly show that the benefits of the initiatives exceed their costs. The congressional defense committees have shown interest in past years in the amount of cash in the Defense Working Capital Fund. The Air Force Working Capital Fund cash balance exceeded the maximum cash requirement by over $1.3 billion for each of the first four months of fiscal year 2004. If DOD does not take action to reduce the cash balance to the 7- to 10-day requirement, the Congress may wish to take action to reduce the amount of excess cash in the Air Force Working Capital Fund. 
To improve the business operations of the Air Force Working Capital Fund, including cash management, the setting of prices, and efforts to control the costs of the depot maintenance activity group, we are making two recommendations to the Secretary of Defense and four recommendations to the Secretary of the Air Force. We recommend that the Secretary of Defense take action to reduce the amount of excess cash in the Air Force Working Capital Fund and direct the Secretary of the Air Force to develop prices that cover the total costs of providing goods and services to customers rather than constraining prices as has been done in the past. We recommend that the Secretary of the Air Force direct the Commander, Air Force Materiel Command, to take the following actions. First, develop and complete a viable, systematic methodology for analyzing material cost variances that encompasses both the price paid for material and material usage, which would enable the Command to better understand the underlying causes of the rapidly increasing material costs and take actions to control them, as appropriate. Second, hold the air logistics centers’ managers accountable for compliance with the Command’s mandatory Instruction 21-137, which requires the centers to enter all initiatives and related data into the data repository completely and accurately; this should include information on costs, return on investment, and quantifiable results for all process improvement initiatives, and at a minimum the Command needs to issue a memorandum to the air logistics centers reiterating their responsibilities for compliance with the instruction. Third, periodically review the data contained in the data repository to (1) determine whether the data provided by the air logistics centers are complete and useful and (2) identify ways to consolidate initiatives and share lessons learned from the initiatives with the three centers. 
Finally, we recommend that the Commander determine and summarize the actual savings and real benefits, as compared to the costs, of the improvement initiatives already contained in the repository. DOD provided written comments on a draft of this report. In its comments, DOD concurred with the six recommendations in the draft report and is taking action to implement them. In fact, DOD has already taken action to help eliminate the excess cash in the Air Force Working Capital Fund by transferring $1.1 billion of the excess cash to the Army and Navy Operation and Maintenance appropriation accounts in April 2004. However, the Air Force Working Capital Fund still had about $400 million of excess cash as of the end of April 2004. Recognizing that cash balances fluctuate from month to month, we continue to believe that it would be appropriate for the Congress to monitor the working capital fund cash balances and take action to reduce the amount of excess cash if the balances continue to be in excess of amounts necessary. Concerning our recommendation on the Air Force developing prices that cover the total costs of providing goods and services to customers, DOD stated that the DOD Comptroller will perform a more intensive review of the Air Force depot maintenance billing rates to ensure that the proposed pricing structure is adequate to cover the costs of operations. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on Armed Services; the Subcommittee on Readiness, House Committee on Armed Services; and the Ranking Minority Member, Subcommittee on Defense, House Committee on Appropriations. We are also sending copies to the Secretary of Defense, the Secretary of the Air Force, and other interested parties. Copies will be made available to others upon request. 
Should you or your staff have any questions concerning this report, please contact Gregory D. Kutz, Director, at (202) 512-9505 or kutzg@gao.gov or William M. Solis, Director, at (202) 512-8365 or solisw@gao.gov. An additional contact and key contributors to this report are listed in appendix III. To determine what factors were primarily responsible for causing the composite sales price to increase from $119.99 per hour in fiscal year 2000 to $237.84 per hour in fiscal year 2004, we obtained and analyzed budget documents that provided information on cost factors, such as material costs, overhead costs, and salaries, used in developing the prices. We determined which factors caused the prices to increase the most and discussed the reasons for the price increases with officials at the Air Force Materiel Command and the three air logistics centers. In addition, we obtained information on the impact of the price increases on certain aircraft and engines, such as the F-15 and E-3 aircraft. We also reviewed and analyzed the Air Force’s February 2002 report on its depot maintenance material usage and cost analysis study to determine why prices have increased. Finally, we met with Air Force Materiel Command officials to determine what actions they were taking to identify the causes of increasing material costs—a significant factor causing prices to increase—since the Air Force issued its February 2002 report. To determine if the prices charged customers during fiscal year 2000 through fiscal year 2003 recovered the reported actual costs of performing the work, we obtained and analyzed budget documents and accounting data that provided information on budgeted and actual revenue, direct costs, overhead costs, and net operating losses. When the activity group reported losses, we met with officials to determine (1) why the prices charged customers did not recover the costs incurred in providing them the goods and services and (2) how the Air Force recovered these losses. 
To determine if the Air Force has taken effective steps to improve efficiency and control the activity group’s costs, we obtained the Command’s depot maintenance database, which contained 108 initiatives aimed at improving depot maintenance operations. We analyzed the database to determine if each initiative had information on (1) the cost to implement the initiative and (2) the amount of dollar savings associated with implementing it. Information on costs and savings is critical to determining if an initiative is cost beneficial. We also analyzed the database to determine if it contained sufficient information to enable the air logistics centers to share information with each other on the initiatives, thereby reducing or eliminating redundant efforts. We also met with officials from the Air Force Materiel Command and the air logistics centers to discuss (1) process improvement initiatives, especially information on initiative costs, savings, and the sharing of information, and (2) whether all initiatives were included in the database. We performed our work at the headquarters, Office of the Under Secretary of Defense (Comptroller) and the Office of the Secretary of the Air Force, Washington, D.C.; Air Force Materiel Command, Ohio; the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the Ogden Air Logistics Center, Hill Air Force Base, Utah; and the Warner Robins Air Logistics Center, Robins Air Force Base, Georgia. We also visited Standard Aero (San Antonio) Inc. and discussed with company officials the Oklahoma City Air Logistics Center’s initiative to reengineer its constant speed drive repair process. We did not verify the accuracy of the accounting and budget information used in the tables in this report, all of which was provided by the Air Force in then-year dollars. We conducted our work from June 2003 through April 2004 in accordance with U.S. generally accepted government auditing standards. 
We requested comments on a draft of this report from the Secretary of Defense or his designee. DOD provided written comments, and these comments are presented in the Agency Comments and Our Evaluation section of this report and are reprinted in appendix II. Staff who made key contributions to this report were Francine DelVecchio, Karl Gustafson, Keith E. McDaniel, Christopher Rice, Harold P. Santarelli, Ron Tobias, and Eddie Uyekawa.
The Air Force depot maintenance activity group’s in-house operations generate about $5 billion in annual revenue, principally by repairing aircraft, missiles, engines, and other assets. In doing so, the group operates under the working capital fund concept, in which customers are to be charged the anticipated costs of providing goods and services to them. The group’s average price for in-house work almost doubled between fiscal years 2000 and 2004, from $119.99 per hour to $237.84 per hour. GAO was asked to determine (1) what factors were primarily responsible for the price increase, (2) if the prices charged recovered the reported actual costs of performing the work, and (3) if the Air Force has taken effective steps to improve efficiency and control the activity group’s costs. GAO identified five primary factors that explain why the Air Force depot maintenance activity group’s average price increased from $119.99 per direct labor hour of work in fiscal year 2000 to $237.84 per hour in fiscal year 2004. An increase in material costs accounted for about 67 percent of the total increase and was by far the most significant factor. The Air Force has identified some of the causes of the higher material costs, such as aging aircraft, but has yet to complete an effective and comprehensive analysis of material cost increases. As a result, it (1) cannot quantify the extent to which individual causes contributed to higher costs and (2) does not know if it has identified all of the major causes. 
GAO's analysis of the other four factors showed that (1) the increase in labor costs was due largely to events beyond the group's control, such as annual salary increases, (2) the increase in business operations costs was due partly to costs related to implementing a new accounting system, (3) a surcharge intended to recoup anticipated losses on work carried over from the previous fiscal year may have been unnecessary, and (4) a surcharge intended to generate additional cash in fiscal year 2004 for the Air Force Working Capital Fund was unnecessary. GAO's analysis showed that, due in part to these surcharges, (1) the Air Force Working Capital Fund, which includes the depot maintenance and several other activity groups, had a $2.5 billion cash balance as of January 31, 2004, and (2) this balance was more than $1.3 billion higher than the maximum level allowed by DOD policy. Either the Office of the Secretary of Defense or the Congress could use this unneeded cash to satisfy other requirements. DOD officials told us that they are exploring options on what to do with the excess cash. GAO's analysis of the group's financial reports showed that prices charged customers were not set high enough to recover about $1.1 billion of the group's reported costs for fiscal years 2000 through 2003. The activity group is required by DOD policy to set prices to recoup the cost of doing work. However, Air Force officials informed us that the prices were artificially constrained to help ensure that the group's customers would be able to get needed work done with the amount of funds provided to them through the budget process. The Air Force changed its sales price development philosophy to bring the prices charged customers in fiscal year 2004 more in line with operating costs. In addition, the Air Force allowed out-of-cycle price increases in fiscal years 2002 and 2003 to alleviate projected losses. 
Further, the Air Force Materiel Command has not been successful in its efforts to control costs. Although several promising initiatives are underway, the Command has not (1) developed a successful methodology for analyzing the reasons for the rapid material cost increase or (2) effectively utilized its established data repository to share cost-saving process improvement ideas among the three air logistics centers and to demonstrate whether its cost-saving initiatives have been successful.
While the decennial census has long collected data on race and ethnicity, a specific question on Hispanic origin was first added to the 1970 Census in response to the 1965 Voting Rights Act, which required the data to ensure equality in voting. Today, antidiscrimination provisions in a number of statutes require census data on race and Hispanic origin in order to monitor and enforce equal access to housing, education, employment, and other areas. The Office of Management and Budget (OMB), through its Federal Statistical Policy Directive No. 15, sets the standards governing federal agencies’ collection and reporting of race and ethnicity data. At least seven cabinet-level government departments, the Federal Reserve, every state government, and a number of public and private organizations use Hispanic data. Although not required by federal legislation or OMB standards, Hispanic subgroup data are also used for many of these same purposes. In addition, subgroup data are especially important to communities with rapidly growing and diverse Hispanic populations. Collecting data on race and ethnicity has been a persistent challenge for the Bureau. Race and ethnicity are subjective characteristics, which makes measurement difficult. Moreover, the Bureau has found that some Hispanics equate their ethnicity—Hispanic—with race, and thus find it difficult to classify themselves by the standard race categories that include, for example, white, black, and Asian. The Bureau’s preparations for the 2000 Census included an extensive research and testing program to improve the Hispanic count. In 1990, the Bureau estimated that it did not enumerate 5 percent of the Hispanic population. Further, the ethnicity question, which was posed to all respondents, appeared to confuse both Hispanics and non-Hispanics. For example, many non-Hispanics, thinking the question only pertained to Hispanics, did not answer the question. 
Overall, 10 percent of respondents failed to answer the 1990 Hispanic question—the highest nonresponse rate of any short-form item in 1990. As a result, the Bureau made improving the Hispanic count a major priority for the 2000 Census. Our objectives were to review (1) the Bureau’s decision-making process that led to its dropping the list of subgroup examples from the Hispanic question on the 2000 Census form, (2) the research conducted by the Bureau to aid in this decision, and (3) the Bureau’s future plans for collecting Hispanic subgroup data. To address each of these objectives, we interviewed key Bureau officials and examined Bureau, OMB, and other documents, including planning materials and internal memos. To obtain a local perspective on how municipal governments and community leaders use Hispanic subgroup data, we met with data users in New York City, including representatives of the New York Department of Planning and the Dominican and Puerto Rican communities. We also attended a meeting of the Dominican American National Round Table, a Dominican American advocacy group, which discussed issues relating to the 2000 Census count of Dominican Hispanics. We also attended meetings of the Census Advisory Committee on Race and Ethnicity that addressed the quality of the Hispanic subgroup data. Finally, to examine the research behind the Bureau’s decision to remove the example subgroups from the 2000 questionnaire, we reviewed the results of the Bureau’s National Content Survey, Targeted Race and Ethnicity Test, and other research conducted throughout the 1990s in preparation for the 2000 Census. Additionally, we reviewed information from the Bureau’s meetings with its Advisory Committee on the Decennial Census and its Advisory Committee on Race and Ethnicity. We also examined relevant materials from OMB’s Interagency Committee for the Review of the Racial and Ethnic Standards. 
To review the Bureau’s future plans for collecting Hispanic subgroup data, we attended meetings of the National Academy of Science Panel on Future Census Methods, the Decennial Census Advisory Committee, and the Census Advisory Committee on Race and Ethnicity. We also discussed these plans with Bureau officials. Our audit work was conducted in New York City and Washington, D.C., and at the Bureau’s headquarters in Suitland, Maryland, from January through September 2002. Our work was done in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Commerce. On November 27, 2002, the Secretary forwarded the U.S. Census Bureau’s written comments on the draft. The comments are reprinted in appendix I. We address these comments at the end of this report. Collecting accurate ethnic data has challenged the Bureau for over 30 years. Since the 1970 Census, when the Bureau first included a question on Hispanic origin, every census has had comparatively high Hispanic undercounts that reduced the quality of the data. As a result, the Bureau has modified the Hispanic question on every census since then as part of a continuing effort to improve the Hispanic count. (See fig. 1.) In addition, a Spanish language version of the census form has been available upon request since 1980. For the 2000 Census, Hispanics could identify themselves as Mexican, Puerto Rican, Cuban, or “other Spanish/Hispanic/Latino.” Respondents who checked off this last category could write in a specific subgroup such as “Salvadoran.” Although this approach was similar to that used for the 1990 Census, as shown in figure 1, the “other” category in the 1990 Census included examples of other Hispanic subgroups. The Bureau deleted these examples as one of several changes to the Hispanic question for the 2000 Census. 
Other changes included (1) adding the word “Latino” to the designation Spanish/Hispanic, (2) dropping the word “origin” from the question, and (3) moving the location of instructions on writing in an unlisted subgroup. According to Bureau officials, these latter three changes were made to improve the Hispanic count. The Bureau removed the subgroup examples as part of a broader effort to simplify the questionnaire and thus help reverse the downward trend in mail response rates that had been occurring since 1970. Indeed, evaluations of the 1990 Census indicated that the overall design of the form was confusing to many and contributed to lower response rates, particularly among some hard-to-enumerate groups such as Hispanics. In redesigning the questionnaire, the Bureau added as much white space as possible, and removed unnecessary words to make the questionnaire shorter and more readable. As shown in figure 2, the 2000 questionnaire appears more “respondent-friendly” compared to the 1990 questionnaire. The Bureau initially proposed removing the example write-in subgroups during 1990 through 1992. A first version of the questionnaire without the example subgroups was used in the 1992 National Census Test. However, as discussed in the next section, testing continued from 1992 to 1996 to ensure that removing the write-in example groups did not harm the overall count of Hispanics. From 1995 to 1997, after testing showed that removal of the write-in example groups would not harm the overall Hispanic count, the Bureau finalized its decision to remove the example subgroups. Although federal law and OMB standards only require information on whether an individual is Hispanic, Bureau officials told us they collect subgroup data to help improve the overall Hispanic count. According to the Bureau, many Hispanics do not view themselves as Hispanic, but identify instead with their country of origin or with a particular Hispanic subgroup. 
State and local governments, academic institutions, community organizations, and marketing firms, among other organizations, also use Hispanic subgroup data for a variety of purposes. For example, officials in the New York City Department of Planning told us that they need accurate information on the number and distribution of Hispanic subgroups to plan the delivery of numerous city services. According to a Bureau official, no data are available on the precise impact the questionnaire redesign had on overall response rates, in part because the redesign was made in conjunction with other efforts to improve the response rate, such as a more aggressive outreach and promotion campaign. However, the initial mail response rate was 64 percent, 3 percentage points higher than the Bureau’s expectations and comparable to the 1990 mail response rate. Moreover, evaluations conducted by the Bureau since the 2000 Census indicate that the Bureau obtained a more complete count of Hispanics in the 2000 Census than it did in 1990. For example, Bureau data show that the 2000 Census missed an estimated 2.85 percent of the Hispanic population, compared to an estimated 4.99 percent in 1990—a 43 percent reduction in the undercount. The Bureau credits the improvement in part to the changes it made to the questionnaire. However, as discussed in the next section, removing the examples of Hispanic subgroups may have reduced the completeness of data on individual segments of the Hispanic population. Bureau guidance requires that any changes to the census form first be thoroughly tested. For example, according to Bureau officials, before changing a question, the Bureau must first conduct research studies, cognitive tests, and field tests to determine how best to sequence and word the question and to see if the proposed changes are likely to achieve the desired results. 
Additionally, the census questionnaire is to be reviewed by a variety of census advisory groups, OMB, and Congress before it is finalized. Nevertheless, while the Bureau conducted a number of tests of the sequencing and wording of the race and ethnicity questions, according to Bureau officials, it did not specifically design any tests to determine the impact of the changes on the quality of Hispanic subgroup data. Because OMB standards do not require data on Hispanic subgroups, Bureau officials said that the Bureau targeted its resources on testing and research aimed at improving the overall count of Hispanics. Throughout the 1990s, in revising the race and ethnicity questions, the Bureau sought input from several expert panels, including the Interagency Committee formed by OMB and the Census Advisory Committee on Racial and Ethnic Populations, one of several panels with which the Bureau consulted to help it plan the 2000 Census. In addition, the Bureau conducted several tests of the questionnaire to assess respondents’ understanding of the questions and their ability to complete them properly. They included the 1992 National Census Test, which field tested potential questions for the 2000 Census; the 1996 National Content Survey, which examined a number of issues to improve race and ethnic reporting; and the 1996 Race and Ethnic Targeted Test, which tested alternative formats for asking race and ethnic questions. In addition, the Bureau analyzed the results of Hispanic data from the 1990 Census (which led to its conclusions about the undercount), but did not conduct any specific evaluations of the quality of the 1990 Hispanic subgroup data. The consultation, research, and testing played a key role in the Bureau’s decisions to place the ethnicity question before the race question and make several other changes discussed earlier in this report. The test results also indicated that the example subgroups could produce conflicting results. 
On the one hand, the Bureau found that providing the example subgroups could help prevent respondents’ confusion over how to describe their ethnicity. On the other hand, the Bureau found that removing the example subgroups could help reduce the bias caused by the example effect, which occurs when a respondent erroneously selects a response because it is provided in the questionnaire. Although the Bureau conducted a dress rehearsal for the 2000 Census in 1998 in order to test its overall design, the dress rehearsal did not identify any problems with the Hispanic subgroup question. According to Bureau officials, this could have been because none of the three test sites—the city of Sacramento, California; Menominee County, Wisconsin, including the Menominee American Indian Reservation; and the city of Columbia, South Carolina, and its 11 surrounding counties—had a large and diverse enough Hispanic population for the problems to become evident. In May 2001, the Bureau released data on Hispanics and Hispanic subgroups as part of its first release summarizing the results of the 2000 Census, called the SF-1 file. The Bureau also published The Hispanic Population, a 2000 Census brief that provided an overview of the size and distribution of the Hispanic population in 2000 and highlighted changes in the population since the 1990 census. For the first time, the Bureau released data on Hispanic subgroups as a part of its release of the full count SF-1 data even though it had not fully tested the impact of questionnaire changes on the subgroup data and provided little discussion of the potential limitations of the data. Following the initial release of the Hispanic data, local government officials and Hispanic advocacy groups raised questions about the accuracy of the counts of Hispanic subgroups listed as examples on the 1990 census form, but not the 2000 form. 
The 2000 Census showed lower counts of several Hispanic subgroups than analysts had expected based on their own estimates using a variety of information sources, such as vital statistics, immigration statistics, population surveys, and other data. In New York City, local government officials and representatives of Hispanic subgroups who partnered with the Bureau to improve the enumeration of Hispanics told us that they were particularly concerned about low subgroup counts in their communities, in part because they needed accurate numbers to plan and deliver specialized services to particular subgroups. Moreover, they said that because “official census numbers” are often considered definitive, problems with the released Hispanic subgroup numbers could lead to faulty decision making by data users. Since the release of the 2000 Census Hispanic data, the Bureau has conducted evaluations of the data that provide more information on how removing the subgroup examples may have affected the quality of Hispanic subgroup data. One key evaluation was the Alternative Questionnaire Experiment, in which the Bureau sent 1990-style census forms to a sample of individuals as part of the 2000 Census. As shown in figure 3, the Bureau’s research indicates that the 1990-style form elicited more reports of specific Hispanic subgroups than the 2000-style questionnaire. Indeed, 93 percent of Hispanics given the 1990-style form reported a specific subgroup, compared to 81 percent of Hispanics given the 2000-style form. Moreover, virtually every subgroup made up a smaller percentage of the overall Hispanic count on the 2000-style form than on the 1990-style form. Thus, while the Bureau reported what respondents checked off on their questionnaires, respondents’ confusion over the wording of the question means that the 2000 subgroup data could be misleading. 
Figure 3 also suggests that one possible reason for this might be that many respondents did not understand what they were supposed to write in, as many more people on the 2000-style form wrote in “Hispanic,” “Spanish,” or “Latino” (as opposed to a specific subgroup) compared to the 1990-style questionnaire. Additionally, a higher percentage of the respondents did not provide codeable (useable) responses. Moreover, based on its analysis of the Census 2000 Supplementary Survey—an operational test for collecting long-form-type data based on a nationwide sample of 700,000 households—the Bureau estimated that there were about 150,000 more Dominican Hispanics than were counted in the 2000 Census. Some attribute the discrepancy to the fact that many respondents to the supplementary survey provided their answers by telephone, where enumerators were able to help them better understand the question on Hispanic subgroups. Because of concerns relating to the 2000 Census counts of Hispanic subgroups, Bureau officials said that they plan to focus testing and research on these questions in preparation for the 2010 Census. In particular, they stated that the Bureau would examine the likely impact of including Hispanic subgroup examples in the question again, as well as other aspects of the question that caused problems for some respondents. Before deciding on a new version of the Hispanic question, the Bureau must finish evaluating the results of the 2000 Census, conduct a number of cognitive tests, and field-test proposed changes to the question. The Bureau plans to begin testing the Hispanic question in 2003 and, as part of a field test in 2004, to administer the questionnaire in parts of Queens, New York, which the Bureau selected for its racial and ethnic diversity. The Bureau intends to complete its testing and decide on changes to the Hispanic question from 2006 through 2008. 
Any changes to the Hispanic question are relevant not only for the 2010 Census, but also for other Bureau questionnaires, such as the proposed ACS. Bureau officials told us that they expect that the ACS will continue to use the 2000 Census Hispanic question until research and testing on a new version is complete. While continued research could help the Bureau collect better-quality Hispanic subgroup data, it will also be important for the Bureau to address what led it to release data that could mislead users. A key factor in this regard is that the Bureau lacks adequate guidelines for making decisions about how data quality considerations affect the release of data to the public. Had such guidelines been in place prior to releasing the Hispanic subgroup data, they could have (1) prompted the Bureau to apply more rigorous quality checks on the Hispanic subgroup data, (2) provided a basis for either releasing, delaying, or suppressing the data, and (3) informed decisions on how to describe any limitations to data released. This is not the first time that the lack of Bureau-wide guidelines on the level of quality needed for census results to be released to the public has created difficulties for the Bureau and data users. As we noted in our companion report on the Bureau’s methods for collecting and reporting data on the homeless and others without conventional housing, one cause of the Bureau’s shifting position on reporting those data and the resulting public confusion appears to be its lack of documented, clear, transparent, and consistently applied guidelines on the level of quality needed to release data to the public. With the Hispanic subgroup data, the Bureau released the information as planned before it could properly assess its quality, identify problems, and report its limitations. More rigorous guidelines could help ensure that decisions about the quality of all census data the Bureau releases are more consistent and better understood by the public. 
In 2000, the Bureau initiated a program aimed at documenting Bureau-wide protocols designed to ensure the quality of data it collected and released. Because this effort is still in its early stages, we could not assess it. However, Bureau officials believe that the program is a significant first step in addressing the Bureau’s lack of data quality guidelines. As the Bureau develops its protocols further, it will be important that they be well documented, transparent, clearly defined, consistently applied, and properly communicated to the public. Throughout the 1990s, the Bureau went to great lengths to improve response rates to the 2000 Census in general, and participation of Hispanics in particular. Although the unique contributions of the individual components of the Bureau’s efforts cannot be determined, the mail response rate was similar to the 1990 level, and the Bureau’s preliminary data suggest that the 2000 Census count of Hispanics was an improvement over the 1990 count. However, the counts of Hispanic subgroups do not appear to have been improved and, in fact, there is concern that some of these subgroup counts may be less accurate than the 1990 counts. Moreover, the Bureau’s experience in simplifying the questionnaire in part by removing the examples of the Hispanic subgroups shows the challenge the Bureau faces in trying to improve one component of the census count without adversely and unintentionally affecting other aspects of the census count. In light of these findings, it will be important for the Bureau to continue with its planned research on how best to enumerate Hispanic subgroups. The Bureau’s release of Hispanic subgroup numbers raised questions about the quality of the reported data and the Bureau’s decision to report these data as a part of its release of the SF-1 data. 
Although the specific questions about the Hispanic subgroup data differed from those identified in our review of the Bureau’s efforts to collect and report data on the homeless and others without conventional housing, a common cause of both sets of problems was the Bureau’s lack of agencywide guidelines for its decisions on the level of quality needed to release data to the public. As we recommended in our report on homeless counts, the Bureau needs to develop well-documented guidelines that spell out how to characterize any limitations in the data, and when it is acceptable to suppress these data. The Bureau should also ensure that these guidelines are documented, transparent, clearly defined, consistently applied, and properly communicated to the public. To ensure that the 2010 Census will provide public data users with more accurate information on specific Hispanic subgroups, we recommend that the Secretary of Commerce ensure that the Director of the U.S. Census Bureau implements Bureau plans to research the Hispanic question, taking steps to properly test the impact of the wording, format, and sequencing on the completeness and accuracy of the data on Hispanic subgroups and Hispanics overall. In addition, as we also recommended in our companion report on the homeless and others without conventional housing, we recommend that the Bureau develop agencywide guidelines governing the level of quality needed to release data to the public, when and how to characterize any limitations, and when it is acceptable to delay or suppress data. The Secretary of Commerce forwarded written comments from the U.S. Census Bureau on a draft of this report (see app. I). The Bureau agreed with our conclusions and recommendations and, as indicated in the letter, is taking steps to implement them. However, it expressed several general concerns about our findings. The Bureau’s principal concerns and our response are presented below. 
The Bureau also suggested minor wording changes to provide additional context and clarification. We accepted the Bureau’s suggestions and made changes to the text as appropriate. The Bureau took exception to our findings concerning the adequacy of its data quality guidelines, noting that it “conducted the review of the data on the Hispanic origin population using standard review techniques for reasonableness and quality.” We do not question the Bureau’s commitment to presenting quality data. Rather, our point is that the Bureau needs to translate its commitment to quality into well-documented, transparent, clearly defined guidelines to provide a basis for consistent decision making on the level of quality needed to release data to the public, and on when and how to characterize any limitations. During our review, Bureau officials, including the Associate Director for Methodology and Standards, told us that the Bureau had few written guidelines, standards, or procedures related to the quality of data released to the public. A second general concern expressed by the Bureau dealt with our characterization of problems with the Hispanic subgroup counts. The Bureau said that the data met an acceptable level of quality because they accurately reflect what people reported and therefore cannot be characterized as erroneous. We agree with the Bureau on this specific point. However, we take a broader view of data quality. Specifically, we believe that questions about the accuracy of the Hispanic subgroup data must also take into account problems that the respondents had in understanding the meaning of the question. The Bureau challenged our assertion that the wording of the question “confused” some respondents, preferring to say that some respondents may have “interpreted” the question wording, instructions, and examples differently than expected. We agree with the Bureau that additional research will be required to understand the extent of this problem. 
Nevertheless, we believe there is sufficient evidence from the Bureau’s subsequent research and from analysis of trends in the data to support our concerns about the accuracy of Hispanic example subgroup counts in the 2000 Census. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Chairman of the House Committee on Government Reform, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others on request. This report will also be available at no charge on GAO’s home page at http://www.gao.gov. Please contact me on (202) 512-6806 or by E-mail at daltonp@gao.gov if you have any questions. Other key contributors to this report were Robert Goldenkoff, Christopher Miller, Elizabeth Powell, Timothy Wexler, Ty Mitchell, Benjamin Crawford, James Whitcomb, Robert Parker, and Michael Volpe.

Decennial Census: Methods for Reporting and Collecting Data on the Homeless and Others without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003.

2000 Census: Refinements to Full Count Review Program Could Improve Future Data Quality. GAO-02-562. Washington, D.C.: July 3, 2002.

2000 Census: Coverage Evaluation Matching Implemented As Planned, but Census Bureau Should Evaluate Lessons Learned. GAO-02-297. Washington, D.C.: March 14, 2002.

2000 Census: Best Practices and Lessons Learned for a More Cost-Effective Nonresponse Follow-Up. GAO-02-196. Washington, D.C.: February 11, 2002.

2000 Census: Coverage Evaluation Interviewing Overcame Challenges, but Further Research Needed. GAO-02-26. Washington, D.C.: December 31, 2001.

2000 Census: Analysis of Fiscal Year 2000 Budget and Internal Control Weaknesses at the U.S. Census Bureau. GAO-02-30. Washington, D.C.: December 28, 2001.
2000 Census: Significant Increase in Cost Per Housing Unit Compared to 1990 Census. GAO-02-31. Washington, D.C.: December 11, 2001.

2000 Census: Better Productivity Data Needed for Future Planning and Budgeting. GAO-02-4. Washington, D.C.: October 4, 2001.

2000 Census: Review of Partnership Program Highlights Best Practices for Future Operations. GAO-01-579. Washington, D.C.: August 20, 2001.

Decennial Censuses: Historical Data on Enumerator Productivity Are Limited. GAO-01-208R. Washington, D.C.: January 5, 2001.

2000 Census: Information on Short- and Long-Form Response Rates. GAO/GGD-00-127R. Washington, D.C.: June 7, 2000.
To help boost response rates of both the general and Hispanic populations, the U.S. Census Bureau (Bureau) redesigned the 2000 questionnaire, in part by deleting a list of examples of Hispanic subgroups from the question on Hispanic origin. While more Hispanics were counted in 2000 compared to 1990, the counts for Dominicans and other Hispanic subgroups were lower than expected. Concerned that this was caused by the deletion of Hispanic subgroup examples, congressional requesters asked us to investigate the research and management activities behind the changes. In both the 1990 and 2000 census, Hispanics could identify themselves as Mexican, Puerto Rican, Cuban, or other Hispanic. Respondents checking off this latter category could write in a specific subgroup such as "Salvadoran." The "other" category in the 1990 Census included examples of subgroups to clarify the question. For the 2000 Census, the Bureau removed the subgroup examples as part of a broader effort to simplify the questionnaire and help improve response rates. The Bureau removed unnecessary words and added blank space to shorten the questionnaire and make it more readable. Although the Bureau conducted a number of tests on the sequencing and wording of the race and ethnicity questions, and sought input from several expert panels, no Bureau tests were designed specifically to measure the impact of the questionnaire changes on the quality of Hispanic subgroup data. According to Bureau officials, because federal laws and guidelines require data on Hispanics but not Hispanic subgroups, the Bureau targeted its resources on research aimed at improving the overall count of Hispanics. Bureau evaluations conducted after the census indicated that deleting the subgroup examples might have confused some respondents and produced less-than-accurate subgroup data. 
A key factor behind the Bureau's release of the questionable subgroup data was its lack of adequate guidelines governing the quality needed before making data publicly available. As part of its planning for the 2010 Census, the Bureau intends to conduct further research on the Hispanic origin question, including a field test in parts of New York City. However, until research on a new version of the question is finalized, Bureau officials said that other census surveys will continue to use the 2000 Census format of the Hispanic origin question.
Posing as private citizens, our undercover investigators purchased several sensitive excess military equipment items that were improperly sold to the public at DOD liquidation sales. These items included three ceramic body armor inserts identified as small arms protective inserts (SAPI), which are the ceramic inserts currently in demand by soldiers in Iraq and Afghanistan; a time selector unit used to ensure the accuracy of computer-based equipment, such as global positioning systems and system-level clocks; 12 digital microcircuits used in F-14 Tomcat fighter aircraft; guided missile radar test sets used to check the operation of the data link antenna on the Navy’s Walleye (AGM-62) air-to-ground guided missile; and numerous other electronic items. In instances where DOD required an end-use certificate (EUC) as a condition of sale, our undercover investigator was able to successfully defeat the screening process by submitting bogus documentation and providing plausible explanations for discrepancies in his documentation. In addition, we identified at least 79 buyers for 216 sales transactions involving 2,669 sensitive military items that DOD’s liquidation contractor sold to the public between November 2005 and June 2006. We are referring information on these sales to the appropriate federal law enforcement agencies for further investigation. Our investigators also posed as DOD contractor employees, entered DRMOs in two east coast states, and obtained several other items that are currently in use by the military services. DRMO personnel even helped us load the items into our van. These items included two launcher mounts for shoulder-fired guided missiles, an all-band antenna used to track aircraft, 16 body armor vests, body armor throat and groin protectors, six circuit card assemblies used in computerized Navy systems, and two Palm V personal data assistant (PDA) organizers. 
Using a fictitious identity as a private citizen, our undercover investigator applied for and received an account with DOD’s liquidation sales contractor. Our investigator was then able to purchase several sensitive excess military items noted above that were being improperly sold to the public. During our undercover purchases, our investigator engaged in numerous conversations with liquidation sales contractor staff during warehouse inspections of items advertised for sale and with DRMS and DLA’s Criminal Investigative Activity (DCIA) staff during the processing of our EUCs. On one occasion our undercover investigator was told by a DCIA official that information provided on his EUC application had no match to official data and that he had no credit history. Our investigator responded with a plausible story and submitted a bogus utility bill to confirm his mailing address. Following these screening procedures, the EUC was approved by DCIA and our undercover investigator was able to purchase our targeted excess military items. Once our initial EUC was approved, our subsequent EUC applications were approved based on the information on file. Although the sensitive military items that we purchased had a reported acquisition cost of $461,427, we paid a liquidation sales price of $914 for them—less than a penny on the dollar. We observed numerous sales of additional excess sensitive military items that were improperly advertised for sale or sold to the public, including fire control components for weapon systems, body armor, and weapon system components. The demilitarization codes for these items required either key point or total destruction rather than disposal through public sale. Although we placed bids to purchase some of these items, we lost to higher bidders. We identified at least 79 buyers for 216 public liquidation sales transactions involving 2,669 sensitive military items. 
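As an illustrative check of the "less than a penny on the dollar" figure above, the liquidation price can be expressed as cents per dollar of reported acquisition cost. The snippet below is a sketch using the dollar amounts cited in the text, not part of GAO's analysis:

```python
# Illustrative check of the liquidation price paid against the reported
# acquisition cost cited in the text (amounts in U.S. dollars).
acquisition_cost = 461_427   # reported acquisition cost of the items purchased
liquidation_price = 914      # liquidation sales price paid

cents_on_the_dollar = liquidation_price / acquisition_cost * 100
print(f"{cents_on_the_dollar:.2f} cents on the dollar")  # about 0.20
```

The ratio works out to roughly a fifth of a cent per dollar of acquisition cost, consistent with the report's characterization.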
On July 13, 2006, we briefed federal law enforcement and intelligence officials on the details of our investigation. We are referring public sales of sensitive military equipment items to the federal law enforcement agencies for further investigation and recovery of the sensitive military equipment. During our undercover operations, we also noted 13 advertised sales events, including 179 items that were subject to demilitarization controls, where the items were not sold. In 5 of these sales involving 113 sensitive military parts, it appears that DOD or its liquidation sales contractor caught the error in demilitarization codes and pulled the items from sale. One of these instances involved an F-14 fin panel assembly that we had targeted for an undercover purchase. During our undercover inspection of this item prior to sale, a contractor official told our investigator that the government was in the process of changing demilitarization codes on all F-14 parts and it was likely that the fin panel assembly would be removed from sale. Of the remaining 8 sales lots containing 66 sensitive military parts, we could not determine whether the items were not sold because DOD or its contractor caught the demilitarization coding errors or because minimum bids were not received during the respective sales events. Our investigators used publicly available information to develop fictitious identities as DOD contractor personnel and enter DRMO warehouses (referred to as DRMO A and DRMO B) in two east coast states on separate occasions in June 2006, to requisition excess sensitive military parts and equipment valued at about $1.1 million. Our investigators were able to search for and identify excess items without supervision. In addition, DRMO personnel assisted our investigators in locating other targeted items in the warehouse and loading these items into our van. 
At no point during either visit did DRMO personnel attempt to verify with the actual contractor that our investigators were, in fact, contractor employees. During the undercover penetration at DRMO A, our investigators obtained numerous sensitive military items that were required to be destroyed when no longer needed by DOD to prevent them from falling into the wrong hands. These items included two guided missile launcher mounts for shoulder-fired missiles, six Kevlar body armor fragmentation vests, a digital signal converter used in naval electronic surveillance, and an all-band antenna used to track aircraft. Posing as employees for the same DOD contractor identity used during our June 2006 penetration at DRMO A, our investigators entered DRMO B a day later for the purpose of testing security controls at that location. DRMO officials appeared to be unaware of our security penetration at DRMO A the previous day. During the DRMO B undercover penetration, our investigators obtained 10 older technology body armor fragmentation vests, throat and groin protection armor, six circuit card assemblies used in Navy computerized systems, and two Palm V personal digital assistants (PDA) that were certified as having their hard drives removed. Because PDAs do not have hard drives, after successfully requisitioning them, we asked our Information Technology (IT) security expert to test them, and our expert confirmed that all sensitive information had been properly removed. Shortly after leaving the second DRMO, our investigators received a call from a contractor official whose employees they had impersonated. The official had been monitoring his company’s requisitions of excess DOD property and noticed transactions that did not appear to represent activity by his company. He contacted personnel at DRMO A, obtained the phone number on our bogus excess property screening letter, and called us. 
Upon receiving the call from the contractor official, our lead investigative agent explained that he was with GAO, and we had performed a government test. Because significant numbers of new, unused A-condition excess items still being purchased or in use by the military services are being disposed of through liquidation sales, it was easy for our undercover investigator to pose as a liquidation sales customer and purchase several of these items for a fraction of what the military services are paying to obtain these same items from DLA supply depots. For example, we paid $1,146 for several wet-weather and cold-weather parkas, a portable field x-ray enclosure, high-security locks, a gasoline engine that can be used as part of a generator system or as a compressor, and a refrigerant recovery system used to service air conditioning systems on automobiles. The military services would have paid a total acquisition cost of $16,300 for these items if ordered from supply inventory, plus a charge for processing their order. Several of the items we purchased at liquidation sales events were being ordered from supply inventory by military units at or near the time of our purchase, and for one supply depot stocked item—the portable field x-ray enclosure—no items were in stock at the time we made our undercover purchase. At the time of our purchase, DOD’s liquidation contractor sold 40 of these x-ray enclosures with a total reported acquisition cost of $289,400 for a liquidation sales price of $2,914—about a penny on the dollar. We paid a liquidation sales price of $87 for the x-ray enclosure which had a reported acquisition cost of $7,235. In another example, we purchased a gasoline engine in March 2006 for $355. The Marine Corps ordered 4 of these gas engines from DLA supply inventory in June 2006 and paid $3,119 each for them. 
At the time of our undercover purchase, 20 identical gasoline engines with a reported acquisition cost of $62,380 were sold to the public for a total liquidation sales price of $6,221, about ten cents on the dollar. In response to recommendations in our May 2005 report, DOD has taken a number of actions to improve systems, processes, and controls over excess property. Most of these efforts have focused on improving the economy and efficiency of DOD’s excess property reutilization program. However, as demonstrated by our tests of security controls over sensitive excess military equipment, DOD does not yet have effective controls in place to prevent unauthorized parties from obtaining these items. For example, although DLA and DRMS have emphasized policies that prohibit batch lotting of sensitive military equipment items, we observed many of these items being sold in batch lots during our investigation and we were able to purchase several of them. In addition, DLA and DRMS have not ensured that DRMO personnel and DOD’s liquidation sales contractor are verifying demilitarization codes on excess property turn-in documentation to assure appropriate disposal actions for items requiring demilitarization. Further, although DLA and DRMS implemented several initiatives to improve the overall reutilization rate for excess A-condition items, our analysis of DRMS data found that the reported reutilization rate as of June 30, 2006, remained the same as we had previously reported—about 12 percent. This is primarily because DLA reutilization initiatives are limited to using available excess A-condition items to fill customer orders and to maintain established supply inventory retention levels. As a result, excess A-condition items that are not needed to fill existing orders or replenish supply inventory are disposed of outside of DOD through transfers, donations, and public sales, which made it easy for us to purchase excess new, unused DOD items. 
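As a rough check on the liquidation purchases described above, the following sketch computes each liquidation price as cents on the dollar of the reported acquisition cost. The dollar figures are taken from the text; the item labels are shorthand and the snippet is illustrative only:

```python
# Liquidation price vs. reported acquisition cost for examples cited in
# the text, as (price_paid, acquisition_cost) in U.S. dollars.
examples = {
    "bundle of A-condition items": (1_146, 16_300),
    "40 x-ray enclosures":         (2_914, 289_400),
    "one x-ray enclosure":         (87, 7_235),
    "one gasoline engine":         (355, 3_119),
}
for item, (price, cost) in examples.items():
    print(f"{item}: {price / cost * 100:.2f} cents on the dollar")
```

The x-ray enclosure sales come out to about a penny on the dollar, matching the report's description, while the other purchases range from roughly 7 to 11 cents on the dollar.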
Despite this limited supply-system approach to reutilizing A-condition excess items, DLA and DRMS data show that overall system and process improvements since the Subcommittee’s June 2005 hearing have saved $38.1 million through June 2006. According to DLA data, interim supply system initiatives using the Automated Asset Recoupment Program, which is part of an old DOD legacy system, achieved reutilization savings of nearly $2.3 million since July 2005, and Business System Modernization supply system initiatives implemented in January 2006, as promised at the Subcommittee’s June 2005 hearing, have resulted in reutilization savings of nearly $1.1 million. In addition, DRMS reported that excess property marketing initiatives implemented in late March 2006 have resulted in reutilization savings of a little over $34.8 million through June 2006. These initiatives include marketing techniques using Web photographs of high-dollar items and e-mail notices to repeat customers about the availability of A-condition items that they had previously selected for reutilization. Our most recent work shows that sensitive military equipment items are still being improperly released by DOD and sold to the public, posing a significant national security risk. The sensitive nature of these items requires particularly stringent internal security controls. Our tests, which were performed over a short duration, were limited to our observations, meaning that the problem is likely more significant than what we identified. Although we have referred the sales of items identified during our investigation to federal law enforcement agencies for follow-up, the solution to this problem is to enforce controls for preventing improper release of these items outside DOD. 
Further, liquidation sales of items that military units are continuing to purchase at full cost from supply inventory demonstrate continuing waste to the taxpayer and inefficiency in DOD’s excess property reutilization program. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or kutzg@gao.gov. Major contributors to this testimony include Mario L. Artesiano, Donald L. Bumgardner, Matthew S. Brown, Paul R. Desaulniers, Stephen P. Donahue, Lauren S. Fassler, Gayle L. Fischer, Cinnimon Glozer, Jason Kelly, John Ledford, Barbara C. Lewis, Richard C. Newbold, John P. Ryan, Lori B. Ryza, Lisa M. Warde, and Emily C. Wold. Technical expertise was provided by Keith A. Rhodes, Chief Technologist, and Harold Lewis, Assistant Director, Information Technology Security, Applied Research and Methods. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In light of GAO's past three testimonies and two reports on problems with controls over excess DOD property, GAO was asked to perform follow-up investigations to determine if (1) unauthorized parties could obtain sensitive excess military equipment that requires demilitarization (destruction) when no longer needed by DOD and (2) system and process improvements are adequate to prevent sales of new, unused excess items that DOD continues to buy or that are in demand by the military services. GAO investigators posing as private citizens purchased several sensitive military equipment items from DOD's liquidation sales contractor, indicating that DOD has not enforced security controls for preventing sensitive excess military equipment from release to the public. GAO investigators at liquidation sales purchased ceramic body armor inserts currently used by deployed troops, a cesium technology timing unit with global positioning capabilities, a universal frequency counter, two guided missile radar test sets, 12 digital microcircuits used in F-14 fighter aircraft, and numerous other items. GAO was able to purchase these items because controls broke down at virtually every step in the excess property turn-in and disposal process. GAO determined that thousands of military items that should have been demilitarized (destroyed) were sold to the public. Further, in June 2006, GAO undercover investigators posing as DOD contractor employees entered two excess property warehouses and obtained about $1.1 million in sensitive military equipment items, including two launcher mounts for shoulder-fired guided missiles, several types of body armor, a digital signal converter used in naval surveillance, an all-band antenna used to track aircraft, and six circuit cards used in computerized Navy systems. At no point during GAO's warehouse security penetration were its investigators challenged on their identity and authority to obtain DOD military property. 
GAO investigators posing as private citizens also bought several new, unused items currently being purchased or in demand by the military services from DOD's excess property liquidation sales contractor. Although military units paid full price for these items when they ordered them from supply inventory, GAO paid a fraction of this cost to purchase the same items, demonstrating continuing waste and inefficiency.
IRS uses multiple channels to provide customer service to taxpayers and process tax returns:

Telephone service for tax law and account questions: Taxpayers can speak with IRS staff to obtain information about their accounts throughout the year or to ask basic tax law questions during the filing season. Taxpayers can also listen to recorded tax information or use automated services to obtain information on the status of refund processing as well as account information such as balances due. Taxpayer access to telephone assistance has declined for the past several years, and we have made recommendations for IRS to improve its performance. For example, in 2010, we recommended that IRS determine a customer service telephone standard based on the quality of service provided by comparable organizations, what matters most to the customer, and the resources required to achieve this standard. In 2014, we again reported that IRS was missing an opportunity to improve its customer service by not systematically comparing its telephone service to the best in business in order to inform Congress about gaps between actual and desired service and the resources needed to improve the level of service provided to taxpayers.

Correspondence: Taxpayers may also use paper correspondence to communicate with IRS, which includes responding to IRS requests for information or data, providing additional information, or disputing a notice. Assistors in IRS’s Accounts Management office respond to taxpayer inquiries on a variety of tax law and procedural questions and handle complex account adjustments such as amended returns and duplicate filings. IRS tries to respond to paper correspondence within 45 days of receipt; otherwise, such correspondence is considered “overage.” Minimizing overage correspondence is important because delayed responses may prompt taxpayers to write again, call, or visit walk-in sites, and because IRS is required to pay interest on refunds owed to taxpayers if it does not process amended returns within 45 days.

Online services: IRS’s website is a low-cost method for providing taxpayers with basic interactive tools to, for example, check refund status, make payments, and apply for plans to pay taxes due in scheduled payments (installment agreements). Taxpayers can use the website to print forms, publications, and instructions, and can use IRS’s Interactive Tax Assistant application to get answers to tax law questions without calling or writing to IRS.

Face-to-face assistance: Face-to-face assistance remains an important part of IRS’s service efforts, particularly for low-income taxpayers. Taxpayers can receive face-to-face assistance at IRS’s walk-in sites or at thousands of sites staffed by volunteer partners. At walk-in sites, IRS staff provide services including answering basic tax law questions, reviewing and adjusting taxpayer accounts, taking payments, authenticating Individual Taxpayer Identification Number applicants, and assisting identity theft victims. At sites staffed by volunteers, taxpayers can receive free return preparation assistance as well as financial literacy information.

Tax return processing: IRS processes millions of paper and electronically filed (e-filed) returns and issues billions of dollars in refunds each year. A key step in the process is identifying and correcting millions of errors that taxpayers make on their returns or that occur during processing. IRS expends significant resources correcting errors, and the process can affect how long it takes IRS to issue refunds.
IRS’s annual appropriations declined from a high of $12.1 billion in fiscal year 2010 to $10.9 billion in fiscal year 2015, a reduction of about 10 percent. In our prior work, we reported that despite regularly realizing efficiency gains, IRS was struggling to provide taxpayers access to services, and IRS’s performance would likely continue to suffer unless it made tough choices about what services to provide. For fiscal years 2014 and 2015, IRS implemented service initiatives that included reducing or eliminating certain telephone and walk-in services, and redirecting taxpayers toward other service channels such as IRS’s website. See appendix II for more details on these service initiatives. A major challenge for IRS is responding quickly, accurately, and effectively to tax law changes, some of which can be extensive. For example, IRS has been preparing to implement provisions of PPACA for several years and carrying out these provisions has been a significant undertaking. The 2015 filing season was the first that taxpayers were required to report health care coverage information on their tax returns. IRS began processing these returns in January 2015. Individuals could purchase health insurance through state or federally-facilitated marketplaces, and some of those who did so were eligible for the premium tax credit (PTC), an advanceable, refundable tax credit designed to help eligible individuals and families with low or moderate income afford health insurance. Taxpayers can have the PTC paid in advance to their insurance company, and those who do so must reconcile the amount of advance PTC received with the PTC they are eligible for based on their actual income reported on their tax return. Overall, as mentioned above, annual fiscal year 2015 appropriations were reduced by 10 percent compared to fiscal year 2010 (from $12.1 billion to $10.9 billion), with appropriations for taxpayer services remaining level with the previous year. 
However, resources allocated by IRS to taxpayer services decreased in fiscal year 2015 from about $2.4 billion to $2.3 billion (or about 4.7 percent). IRS has statutory authority to supplement its annual appropriations with user fee receipts from various services it provides. IRS allocated approximately $45 million of user fee receipts to taxpayer services—about 76 percent less than the $183 million it allocated to taxpayer services in fiscal year 2014. See appendix III for details on IRS resource allocation for taxpayer services. IRS allocated most user fee receipts in fiscal year 2015 to fund information technology (IT) investments to implement PPACA requirements and other services, and support for mainframes and servers, which can help IRS better respond to taxpayers in the future. IRS officials also said they shifted user fee funds to combat identity theft-related refund fraud, strengthen cybersecurity, and implement tax provisions from other recently enacted legislation such as the Foreign Account Tax Compliance Act (FATCA). As a result of these trends, IRS reduced staff answering both telephones and correspondence by about 9 percent (from about 12,500 to 11,400 full-time equivalents (FTE)) between fiscal years 2014 and 2015 (see figure 1). Moreover, IRS eliminated most overtime for IRS assistors until after the end of the filing season, resulting in fewer total hours worked by assistors to answer telephones and correspondence. Early in the 2015 filing season, IRS officials said they devoted a higher percentage of assistor FTEs to answering correspondence than telephones to prevent a growing inventory of correspondence that they estimated could have taken over a year to work through if they did not take this action. IRS’s action in fiscal year 2015 continues a trend of shifting more assistors to answering correspondence; the percentage of FTEs used for working correspondence cases increased from 32.6 to 45 percent between fiscal years 2010 and 2015.
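The relative changes reported in this section follow from simple percentage arithmetic. The sketch below, in Python, recomputes them from the rounded figures quoted above; because the report's own percentages are based on unrounded data, the results differ slightly (for example, the reported 4.7 percent decline in taxpayer services funding versus roughly 4.2 percent from the rounded dollar amounts).

```python
def pct_change(old, new):
    """Percentage change from old to new (negative means a decline)."""
    return (new - old) / old * 100

# Taxpayer services funding, FY2014 vs FY2015 (billions of dollars)
print(round(pct_change(2.4, 2.3), 1))      # roughly -4 percent (report: 4.7)

# User fee receipts allocated to taxpayer services (millions of dollars)
print(round(pct_change(183, 45), 1))       # roughly -75 percent (report: 76)

# Assistor staffing, FY2014 vs FY2015 (full-time equivalents)
print(round(pct_change(12500, 11400), 1))  # about -9 percent
```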
Telephone: A reduction of about 34 percent in the number of assistors answering telephone calls between fiscal years 2010 and 2015 contributed to fiscal year 2015 having the lowest level of telephone service in recent years. The number of calls from customers seeking to speak to an assistor decreased about 6 percent (from about 54.3 million to 51.1 million) between fiscal years 2010 and 2015. However, as figure 2 illustrates, IRS answered about 50 percent fewer calls from taxpayers seeking an assistor (from about 36.7 million to about 18.2 million) during the same period, while about 73 percent more calls were abandoned, disconnected by IRS, or met with a busy signal (from about 32.4 million to 56.2 million). Calls answered by assistors also took longer to complete, with average times of about 13.4 minutes, about 2 minutes more (a 13 percent increase) than in fiscal year 2010. While the increase in the length of a call was small, in total IRS spent more than 476,000 additional hours on telephone calls than it would have at fiscal year 2010 average call lengths. According to IRS officials, IRS assistors are handling calls that are more complex to resolve—including calls pertaining to PPACA and identity theft—while IRS is diverting calls with less complex inquiries to self-service options. IRS officials also noted that assistors are taking additional time on calls to explain self-service options to taxpayers, and that taxpayers can spend up to 15 seconds per call raising concerns about wait times or disconnected calls, further driving up the time needed to complete calls. To reduce call lengths, IRS officials said they are taking steps to expand assistors’ authority to abate penalties rather than taking time to request documentation from callers. These officials also said they are studying reducing or eliminating assistor-answered calls on tax law questions.
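As a rough consistency check, the call-volume percentages and the additional-hours figure above can be recomputed from the rounded numbers quoted in this section. The fiscal year 2010 average call length is not stated directly, so the sketch below infers it from the reported 13 percent increase; that inference is an assumption, and the result is only on the order of the reported figure.

```python
# Quick arithmetic check of the telephone statistics above, using the
# rounded figures from this section (results are approximate).

def pct_change(old, new):
    return (new - old) / old * 100

# Calls seeking an assistor, FY2010 vs FY2015 (millions)
print(round(pct_change(54.3, 51.1)))   # about -6 percent

# Calls answered by assistors (millions)
print(round(pct_change(36.7, 18.2)))   # about -50 percent

# Calls abandoned, disconnected by IRS, or met with a busy signal (millions)
print(round(pct_change(32.4, 56.2)))   # about +73 percent

# Extra assistor time: infer the FY2010 average from the reported 13
# percent increase to 13.4 minutes, applied to ~18.2 million answered calls
avg_2015 = 13.4
avg_2010 = avg_2015 / 1.13             # roughly 11.9 minutes (inferred)
extra_hours = 18.2e6 * (avg_2015 - avg_2010) / 60
print(round(extra_hours, -3))          # on the order of 470,000 hours
```

The last result is broadly consistent with the more-than-476,000-hour figure reported; the gap reflects rounding in the inputs.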
They are considering deploying subject-matter experts to help assistors become more efficient and resolve issues from callers more quickly. IRS answered about 7 percent more calls using automated assistance (from 35.1 million to 37.5 million) between fiscal years 2010 and 2015. Answering as many calls as possible through automation is a significant efficiency gain because IRS estimates that in 2015 it cost an average of 51 cents per call to provide an automated answer, compared to an average of about $55 per call answered by a live assistor, a per-call cost roughly 85 percent higher than in 2010. IRS implemented service changes in fiscal year 2015 to drive demand for customer service from the telephone to IRS’s website, such as directing taxpayers who met Online Payment Agreement qualifications to apply for and set up installment payments online instead of calling or visiting IRS. See appendix II for more details on service changes launched in fiscal year 2015. Figure 3 shows that a key indicator of taxpayer service, the level of service (LOS)—defined as the percentage of people who want to speak with an IRS assistor who were able to reach one—declined to about 38 percent in fiscal year 2015. While the IRS Commissioner characterized this as “abysmal” service, it was in line with IRS’s projected LOS for fiscal year 2015 and a decrease from fiscal year 2014, when LOS was about 64 percent. IRS also experienced declines in LOS on many telephone lines used to answer questions on taxpayer accounts, including those for responding to identity theft inquiries and calls from tax practitioners (see appendix IV for more details). Additionally, average wait times have almost tripled from about 11 minutes to more than 30 minutes since fiscal year 2010. In spite of these challenges, the quality of telephone service provided by IRS has remained consistently high since fiscal year 2010, with assistors achieving accuracy rates higher than 90 percent in answering both account and tax law questions.
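The per-call cost estimates above imply substantial savings from automation. The sketch below is illustrative only: it assumes, purely to establish an upper bound, that each of the roughly 2.4 million additional automated calls would otherwise have required a live assistor, which the report does not claim.

```python
# Illustrative cost comparison using the FY2015 per-call estimates cited
# above. The savings figure is a hypothetical upper bound, not a reported
# result: it assumes every additional automated call displaced an
# assistor-answered call.

AUTOMATED = 0.51     # dollars per automated call (FY2015 estimate)
ASSISTOR = 55.00     # dollars per assistor-answered call (FY2015 estimate)

extra_automated_calls = 37.5e6 - 35.1e6   # growth in automated calls, FY2010-2015

upper_bound_savings = extra_automated_calls * (ASSISTOR - AUTOMATED)
print(f"${upper_bound_savings / 1e6:.0f} million")   # about $131 million
```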
Correspondence: Figure 4 shows that the amount of correspondence Accounts Management received and closed (or completed) decreased slightly between fiscal years 2010 and 2015. During the same period, the average time needed to close cases once they were assigned to an assistor increased from about 35 to 47 days, although this time has decreased from its peak of 67.4 days in fiscal year 2013. According to IRS officials, IRS implemented a new approach to managing inventory in 2014. This approach reduced the overall time needed to close cases by balancing work between new correspondence receipts that are quick to complete and overaged cases. As shown in figure 5, the percentage of correspondence cases in IRS’s inventory classified as “overage”—cases generally not processed within 45 days of receipt by IRS—has stayed close to 50 percent since fiscal year 2013. However, this is more than double fiscal year 2010’s overage rate. IRS officials stated that ongoing efforts to consolidate correspondence scanning from 10 to 5 sites contributed to higher overage rates in fiscal year 2015 compared to prior years. An increasing overage rate could lead to increased interest paid to taxpayers who are owed refunds. The composition of correspondence cases received by IRS since fiscal year 2010 has been changing, with a higher number of cases involving identity theft and fewer cases involving amended returns and duplicate filings (see appendix V for additional details). Despite these changes, IRS reported that it maintained a high degree of accuracy when closing cases. In fiscal year 2015, IRS found that assistors correctly answered and provided appropriate resolutions to correspondence cases about 89 percent of the time, which is comparable to customer accuracy scores in prior years. During the same period, IRS assistors also maintained scores well above 90 percent for adhering to statutory, regulatory, and other process requirements when making determinations on taxpayer accounts.
Online: IRS has taken steps in recent years to increase online services to help reduce calls and written correspondence from taxpayers, but it encountered security issues in 2015. For example, IRS’s Get Transcript application allowed taxpayers to obtain a viewable and printable transcript on IRS’s website. Use of this application increased about 49 percent (from about 19 million to 28 million) between fiscal years 2014 and 2015. However, IRS took the Get Transcript self-service web application offline on May 21, 2015, because of significant security problems. In June 2015, the IRS Commissioner testified that unauthorized third parties had gained access to taxpayer information from the application. According to officials, criminals used taxpayer-specific data such as Social Security information, dates of birth, and street addresses acquired from non-IRS sources to gain unauthorized access to information on approximately 100,000 tax accounts. In August 2015, IRS updated this number to about 114,000, and reported that an additional 220,000 accounts had been inappropriately accessed, bringing the total to about 330,000 accounts. IRS sent letters to affected taxpayers and offered them free credit protection and Identity Protection Personal Identification Numbers. As of November 2015, IRS officials said they were working with subject matter experts to identify and review various authentication options for the Get Transcript application and may have a new authentication process in place for relaunching the application in spring 2016. Taxpayers still have several options to request a transcript. In spite of these challenges, IRS officials said they are developing an online account access feature so taxpayers can view their balance due, make payments, view payment status and history, and view account transcripts. In 2015, IRS began development of an online account application that will enable taxpayers to view their balance due.
IRS is aiming to make the online account access feature available to the public in 2016. In January 2015, we reported that IRS created a group aimed at centralizing several prior ad-hoc efforts to authenticate taxpayers across its systems, but did not have a plan to assess costs, benefits, and risks to inform decisions about whether and how much to invest in various options to enhance authentication. We recommended that IRS estimate and document such costs, benefits, and risks. IRS agreed with this recommendation, but as of October 2015, had yet to implement it. We also found that IRS’s taxpayer authentication tools have limitations. For example, identity thieves can easily find the information needed to falsely obtain an e-file personal identification number, allowing them to bypass some, if not all, of IRS’s current automatic checks. Moreover, a small number of taxpayers receive Identity Protection Personal Identification Numbers or undergo knowledge-based authentication, which uses questions about personal information that only the taxpayer should know to confirm taxpayers’ identities. Authenticating a taxpayer online is one of several key steps needed for IRS to enhance online services. In December 2011, we recommended that IRS complete a strategy for providing online services, and further expanded on that recommendation in April 2013. IRS agreed with those recommendations and, in response, is developing a long-term strategy, known as Service on Demand, in part to improve online services. In September 2015, we reviewed IRS's Service on Demand strategy and found that it implemented part of our April 2013 recommendation to link investments in security to its long-term strategy for improving web services. Specifically, we found that IRS incorporated investments in security for enhanced web services, including for authentication capabilities and taxpayer communication channels. 
This plan should help to ensure that activities, core processes, and resources are aligned to support the mission of providing better service to taxpayers and delivering service more efficiently. While IRS experienced security problems with Get Transcript, it continued to build on progress in directing more taxpayers to other online resources. IRS’s website received approximately 493 million visits in fiscal year 2015, about a 13 percent increase from the prior year. Use of self-service tools, such as the Online Payment Agreement and Interactive Tax Assistant applications, experienced substantial increases during the same period. See appendix VI for additional information on uses of IRS’s website.

Walk-in and volunteer sites: As a result of budget cuts, IRS officials said IRS reduced staff devoted to face-to-face assistance at walk-in sites and directed customers to self-service options. IRS reduced staff at walk-in sites by about 4 percent in fiscal year 2015 compared to the previous year (from 1,938 to 1,867 FTEs). However, the percentage of customers at walk-in sites waiting longer than 30 minutes for service increased by 7 percentage points (from about 25 to 32 percent) during the same period. IRS officials said that the FTE reductions were the largest factor in the increased wait times, but noted that IRS staff must also handle tasks that take more time to complete as taxpayers shift simple tasks to self-service channels. IRS officials said they are taking steps to better serve taxpayers with limited resources by testing appointment scheduling at 44 walk-in sites. They determined that offering appointments significantly improved service availability, with fewer customers at participating sites waiting more than 30 minutes for service. Additionally, IRS made other service changes in fiscal year 2015 by providing fewer forms, instructions, and publications at walk-in sites and encouraging taxpayers to get them online instead.
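Note that the 7-point change in walk-in wait times above is a change in percentage points, not a relative percent change; the distinction matters when comparing the figures in this section, as the short sketch below illustrates using the reported values.

```python
# Percentage points versus relative percent change, using the walk-in
# wait-time figures above (about 25 -> 32 percent of customers waiting
# longer than 30 minutes for service).

before, after = 25.0, 32.0   # percent of customers waiting > 30 minutes

point_change = after - before                      # in percentage points
relative_change = (after - before) / before * 100  # relative change

print(point_change)             # 7.0 percentage points
print(round(relative_change))   # a 28 percent relative increase
```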
IRS also increased promotion of electronic payment options, such as IRS’s Direct Pay application and Facilitated Self Assistance kiosks. To promote these options, IRS updated forms, publications, and outreach materials on its website; IRS officials also said they used automated messages on the telephone, signs at walk-in sites, social media posts, and added information in notices sent to taxpayers. Consequently, total contacts at walk-in sites for forms and payments in fiscal year 2015 decreased by 32.3 and 10.3 percent, respectively, compared to the previous year. See appendix II for a full list of fiscal year 2015 service initiatives. At the 12,057 partner sites staffed by volunteers in fiscal year 2015, taxpayers could receive return preparation assistance as well as financial literacy information. These sites prepared about 3.8 million tax returns in fiscal year 2015—a 3 percent increase from the previous year. See appendix VII for additional information on taxpayer use of walk-in and volunteer site services. IRS routes each piece of correspondence through several steps before it reaches an assistor. To make the process more efficient, IRS officials said that teams within the Wage and Investment division—which oversees both Accounts Management and Submission Processing—have been working to identify opportunities to improve IRS’s performance in working and closing correspondence cases. 
According to IRS officials we interviewed at Wage and Investment headquarters and Accounts Management campuses, they formed individual teams to make correspondence handling more accurate and timely. These teams have

- coordinated reviews with IRS campuses to ensure that IRS staff scanning correspondence into IRS’s systems code it correctly so it is routed to the appropriate assistor;
- reviewed IRS’s efforts to consolidate scanning of correspondence from 10 sites to 5, and identified opportunities to standardize work processes and use resources more flexibly to address correspondence backlogs;
- identified significant differences in the procedures used at various campuses when screening correspondence before scanning it into IRS’s systems, and are working to standardize such processes; and
- helped IRS fully implement its new inventory process across all Accounts Management campuses by June 2014 and measure results of the transition.

As a result, average times for closing correspondence cases once they reached an assistor have declined since fiscal year 2013. IRS believes the new process will help it reduce unnecessary follow-up contacts with taxpayers and manage correspondence inventory in a strategic and logical manner. IRS’s launch of its Get Transcript tool—one of the service initiatives IRS implemented in fiscal year 2014—helped to drive down correspondence. Specifically, the number of transcripts sent to taxpayers via postal mail decreased about 50 percent (from about 3.3 million to 1.7 million) between fiscal years 2013 and 2014. In February 2015, Accounts Management began a pilot to improve the consistency and quality of reviews of correspondence and telephone work at selected campuses. Under the pilot, called the Centralized Evaluative Review (CER), a centralized team of technical reviewers performs monthly reviews of assistors’ work instead of the assistors’ immediate supervisors.
IRS believes CER will standardize reviews for assistors, improve the rebuttal process for both assistors and supervisors, and provide more opportunities for staff to receive one-on-one mentoring from their supervisors. We conducted discussion groups with 17 Accounts Management managers overseeing assistors at four sites, including two sites participating in the CER pilot (see appendix I for a detailed methodology of how we conducted discussion groups with assistors and managers). Front-line managers at the pilot sites told us the CER pilot shows promise. Most of the managers we spoke with at the two campuses piloting CER (seven of nine) said it was beneficial for a centralized group to perform reviews rather than frontline supervisors. IRS is taking steps toward implementing CER across all Accounts Management sites, such as drafting an implementation document for CER that IRS officials intend to update and use for nationwide implementation once IRS reaches an agreement with the union representing assistors nationwide. As of November 2015, IRS was negotiating with the union. However, even if IRS and the union ratify an agreement before the end of this year, IRS officials expect to wait until after the 2016 filing season to expand CER to other Accounts Management sites because of the difficulties in implementing new projects during the filing season. Since fiscal year 2010, IRS assistors have achieved customer accuracy scores of 85 percent or higher when working correspondence cases. However, they have made increasing numbers of errors in either not sending required correspondence to taxpayers after closing a case or sending inaccurate information in that correspondence. The number of these errors increased almost 200 percent, from 1,165 to 3,377 errors found in correspondence cases sampled by IRS, between fiscal years 2010 and 2015.
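The growth in correspondence errors cited above (“almost 200 percent”) can be confirmed directly from the sampled-error counts:

```python
# Errors found in correspondence cases sampled by IRS
errors_fy2010 = 1165
errors_fy2015 = 3377

growth = (errors_fy2015 - errors_fy2010) / errors_fy2010 * 100
print(round(growth))   # about 190 percent, i.e. almost 200 percent
```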
According to IRS officials, an analysis of these errors showed that in fiscal year 2015, more than a third of the errors originated from incorrect dates, amounts due, and other information, while another 20 percent originated from assistors failing to issue correspondence to taxpayers. Managers in our discussion groups concurred that IRS faced problems in sending accurate correspondence to taxpayers. About half (eight of 17) of the managers in our discussion groups said that a common issue was that assistors did not send required correspondence at all. In addition, seven of the 17 managers said assistors incorporated incorrect information into correspondence to taxpayers. Failure to send correspondence, or providing inaccurate information, may spur taxpayers to write to IRS again about the same problem, or to call or visit IRS, requiring additional time and resources from both taxpayers and IRS to resolve cases. IRS officials confirmed that assistors’ failure to send required correspondence was one of the most common errors made by assistors in recent years, likely stemming from inattention: assistors working quickly to get through cases may forget to send the required correspondence. In response, Accounts Management took steps to enhance training and remind assistors of the requirements for sending outgoing correspondence with accurate and complete information. For instance, in May 2014, Accounts Management developed and distributed a job aid for providing quality and timely responses to taxpayers and provided refresher training on outgoing correspondence at some sites during fiscal year 2014 training. In 2015, Accounts Management launched a communication campaign with flyers and other visual and verbal reminders for assistors to send required correspondence. Officials said errors declined after the campaign, though they gradually increased again over time.
In addition, IRS provided recommendations to Accounts Management sites for targeting defects in outgoing correspondence. IRS officials said they provided biweekly workshops where subject matter experts answered questions and assistors shared lessons learned. Officials also said they required assistors to use a checklist to confirm completion of every step of the correspondence adjustments process, including sending out required correspondence if necessary. They said that at least one IRS automated tool prompts assistors to send correspondence based on actions taken on a correspondence case. Additionally, managers have discretion to conduct 100 percent reviews of correspondence cases after they are completed to ensure that assistors send required correspondence. IRS’s Internal Revenue Manual (IRM) states that correspondence soliciting additional information or responding to inquiries must be timely, accurate, and professional, and address all issues based on information provided by taxpayers. A quality response must also request additional information as needed from the taxpayer and be written in language that the taxpayer can understand. According to the IRM, responses to taxpayers are “timely” if initiated within 30 days of the date IRS received the inquiry. Such responses may include interim letters explaining when a taxpayer can expect a final resolution on a case or a final response describing action taken by IRS to resolve the case. As previously noted, IRS generally classifies correspondence cases not processed within 45 days as “overage.” In addition, internal control standards state that agency management should design appropriate types of control activities—such as policies, procedures, techniques, and mechanisms—to achieve its stated objectives. Accounts Management officials acknowledged they do not have adequate controls in IRS’s systems to ensure assistors send out accurate and complete correspondence to taxpayers when required before closing cases.
In fact, while IRS has implemented policies, procedures, and mechanisms to help assistors send required correspondence with accurate and complete information, such steps have not been sufficient to achieve IRS’s objective; between fiscal years 2014 and 2015, the number of errors linked to outgoing correspondence rose about 29 percent (from 2,614 to 3,377), continuing the upward trend in such errors since fiscal year 2010. IRS has not formally assessed the feasibility of establishing such controls; Accounts Management officials noted that controls would be costly and difficult to build into IRS’s systems, and that not all correspondence inventories require letters to be sent to taxpayers. Without assessing the feasibility of establishing adequate controls for consistently sending accurate correspondence to taxpayers, IRS is missing an opportunity to reduce errors, provide accurate and timely responses to taxpayers, and sufficiently address their issues.

Both Congress and the executive branch have taken steps to improve customer service. The GPRA Modernization Act of 2010 (GPRAMA) requires agencies to, among other things, establish a balanced set of performance indicators to measure progress toward each performance goal, including, as appropriate, customer service. Similarly, several executive orders, presidential memorandums, and OMB guidance require agencies to take steps to strengthen customer service and describe a number of actions agencies can take to improve their customer service. In our previous reports on the IRS filing season, we have described these requirements at length and emphasized how important it is for IRS to take those actions to ensure taxpayers are receiving quality customer service. Additional background on executive orders and other guidance is provided in appendix VIII.
In response to GPRAMA, executive orders, and other policies, Treasury and IRS have taken steps to define customer service targets and align them to Treasury’s and IRS’s strategic and performance plans. For example, Treasury

- incorporated strategic goals and objectives into its fiscal year 2014-2017 strategic plan for fairly and effectively reforming and modernizing federal tax systems, and improving efficiency, effectiveness, and customer interaction, and outlined strategies to achieve them;
- established performance measures linked to those strategic goals, such as telephone level of service, taxpayer self-assistance rate, and percentage of individual returns processed electronically; and
- established an agency priority goal to increase self-service options for taxpayers, which complements OMB’s cross-agency priority goal to improve customer service in part through utilizing technology.

Similarly, IRS

- incorporated a goal of delivering high-quality and timely service in its fiscal year 2014-2017 strategic plan, along with strategic objectives, such as tailoring service approaches to taxpayers to facilitate voluntary compliance and providing timely service to taxpayers through multiple channels, and strategies to achieve them;
- listed performance measures in its congressional justification that are linked to Treasury’s performance plan, including telephone level of service, taxpayer self-assistance rate, accuracy rates for responses provided to callers, and percentage of individual returns processed electronically; and
- used its strategic plan, Taxpayer Assistance Blueprint, and other key documents to develop its joint Small Business/Self-Employed Division and Wage and Investment Concept of Operations (CONOPS) to outline its vision for the future of taxpayer services.

CONOPS also includes high-level direction, specific initiatives, and work areas that are intended to drive the achievement of its vision.
However, Treasury and IRS’s efforts fall short in several important areas:

Treasury does not list correspondence overage rates in its performance plan. Handling correspondence is expensive; IRS estimated that it cost about $818.7 million from October 1, 2014 through June 30, 2015. In response to our December 2010 recommendation, IRS started using a correspondence overage rate beginning in fiscal year 2011 to measure its timeliness in handling correspondence. However, Treasury does not include correspondence overage rates as a performance measure in its performance plan or annual financial report, inhibiting its efforts to create a complete set of customer service performance metrics for IRS. Further, Congress and other stakeholders—such as the Treasury Inspector General for Tax Administration (TIGTA) and the National Taxpayer Advocate—do not have information readily available to monitor IRS’s performance in handling correspondence from taxpayers.

IRS has not yet developed a comprehensive customer service strategy incorporating appropriate levels of taxpayer services. In December 2012, we recommended that IRS outline a strategy that lists specific steps needed to attain appropriate levels of telephone and correspondence service based on an assessment of time frames, demand, capabilities, and resources. IRS intended the joint CONOPS, which was released in July 2014, to illustrate how it wants to deliver taxpayer services moving forward. The joint CONOPS articulates compliance activities and services IRS believes are achievable within a 5-year period. It identifies 30 critical capabilities for IRS to strengthen or develop, such as inventory planning, case management, and digital account management.
It also defined initiatives and work areas to help IRS achieve its vision, such as using the Internet to submit documentation to IRS and update and amend returns, improving correspondence case management, and more accurately and quickly routing telephone calls to resolve taxpayers’ issues. While the joint CONOPS outlined a target of achieving about 90 percent closure of compliance cases within a filing year, it did not define the levels of telephone and correspondence service that IRS believes are appropriate. As a result, IRS cannot fully articulate the appropriate levels of telephone and correspondence service as it seeks to transition demand to self-service channels.

IRS has not yet developed a telephone measure benchmarked to the best in business or customer expectations. IRS requested about $186 million for fiscal year 2016 to help the agency reach its goal of increasing telephone level of service to 80 percent, in part by hiring more assistors and investing in information technology (IT) improvements. IRS last reached this level of service in fiscal year 2007. According to IRS officials, they use a planning process and strategy designed to achieve the highest level of service based on available resources and competing priorities, including funding statutorily required responsibilities such as implementing the Patient Protection and Affordable Care Act (PPACA) and the Foreign Account Tax Compliance Act (FATCA). This contrasts with our December 2014 recommendation that IRS set its level of service based on a comparison to private-sector organizations providing a comparable or analogous service—or the “best in the business”—to identify gaps between actual and desired performance. In addition, IRS has not implemented our December 2010 recommendation to determine a customer service telephone standard based on the quality of service provided by comparable organizations or on what matters most to the customer.
Treasury and IRS officials noted that IRS faces budgetary and legislative challenges not experienced by private-sector organizations. IRS officials also believe that establishing a standard measure for telephone service would give the impression that IRS plans to fall short of that standard in years when funding for taxpayer services is reduced. However, by not comparing customer service performance against the best in business or customer expectations, IRS is missing opportunities to illustrate gaps between actual and desired service and to provide additional information to Congress about the resources IRS believes are needed for taxpayer service.

IRS has not thoroughly examined all of the services provided by telephone assistors to determine which can be provided via automated phone calls and online services. IRS has taken steps to determine the service channels taxpayers prefer to use for tasks such as submitting documentation, obtaining updates on the status of a taxpayer case, or setting up a payment plan. For example, in part due to our prior work, IRS developed an automated telephone line and online tool that enable taxpayers to receive information on amended returns submitted to IRS and on locations of Volunteer Income Tax Assistance sites. However, IRS has not fully assessed the services it provides on other telephone lines to determine whether it can divert demand to automated phone calls and online applications. For example, IRS has not explored the costs and benefits of automating the process for ordering IRS forms. IRS officials told us that these calls are answered by a contractor that hires individuals with disabilities, and thus they are reluctant to change this option. However, automating such calls would free up resources for services only IRS can provide, such as answering questions about account information.
Without a careful review of the services provided by telephone assistors to determine which can be provided through other channels, IRS is missing opportunities to reduce telephone call volumes and effectively meet taxpayers’ needs for services at a lower cost. In October 2015, Treasury officials said they are not inclined to develop a comprehensive strategy because IRS already has a sufficient number of customer service performance goals. Further, in September 2015, officials from IRS’s Planning, Programming and Audit Coordination office said they were drafting an enterprise-wide CONOPS covering all IRS operations with a goal of providing more efficient and effective taxpayer services. They said they will define a more balanced view of customer service that illustrates future use of telephone and correspondence service as IRS expands its online services. They plan to release a draft of the enterprise-wide CONOPS to Congress and other external stakeholders in early 2016 and incorporate it into IRS’s strategic plan beginning in spring 2016. However, IRS’s Planning, Programming and Audit Coordination officials told us they did not envision the enterprise-wide CONOPS incorporating specific goals for telephone and correspondence performance in line with what customers would expect, or the resources needed to reach them. Without defining a comprehensive strategy with specific goals for customer service tied to the best in business and customer expectations, Treasury and IRS are not effectively conveying to Congress the types and levels of customer service expected by taxpayers and the capabilities and resources IRS requires to achieve those levels.

IRS opened the 2015 filing season on the earliest starting date since 2012, despite having to implement challenging initiatives. IRS was able to both ensure compliance with FATCA and implement multiple tax law changes that passed late in 2014.
In spite of these challenges, IRS officials and tax preparation industry stakeholders reported relatively few problems processing returns, which IRS attributed primarily to significant advance planning. To its credit, IRS was able to implement these changes while processing about the same number of returns and refunds as last year. Table 1 shows that IRS continues to see a decrease in paper return processing and an increase in electronic processing, which has many benefits for taxpayers, such as improved convenience, higher accuracy rates, and faster refunds. One area where IRS did experience problems, however, was verifying taxpayers’ Premium Tax Credit (PTC) claims, due to health insurance marketplaces either not meeting the deadlines for providing IRS with complete health care coverage information or submitting information that was inaccurate. As we reported in July 2015, IRS had incomplete or delayed marketplace data to verify claims at the time of return filing and did not know whether these challenges were a single-year or an ongoing problem. We concluded that, without complete and accurate information from the marketplaces, IRS cannot effectively verify the amount of the premium tax credit that taxpayers are eligible to receive, or the amount that may have been paid on their behalf to an insurer in the form of an advance premium tax credit. We found that IRS needed to strengthen oversight of PPACA tax provisions for individuals and made several recommendations designed to strengthen oversight of PTC provisions, which IRS generally agreed to implement.

IRS’s Primary Processing Units for Correcting Errors (sidebar): Errors can occur on tax returns because of mistakes made by both taxpayers and IRS. When processing returns, one of IRS’s responsibilities is to correct these errors. IRS generally does this in three processing units:

Error Resolution System—Corrects a wide range of simple errors, such as missing schedules or forms, using Math Error Authority.

Rejects—Corrects incomplete returns by corresponding with taxpayers to request information, such as missing forms.

Unpostables—Corrects returns that failed to pass validity checks and cannot be recorded (or posted) to the taxpayer’s account, such as incidents associated with identity theft.

IRS has three units that correct errors—the Error Resolution System, Rejects, and Unpostables (see sidebar). Errors can cause ripple effects as returns move through processing and can significantly delay how long it takes to process a return. For example, if a taxpayer did not include a required tax form, examiners responsible for preparing the return for data entry will send a letter to the taxpayer requesting the missing form. Once the taxpayer submits the form, the return can be sent to the next unit for data entry. For more details on how IRS processes returns and corrects errors, see appendix IX. When IRS has to correct errors, it takes longer to process a return, which can result in IRS paying interest to the taxpayer; such interest is required if IRS takes longer than 45 days after the filing deadline to issue a refund. Consequently, as the number of errors increases, IRS may pay more interest. IRS officials said they do not collect complete information on the reasons why IRS pays refund interest; however, they do conduct quality reviews of the cases in which IRS paid the largest amounts of interest. These reviews show that multiple types of processing delays resulted in large interest payments. IRS officials attribute a rise in interest paid since 2011 in part to its filters catching more identity theft-related fraudulent returns, which causes delays as IRS takes additional steps to authenticate the legitimate return; these steps can take longer than 45 days. Figure 6 shows that the amount of interest IRS paid has generally trended in the same direction as the total number of errors IRS identified.
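As a rough illustration of why processing delays translate into interest costs, the sketch below applies the 45-day rule described above. The function name, the flat annual rate, and the simple (non-compounding) formula are assumptions made for illustration only; the actual statutory computation compounds interest daily at rates set by law.

```python
from datetime import date

def refund_interest_estimate(refund, filing_deadline, issue_date, annual_rate):
    """Rough estimate of interest owed on a delayed refund.

    Interest is owed only if the refund is issued more than 45 days
    after the filing deadline. Simple daily interest is used here as
    an approximation; the statutory computation compounds daily.
    """
    days_late = (issue_date - filing_deadline).days - 45
    if days_late <= 0:
        return 0.0
    return refund * annual_rate * days_late / 365

# A $3,000 refund issued 100 days after an April 15 deadline,
# at an assumed 3 percent annual rate (hypothetical figures):
est = refund_interest_estimate(3000, date(2015, 4, 15), date(2015, 7, 24), 0.03)
print(round(est, 2))  # about 13.56
```

A few dollars per return seems small, but multiplied across the millions of returns routed through the error-correction units, even modest per-return delays accumulate into the interest totals shown in figure 6.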
Even though millions of returns are corrected in the Error Resolution System, Rejects, and Unpostables units, IRS excludes many of these returns from its refund timeliness performance measure, which tracks the percentage of refunds issued within 40 days. Instead, the measure includes only paper-filed individual income tax returns and some returns that contain errors. In 2011, we reported that this measure and its goal are outdated and have not significantly changed since 2003. We recommended that IRS develop a new refund timeliness measure and goal to more appropriately reflect current capabilities. IRS officials said they would reassess both. In July 2014, IRS reported that it had decided not to develop a new refund timeliness measure, stating that implementation of the Customer Account Data Engine 2 daily processing, promotion of electronic filing, and newly implemented filters for identity theft eliminated the need to change the measure. Since then, the percentage of returns processed electronically has increased from 78 percent to 86 percent. Furthermore, in August 2015, IRS officials told us that about 90 percent of refunds are issued within 21 days. These officials expressed concern that focusing only on timeliness could jeopardize the balance between quickly issuing refunds and ensuring that refunds are accurate and issued to the correct individuals. It is important that IRS issue refunds on time because when they are late, IRS is required to pay interest and taxpayers’ refunds are delayed. We reiterate our prior recommendation that IRS develop a new refund timeliness measure and goal. Without including electronically filed returns in either the current measure or a separate one, IRS is not fully or accurately reporting on its performance in issuing timely refunds, and omitting returns with errors further compounds these issues.
As a result, IRS is not fully monitoring opportunities to improve how efficiently it processes returns and issues refunds. We have previously reported that GPRAMA requires agencies to establish a balanced set of performance indicators to be used in measuring progress toward performance goals, including customer service. In its fiscal year 2014-2017 strategic plan, IRS acknowledges the importance of measuring customer satisfaction related to processing tax returns. An IRS unit reviewed submission processing operations and found opportunities to improve service delivery and the way returns are processed: in a narrowly focused review in 2011, a team from IRS’s Lean Six Sigma office identified 16 opportunities to improve submission processing operations. In addition, IRS officials told us they review processing operations and make incremental changes when preparing for each filing season. However, these reviews do not include comprehensive assessments of long-term or potentially systemic inefficiencies in IRS’s return processing operations. IRS officials said they do not have procedures to periodically or regularly evaluate how they process returns. Such assessments are important because the longer it takes IRS to process a return, the more likely refunds will be delayed and the more interest IRS will pay. During our current review, we found multiple opportunities for IRS to generate savings and efficiencies in its return processing operations. From our discussion groups with Submission Processing frontline staff and managers at the three sites that process individual tax returns, observations at a processing center, and interviews with senior officials, we identified opportunities that could potentially improve return processing and reduce errors. For example, we found that IRS’s procedures result in premature correspondence with taxpayers in certain instances.
For returns filed on paper, examiners who prepare returns for processing may prematurely correspond with the taxpayer, which contributes to delays in processing. In our discussion groups, 8 of 16 tax examiners in the error resolution and rejects units said there are restrictions on when they can contact a taxpayer to correct an error. The Internal Revenue Manual (IRM) states that tax examiners are generally allowed to correspond one time with taxpayers when processing a return, though in certain limited circumstances a second correspondence is permitted. These same examiners said that for returns filed on paper, when examiners who prepare returns for processing correspond with the taxpayer, others in the error resolution system are prohibited from making additional contact. For example, if a taxpayer did not include necessary information for claiming a tax deduction or credit and an examiner already corresponded with the taxpayer to request it, other examiners would be unable to correspond with the taxpayer any further about that deduction or credit. In such a case, the return is suspended from processing until the taxpayer responds. If the taxpayer does not reply or provides incomplete information, then IRS processes the return excluding the information in question and notifies the taxpayer of the change. If the taxpayer disagreed with IRS’s resolution, the taxpayer would have to file an amended return, which takes additional time and resources for the taxpayer as well as IRS. Ensuring that all errors are identified to the fullest extent possible before corresponding with the taxpayer would help IRS streamline processes and reduce burden on taxpayers when attempting to correct their returns for processing.

IRS is not collecting performance data about some of the errors corrected during tax return processing.
IRS does not estimate how long it takes to process a return with or without an error, how long it takes to resolve specific types of errors compared to others, or how many errors result from its employees incorrectly transcribing data. In addition, IRS does not collect information on the percentage of documents that will not post to a taxpayer’s account, by type, such as tax returns or payments. As we have previously reported, key practices for managing for results include the use of performance information to make the decisions necessary to improve performance. By not collecting such data, IRS is limited in its ability to monitor and improve the processing of tax returns. According to IRS officials, collecting such data could be difficult because IRS’s computing systems are not set up to do so.

IRS frontline managers and staff who correct errors on individual taxpayer returns identified weaknesses in their training. In discussion groups with us, 20 of 32 frontline managers and staff raised concerns about the quality of training. Among the weaknesses they identified were that training did not coincide with the work they received, that the trainers were not adequately prepared to teach, and that training designed to improve interpretation of certain sections of the IRM was inadequate. The IRS Oversight Board reported similar training concerns last year. Although IRS has provided more training to tax examiners who correct errors since 2010, performance problems have persisted. For example, for the units that process errors on individual taxpayer returns, accuracy ratings were below the baseline performance standard half the time from fiscal year 2013 through June 30, 2015. IRS officials acknowledged the challenges in providing timely training, particularly given uncertainties in the level and timing of appropriations, which affect IRS’s ability to hire and train before the filing season begins.
In addition, officials explained that individual business units assess their training needs every year and conduct training accordingly. However, it is unclear to what extent the performance issues are the result of training gaps. IRS eliminated or reduced some services in fiscal year 2014 and redirected taxpayers to lower-cost channels to focus on core taxpayer services that only IRS can provide (see appendix II for a full list of the fiscal year 2014 service initiatives). As a result, some taxpayers lost access to services previously provided and had to seek assistance from other sources, such as paid tax preparers. We estimated that IRS realized about $50 million, which it shifted to core services, after spending about $356,000 on implementing these initiatives. Figure 7 shows the estimated resources realized by each initiative. IRS said it redirected 515 assistor full-time equivalents (FTEs) to answer telephone calls on issues that only IRS could help resolve. It also redirected 160 walk-in site FTEs to respond to questions about balances due to IRS, math errors, refunds, identity theft, and other inquiries into taxpayers’ accounts. In turn, according to IRS, this enabled it to provide a higher level of service and lower wait times than expected for callers seeking live assistance in fiscal year 2014. These actions are examples of the difficult tradeoffs that we recommended IRS make to provide more timely telephone and correspondence services. IRS’s actions also helped the agency move toward its vision of transitioning taxpayer demand for assistance to lower-cost, self-service options. IRS has made mixed progress addressing our prior filing season-related recommendations. For example, IRS implemented one recommendation from our 2014 filing season report by establishing performance measures and plans for assessing the effectiveness of service initiatives.
IRS also implemented recommendations to improve web services, such as identifying potential risks for interactive products in development and summarizing mitigation plans needed to address such risks. However, IRS has not fully implemented 21 other recommendations that are intended to help increase transparency of its performance, reduce taxpayer burden, and improve service and compliance. These include a recommendation designed to help IRS obtain the information needed to weigh the potential risks, costs, and benefits of options for implementing a “Real Time Tax” system, which would improve verification of income tax returns by matching third-party information to such returns before refunds are issued. IRS can also take steps to implement our prior recommendations on combating identity theft refund fraud to strengthen present defenses against refund fraud while also developing new strategies for both electronic and paper returns that stop such fraud at all stages of return processing. Our prior work also identified actions Congress could take to enhance IRS’s Math Error Authority (MEA), which allows IRS in limited circumstances to correct calculation errors and check for other obvious noncompliance. Since 2008, we have raised five matters for Congress to consider in providing IRS with additional MEA. In November 2009, in response to our suggestion, Congress acted to provide limited MEA for correcting errors on First-Time Homebuyer Tax Credit claims, but four other matters on MEA remain open (see appendix X for details). In fiscal years 2015 and 2016, the administration included legislative proposals that would grant Treasury regulatory authority to expand IRS’s use of MEA, which is consistent with what we suggested in February 2010.
These proposals would allow IRS to correct computational errors and incorrect use of tables provided by IRS and would add a new category of correctable error where the (1) information provided by the taxpayer does not match the information contained in government databases, (2) taxpayer has exceeded the lifetime limit for claiming a deduction or credit, or (3) taxpayer has failed to include documentation required by statute with his or her return. This broader MEA, with appropriate safeguards, would give IRS the flexibility to respond quickly as new uses for the authority emerge in the future. Expanding opportunities to use MEA is also important because it could help IRS correct additional errors during return processing, which would save resources by reducing processing delays and the need for burdensome audits. For example, Congress could address two matters we previously suggested if it granted Treasury regulatory authority to expand IRS’s use of MEA to correct errors in certain cases, such as where the taxpayer has exceeded the lifetime limit for claiming a deduction or credit. According to the Joint Committee on Taxation, by doing so, the federal government could cumulatively save about $166 million between fiscal years 2015 and 2025. We also identified actions Congress could take to reduce identity theft refund fraud. In August 2014, we suggested that Congress consider providing the Secretary of the Treasury with regulatory authority to lower the threshold for electronic filing of W-2s from 250 returns annually to between 5 and 10 returns, as appropriate. By providing such authority, Congress can help support IRS’s efforts to conduct more pre-refund matching of W-2 information.

The severe decline in IRS’s customer service in fiscal year 2015 underscores how urgent it is for IRS to make tough decisions to improve services.
In light of IRS’s reduced budget and expanding responsibilities, we have reported for several years that IRS needs to dramatically revise its approach to customer service. While IRS’s fiscal year 2014 service initiatives resulted in efficiency gains, they do not go far enough, as evidenced by the extremely low level of service the agency delivered in 2015. IRS needs a longer-term strategy to manage its budgetary and workload environment. To that end, we are concerned that Treasury and IRS do not believe they need to develop a comprehensive customer service plan that sets targets for appropriate levels of telephone and correspondence service based on service provided by the best in business and on customer expectations. We continue to believe that implementing our previous recommendation would enable IRS to make more informed requests to Congress about the resources required to deliver desired levels of service. IRS has taken noteworthy actions to improve customer service, such as the Centralized Evaluative Review pilot, which shows promise for improving correspondence and telephone work. There are also other opportunities for Treasury and IRS to improve correspondence services and measure performance, such as including performance targets for correspondence in Treasury’s performance plan. This would enhance Congress’s understanding of IRS’s customer service performance and challenges. IRS continues to realize efficiencies in processing taxpayer returns through e-file; however, without conducting performance evaluations of its return processing, IRS is missing opportunities to reduce processing delays that contribute to the refund interest it pays taxpayers. Identifying efficiencies that both reduce common taxpayer errors and allow IRS to more quickly process new types of errors could save the government money in interest paid.
Examples of efficiencies we identified during our observations at IRS sites include tracking information on errors IRS corrects and identifying training needs that could improve performance for units that process errors on individual taxpayer returns. Conducting performance evaluations would likely enable IRS to find these and similar opportunities to improve processes.

To improve taxpayer service amid declining budgets and increased responsibilities, Congress should consider requiring the Secretary of the Treasury to develop a comprehensive customer service strategy in consultation with the Commissioner of Internal Revenue that (1) determines appropriate telephone and correspondence levels of service, based on service provided by the best in business and customer expectations; and (2) thoroughly assesses which services IRS can shift to self-service options.

To improve performance management of taxpayer services, we recommend that the Secretary of the Treasury update the Department’s performance plan to include overage rates for handling taxpayer correspondence as a part of Treasury’s performance goals.

To improve taxpayer service and gain efficiencies, we recommend that the Commissioner of Internal Revenue take the following two actions:

1. Assess the feasibility of setting up a control in IRS systems requiring assistors to send out required correspondence to taxpayers prior to closing a correspondence case.

2. Periodically conduct performance evaluations of IRS return processing operations to identify inefficiencies. The initial evaluation could include, for example, assessing when to correspond with taxpayers whose returns contain errors, collecting additional data on errors that IRS corrects, and closing training gaps that are hindering performance for units that process errors on individual taxpayer returns.

We provided a draft of this report to the Secretary of the Treasury and the Commissioner of Internal Revenue.
Treasury and IRS provided written comments, which are reprinted in appendixes XI and XII, respectively. IRS also provided technical comments, which we incorporated where appropriate. Treasury neither agreed nor disagreed with our recommendation to update the Department’s performance plan to include correspondence overage rates as a part of Treasury’s goals. Treasury stated that it meets regularly with IRS leadership to review progress toward goals and strategy decisions and that it will continue to work with IRS to improve managing and reporting its performance. IRS agreed with both recommendations directed to it. Regarding our recommendation to set up a control in IRS systems to require assistors to send required correspondence before closing a case, IRS stated that it would analyze its options for bolstering controls to address correspondence concerns. For our recommendation to conduct periodic performance evaluations of IRS return processing operations to identify inefficiencies, IRS stated that it would consider opportunities for improving existing processes that identify common errors requiring correction and/or correspondence with taxpayers. IRS noted that its long-term vision for tax administration is to modernize taxpayer service, focusing on options to meet taxpayers’ needs and preferences. This would include online tax account access that would enable taxpayers to make adjustments, such as correcting errors. Finally, to further identify inefficiencies and improve performance, IRS stated that it would review and improve employee training where appropriate. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issuance date. At that time, we plan to send copies of this report to the appropriate congressional committees. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XIII. Our objectives in this report were to 1. assess how well the Internal Revenue Service (IRS) provided customer service compared to its performance in prior years and identify opportunities for IRS to streamline services, 2. assess how well IRS processed individual income tax returns compared to its performance in prior years and identify opportunities for IRS to streamline processing, and 3. determine what resources IRS realized from implementing service initiatives and describe IRS's progress toward implementing our prior filing season-related recommendations. To answer the first and second objectives, we obtained and analyzed IRS documents and data, including performance, budget, and workload data for return processing and taxpayer services, and used this information to compare IRS’s performance in 2015 to prior years (2010 through 2014) to identify trends and anomalies; identified federal standards for evaluating customer service, such as the Government Performance and Results Act Modernization Act and executive orders, presidential memorandums and Office of Management and Budget guidance to strengthen customer service, and compared Department of the Treasury and IRS actions to those standards; visited IRS facilities in Austin to observe return processing and assistors handling correspondence, and the Joint Operations Center (which manages IRS’s telephone operations) in Atlanta to observe assistors answering taxpayer calls and correspondence; interviewed officials from IRS’s Wage and Investment division (which is responsible for managing filing season 
operations) and external stakeholders, including tax administration experts from major tax preparation and software firms who interact with IRS on key aspects of the filing season, to obtain contextual information about IRS’s performance; interviewed officials from the Department of the Treasury and IRS to discuss goals and strategies to improve taxpayer services and steps they have taken to measure performance in delivering such services; conducted 10 discussion groups with IRS frontline staff and managers located at five IRS campuses. Four of the discussion groups were with assistors who answer telephone calls or respond to correspondence or frontline managers who oversee the assistors’ work. The assistors and managers worked in Atlanta; Austin; Kansas City, Missouri; and Philadelphia. Six of the groups were with tax examiners in Austin; Fresno, California; and Kansas City, Missouri, who are responsible for correcting errors and processing individual taxpayer returns. To identify group participants, we asked IRS officials for the contact information of staff located at each campus with the responsibilities described above. We then contacted a select number of assistors and tax examiners directly to schedule the meetings. We conducted six discussion groups in person and four via conference call. Each group contained four to seven participants. To encourage participants to speak openly, we ensured that no senior IRS management officials were present during the discussions, and we separated staff and managers into different groups. At the beginning of each group we explained that any comments and opinions provided would be reported in summary form. We developed and administered a standardized discussion guide to improve the quality of information gathered. Our questions for assistors focused on their experiences and suggestions, if any, for how IRS can more efficiently conduct its correspondence and telephone processes. 
We discussed the benefits and drawbacks of Centralized Evaluative Review at the campuses that piloted it. We asked examiners about their experiences processing returns with errors and what suggestions, if any, they had for IRS to process such returns more efficiently. To determine what resources IRS realized from implementing service initiatives, we first calculated the total gross dollars IRS saved by implementing each of the six service initiatives and redirected toward other purposes. We determined this amount by multiplying full-time equivalents (FTE) redirected by salaries and benefits per FTE using data provided by IRS. Next, we calculated total costs of implementing each of the service initiatives, then subtracted the amount from total gross dollars saved to calculate IRS resources realized from implementing each of the six initiatives in fiscal year 2014 dollars. IRS officials concurred with our approach and calculations. To describe IRS’s actions to implement our prior recommendations, we reviewed relevant documentation, including IRS Joint Audit Management Enterprise System reports tracking IRS’s actions to implement our recommendations, and obtained information from IRS officials. To identify data limitations and assess data reliability, we reviewed IRS data and documentation, assessed documentation for data limitations, and compared those results to our data reliability standards. We consider the data presented in this report to be sufficiently reliable for our purposes. We reported in our prior work that the Internal Revenue Service (IRS) was struggling to provide taxpayers access to services despite regularly realizing efficiency gains, and that IRS’s performance would likely continue to suffer unless it made tough choices about what services to provide. 
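The resources-realized calculation described above (total gross dollars saved, computed as FTEs redirected multiplied by salaries and benefits per FTE, minus the cost of implementing the initiative) can be illustrated with a minimal sketch. The FTE count, cost per FTE, and implementation cost below are hypothetical figures for illustration only, not IRS data.

```python
# Sketch of the resources-realized calculation described above.
# All figures are hypothetical illustrations, not actual IRS data.

def resources_realized(ftes_redirected, cost_per_fte, implementation_cost):
    """Gross savings (FTEs redirected x salary and benefits per FTE)
    minus the cost of implementing the initiative."""
    gross_savings = ftes_redirected * cost_per_fte
    return gross_savings - implementation_cost

# Hypothetical initiative: 50 FTEs redirected at $85,000 each
# (salary plus benefits), with $1.2 million in implementation costs.
net = resources_realized(50, 85_000, 1_200_000)
print(f"Net resources realized: ${net:,.0f}")
```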
Consistent with these findings, IRS implemented service changes in fiscal years 2014 and 2015 by reducing or eliminating certain telephone and walk-in services, and redirecting taxpayers toward other service channels such as IRS’s website. Fiscal year 2014 service changes: 1. Limited telephone assistance to only basic tax law questions during the filing season and reassigned assistors to work account-related inquiries. 2. Eliminated free return preparation and reduced other services at IRS’s walk-in sites. 3. Launched the “Get Transcript” tool, which allows taxpayers to obtain a viewable and printable transcript on irs.gov, and redirected taxpayers to automated tools for additional guidance. 4. Redirected refund-related inquiries to automated services and did not answer refund inquiries until 21 days after a tax return was filed electronically or 6 weeks after a return was filed by paper (unless the automated service directed the taxpayer to contact IRS). 5. Limited access to the Practitioner Priority Service line to only those practitioners working tax account issues. 6. Limited live assistance and redirected requests for domestic employer identification numbers to IRS’s online tool. Fiscal year 2015 service changes: 1. Redesigned notices to clearly state why the notice was issued; if a response is required; what action, if any, is required; and inform taxpayers about online resources and self-service tools as an alternative to calling or writing the IRS. 2. Expanded use of the Oral Statement Authority tool to reduce the amount of written correspondence to resolve penalty relief requests. 3. Directed taxpayers who meet the Online Payment Agreement qualifications to use a tool online (and at kiosks where available) to apply for and set up installment payment agreements instead of calling or visiting IRS. 4. 
Reduced the volume of IRS products at walk-in sites and community outlets, including forms, instructions, and publications that are available online at IRS.gov, and encouraged taxpayers to use available online sources. 5. Reduced the number of walk-in sites accepting payments by cash and more heavily promoted electronic payment options, such as IRS Direct Pay, as an alternative to such payments made at a walk-in site or by mail. IRS changed the name of the product line from "Identity theft" in May 2013. IRS merged the previous International and International-Employer Identification Number lines to this combined product line on October 1, 2012. Executive orders require agencies to take steps to strengthen customer service, and presidential memorandums and Office of Management and Budget (OMB) guidance describe a number of actions agencies can take to improve their customer service. In our previous reports on the IRS filing season, we have described these requirements at length and emphasized how important it is for IRS to take those actions to ensure it is providing the best taxpayer service possible while informing Congress about resources needed to improve the level of service provided to taxpayers.
Executive Order 12862, Setting Customer Service Standards, was issued in September 1993 and requires that all executive departments and agencies that “provide significant services directly to the public shall provide those services in a manner that seeks to meet the customer service standard established” which is “equal to the best in business.” A related presidential memorandum, issued in March 1995, also notes that customer service standards should reflect customer views, and an OMB memorandum issued in March 2015 reemphasizes that agencies “must keep pace with the public's expectations and transform its customer services by regularly soliciting and acting on customer feedback, streamlining processes, and delivering consistent quality across customer service channels.” In addition, we have reported that performance data should be used to identify and analyze the gap between an organization’s actual performance and desired outcomes, including by setting performance benchmarks to compare an organization with private organizations that are thought to be the best in their field. Executive Order 13571, Streamlining Service Delivery and Improving Customer Service, was issued in April 2011 to strengthen customer service and required agencies to develop and publish a customer service plan, in consultation with OMB. We also identified other memorandums and guidance that OMB has issued to agencies since 1995 describing a number of actions to improve customer service, including setting, communicating, and using customer service standards. For instance, in July 2014, to help agency leadership focus on this issue, OMB issued guidance that agencies include additional customer service information with their fiscal year 2016 budget submissions. The Internal Revenue Service’s (IRS) method for processing returns is a complex operation because multiple units are involved. Figure 9 illustrates the numerous steps IRS goes through to both process returns and correct errors.
Electronic returns move quickly to processing once IRS receives them, while paper returns must first go through multiple additional steps. When returns are processed, IRS checks for errors and quickly corrects those that it can and notifies the taxpayer of missing documents when it cannot, such as a missing form or information return. In certain instances, after IRS has tried to post the return to the taxpayer’s account, it identifies that certain returns cannot post, such as identity theft returns, and attempts to resolve these unpostable returns. The following tables present our prior matters for Congress and recommendations to the Internal Revenue Service (IRS) related to IRS’s filing season operations that had not been implemented as of October 20, 2015. The most recent information available on the status of matters and recommendations for each GAO report listed in the tables below may be found by clicking on the web link for each report. In addition to the contact named above, Joanna Stamatiades, Assistant Director, Erin Saunders Rath, Analyst-in-Charge, Lyle Brittain, Jehan Chase, James Cook, Robert Gebhart, Shelby Kain, Kirsten B. Lauber, Donna Miller, Mark Ryan, and Ardith Spence made key contributions to this report.
During tax filing season, IRS processes tax returns, issues refunds, and provides telephone, correspondence, online, and face-to-face services. GAO has reported that in recent years IRS has absorbed significant budget cuts and struggled to provide quality service. GAO was asked to report on the results of IRS's performance during the 2015 filing season. For this report, GAO assessed IRS's taxpayer service and individual income tax return processing. GAO also identified opportunities to streamline services and processes, among other issues. GAO analyzed IRS documents and data, and observed operations at IRS processing and telephone sites. GAO compared IRS performance to prior years and its actions to federal standards for evaluating performance. GAO also interviewed IRS officials and external stakeholders, and conducted discussion groups with IRS frontline staff and managers. The Internal Revenue Service (IRS) provided the lowest level of telephone service during fiscal year 2015 compared to prior years, with only 38 percent of callers who wanted to speak with an IRS assistor able to reach one. This lower level of service occurred despite lower demand from callers seeking live assistance, which has fallen by 6 percent since 2010 to about 51 million callers in 2015. Over the same period, average wait times have almost tripled to over 30 minutes. IRS also struggled to answer correspondence in a timely manner and assistors increasingly either failed to send required correspondence to taxpayers or included inaccurate information in correspondence sent. IRS has taken steps to remind assistors to send correspondence, but does not have adequate controls to ensure that they send accurate correspondence before closing cases. GAO also found that the Department of the Treasury (Treasury) does not include correspondence performance goals in its performance plan, and therefore, does not have a complete set of measures to assess performance. 
The decline in service has coincided with a 10 percent reduction in IRS's annual appropriations, as well as resource allocation decisions by IRS to meet statutory responsibilities, such as implementing tax law changes and supporting information technology infrastructure. More importantly, GAO found that Treasury and IRS have not developed, and have no plans to develop, a comprehensive customer service strategy to define appropriate service levels and benchmark to the best in business or customer expectations, as GAO has previously recommended. Without such a strategy, Treasury and IRS can neither measure nor effectively communicate to Congress the types and levels of customer service taxpayers should expect, and the resources needed to reach those levels. Similarly, while IRS officials and stakeholders reported few problems with processing individual tax returns, GAO identified some inefficiencies related to tax processing, such as premature correspondence with taxpayers and inadequate training for frontline staff. These inefficiencies warrant further evaluation to determine if additional improvements are needed. Congress should consider requiring Treasury to develop a comprehensive customer service strategy in consultation with IRS. Treasury should update its performance plan to include goals for correspondence. IRS should assess the feasibility of a control to require assistors to send out required correspondence and evaluate return processing operations to identify inefficiencies. Treasury neither agreed nor disagreed with GAO's recommendation to update its performance plan but said it would coordinate with IRS. IRS agreed with GAO's two other recommendations.
The municipal securities market comprises both primary and secondary markets. In the primary market, underwriters buy new securities from municipal issuers (e.g., local government entities) and subsequently sell them to investors during the primary offering. Municipal securities that trade after the primary offering are said to trade in the secondary market, with both institutions and individuals participating. Institutional investors typically trade municipal securities in amounts of $1 million or more and generally are in the market full-time to provide or preserve income as well as maximize investment returns for their clients or firms. In contrast, individual investors typically trade municipal securities in amounts of $100,000 or less and access the market relatively infrequently, with the intent to buy and hold securities until maturity. Many individual investors find municipal securities an attractive investment option because of the tax advantages these instruments offer. Unlike the dividends on equity securities (which also trade in a market with considerable individual investor participation), the interest on most municipal securities is exempt from federal income tax and, in some cases, from state or local income tax. In addition, as debt instruments municipal securities are generally considered less risky than equity securities. For example, issuers of debt securities have a contractual obligation to return the principal value of the security to the holder at maturity, while issuers of equity securities do not. Further, issuers of debt securities also have a contractual obligation to pay investors a fixed or variable rate of interest income. On the other hand, dividend payments to shareholders of equity securities are decided by the company’s board of directors. Data on the number of municipal issuers and outstanding municipal securities are not officially tracked by regulators or the private sector.
Third-party information vendors provide a range of estimates; data we obtained from one indicated the municipal securities market has over 46,000 municipal issuers, including states, counties, cities, towns, and state and local government agencies, among others, and at least 1.1 million securities outstanding. In contrast, about 5,700 public companies list their equity securities for trading on the major U.S. exchanges. Each municipal issuance is unique, with its own credit structure, terms, and conditions. Most outstanding municipal securities trade infrequently—for example, in 2010 about 99 percent of outstanding municipal securities did not trade on any given day. The heaviest trading of municipal securities typically occurs immediately following their issuance, after which trading becomes sporadic. The municipal securities market is geographically fragmented, with secondary market trading supported by national and regional broker-dealer firms that serve institutional investors (institutional broker-dealers) or individual investors (retail broker-dealers), and in some firms, both. Several national broker-dealer firms have enough capital and geographic presence to underwrite large new issuances nationwide, trade in large volume with institutional investors, and offer expertise in virtually every sector of the market. Some midsized broker-dealer firms also have nationwide coverage for institutional and individual investors on a smaller scale. But other broker-dealer firms provide inventory and expertise in well-defined geographic areas, allowing them to serve individual investors—many of whom invest in municipal securities to enjoy state or local income tax benefits—as well as institutional investors who need access to local markets.
Given the heterogeneity and variety of municipal securities available, the fact that they are traded infrequently, and the geographic fragmentation of the market, broker-dealers typically work with their customers to find available securities that fit preferred parameters (e.g., geographic location, yield, credit quality, or price) instead of specific securities. Most broker-dealers execute trades as principal, trading securities from their proprietary accounts. Broker-dealers may also use broker’s brokers and electronic trading platforms to trade in the secondary market; these channels offer features that allow users to share information on and post offerings, obtain and provide bids on securities, and conduct research and analysis. Currently, about 20 broker’s brokers promote additional liquidity and facilitate information flow in the municipal securities markets by specializing in segments of the market (by region, issuer, or type of security) and helping broker-dealers find buyers for their securities in their areas of expertise. They do so primarily by arranging auctions called bid wanted procedures (bids wanted) for broker-dealers that are selling securities, particularly in unfamiliar areas of the market. Broker-dealers can also buy and sell municipal securities for their customers through electronic trading platforms that combine inventories from market participants, typically broker-dealers, into one location, thus enabling users—mostly broker-dealers and, in some cases, institutional investors—to search for, buy, and sell municipal securities from a single site. Some of these trading platforms focus on trading for the individual investor market. However, individual investors typically do not have direct access to these trading platforms, although they may have indirect access through a retail broker-dealer. About 1,800 securities firms and banks are registered with MSRB as broker-dealers of municipal securities.
As an SRO, MSRB develops rules for broker-dealers engaged in underwriting, trading, and selling municipal securities with the goals of protecting investors and issuers and promoting a fair and efficient marketplace. To further its mandate to protect investors, MSRB also operates information systems designed to promote post-trade price transparency and access to municipal securities issuers’ disclosure documents. MSRB provides this access free of charge through its Electronic Municipal Markets Access (EMMA) website. As we have seen, FINRA and federal banking regulators enforce MSRB rules for broker-dealers under their respective jurisdictions. FINRA oversees the 98 percent of MSRB-registered broker-dealers that are also registered members of FINRA, while federal banking regulators oversee the remaining 2 percent. SEC has designated FINRA as the entity responsible for conducting surveillance of trade data from RTRS for potential violations of MSRB rules. FINRA employs automated surveillance in its compliance monitoring that is programmed to review RTRS data for potentially excessive prices and late trading, among other rule violations. FINRA staff review alerts generated by automated surveillance systems to identify those that warrant further investigation. When FINRA finds evidence of potential violations of these rules involving those broker-dealers who are its members, it can take action ranging from informal warnings to the imposition of monetary fines to expulsion from its membership, among other sanctions. FINRA refers potential violations involving bank dealers to the appropriate federal banking regulator. During the period of our review, FINRA and the federal banking regulators conducted routine examinations of the firms under their jurisdiction once every 2 years for compliance with MSRB rules, pursuant to MSRB requirements. OCIE administers SEC’s nationwide examination and inspection program.
OCIE oversees the SROs’ compliance with federal securities laws and the SROs’ enforcement of their members’ compliance with federal securities laws and SRO rules through inspections. Inspection review areas include an SRO’s compliance, examination, and enforcement programs. OCIE also directly assesses broker-dealer compliance with federal securities laws through examinations such as cause and risk-based examinations. If examiners identify compliance findings during broker-dealer examinations, they may assess the quality of any recent FINRA examinations of the broker-dealer and provide oversight comments to FINRA. SEC’s Office of Municipal Securities is a separate office within the Division of Trading and Markets that coordinates SEC’s municipal securities activities, advises on policy matters relating to the municipal securities market, and provides technical assistance in the development and implementation of major SEC initiatives in the municipal securities area. In addition, the Office of Municipal Securities reviews and processes rule proposals filed by MSRB and acts as SEC’s liaison with MSRB, FINRA, and a variety of industry groups on municipal securities issues. SEC’s Division of Enforcement (Enforcement) investigates possible violations of securities laws, recommends commission action when appropriate, either in a federal court or before an administrative law judge, and negotiates settlements. In January 2010, Enforcement created the Municipal Securities and Public Pensions Unit, which focuses on misconduct in the municipal securities market and in connection with public pension funds.
Because of the heterogeneity of the issuers and the securities they issue, the large number of securities outstanding, and the infrequency with which these securities trade, the municipal securities market does not maintain reliable tradable quotes on all outstanding municipal securities. Consequently, broker-dealers we spoke with said they use a variety of information to determine the prices at which they are willing to buy and sell securities. We found that institutional investors traded at more favorable prices than individual investors and were generally better equipped to make independent assessments of the value of a security. SEC, MSRB, and market participants have been considering ways to improve pretrade price transparency. As we have seen, the large municipal securities market, with its many issuers and infrequent trades for a given security, does not have readily available, transparent information on the prices of securities. Municipal broker-dealers generally determine the prices at which they are willing to trade by making relative assessments of a security’s market value, drawing on various sources of information and incorporating their compensation for facilitating the trades. Several factors that broker-dealers we spoke with identified as relevant to their pricing determinations included (1) recent post-trade price information on the same or comparable securities, (2) available pretrade price information on the security or comparable securities, (3) the characteristics and credit quality of the security, (4) relevant market information, and (5) the cost of trading the security. First, when determining prices, broker-dealers said they often began by reviewing recent post-trade information on the same or similar securities. In 2005 MSRB began requiring broker-dealers to report price data on most municipal securities transactions within 15 minutes to RTRS and, in 2008, made this post-trade pricing information freely available on the EMMA website.
Broker-dealers we spoke with said that the price of a recently reported interdealer trade for a security was a particularly good indication of its value for that segment of the market. However, if a security has not traded recently, they said they instead look for recent trades in comparable securities. Broker-dealers we spoke with also said they typically access MSRB’s trade data through Bloomberg, which makes available tools to perform advanced searches and analytics on the data. These broker-dealers also said that they frequently used industry benchmarks—typically yield curves—constructed in part from the post-trade prices of selected securities as a reference for pricing similar securities. Representatives of broker-dealers we interviewed explained that post-trade information provided them with an understanding of real-time trends in the demand for similar types of securities. For example, a major electronic trading platform offers several tools for assessing the prices of its listed offerings using post-trade information. Users can see recently reported trades for similar securities, compare the offer price with a widely used benchmark curve, and receive alerts if the offering price exceeds the most recently reported trade by a specified threshold. Second, broker-dealers may use available pretrade price information on the same or similar securities to infer market value. In the absence of tradable quotes for outstanding securities, pretrade price information in the municipal securities market includes bids from bids wanted and offer prices. However, unlike post-trade information, pretrade price information is not centralized, not publicly available, and not as available to broker-dealers (and to other market participants) as post-trade price information. To estimate the market value for a security they want to sell, broker-dealers may solicit bids—or may ask a broker’s broker to solicit bids—through a bid wanted.
Broker’s brokers may also provide broker-dealers with otherwise publicly unavailable information on third-party bids and offers from past bids wanted as well as the highest bid and the lowest offer available at a given time for securities in their areas of expertise. For example, a broker’s broker who regularly puts a security out for bid wanted can provide information to broker-dealers on the bids received even if the security has not traded in the last 2 months. Additionally, broker-dealers obtain information about offer prices mainly through their relationships and daily communications with other broker-dealers or broker’s brokers, their investors who may inform them of competing offers, and listed offerings on electronic trading platforms or Bloomberg. Third, information on the credit quality of a security may affect its market value, particularly any changes to the credit quality of the security since it last traded. Broker-dealers can infer the credit quality of a security by reviewing information from issuers’ financial disclosures posted on the EMMA website, which they typically access via Bloomberg. Issuer disclosures that may affect a security’s market value include information on principal and interest payment delinquencies, changes in credit ratings, and unscheduled draws on debt service reserves reflecting financial difficulties, among other factors. Broker-dealers stated that their ability to understand the credit risk of a particular security rested primarily on their ability to obtain timely, comprehensive issuer disclosures. However, they noted that municipal issuers’ disclosures are sometimes outdated and incomplete. They added that conducting an independent assessment of the credit quality of municipal securities has become increasingly important given the decline in the availability and use of bond insurance following the recent financial crisis. 
Fourth, broker-dealers identified overall market conditions and events as important factors to consider when inferring the market value of a security. For example, an increase in interest rates since the last time a security has traded will, other things being equal, reduce its value. Another important factor broker-dealers consider is overall supply and demand. For example, broker-dealers we interviewed told us that they monitored the primary market because investor demand for new issues affects prices for similar securities in the secondary market. Broker-dealers also told us that by being visible and frequently transacting in the market, they could maintain continuous dialogue with their customers about prices, helping to gauge the interest of investors and other broker-dealers in certain securities at given prices. Finally, broker-dealers said that external factors such as “headline risk”—the risk that a news story will affect prices in a market—can also affect prices in the municipal securities market. An example of headline risk cited by broker-dealers we interviewed was a December 2010 report by a financial markets analyst predicting widespread defaults among municipal issuers. This report caused many individual investors to withdraw their money from municipal bond mutual funds, in turn depressing prices. Fifth, in determining prices, broker-dealers we spoke with said they typically consider trading costs associated with every municipal securities trade, such as fees to MSRB, and operational costs. They said that in general, it is less costly for broker-dealers to trade a given volume of securities in a few large blocks than in a large number of small blocks. For example, an institutional broker-dealer with a $2 million block of securities to sell may have to find only one buyer for the securities, while a retail broker-dealer with a similar block of securities might have to find 100 individual investors to purchase these securities in smaller blocks of $20,000.
They explained that the higher costs related to the smaller trades include not only the time and other related costs of finding many more interested buyers, but also the risk that the broker-dealer incurs in holding the securities in its inventory during that time. Last, broker-dealers we spoke with told us that they could spend considerable amounts of time with individual investors explaining the characteristics and relative risks of the securities, answering questions, and complying with regulatory requirements that govern broker-dealer transactions with individual investors. In contrast, they said they do not have to spend as much time with institutional investors, who are typically more knowledgeable and experienced market participants. In order to be profitable, broker-dealers consider these costs when establishing prices. These broker-dealers also noted that they used their professional judgment to determine the weight of any factor in determining the price for a security, given the facts and circumstances surrounding the transaction. For example, while a recent trade price on a similar security may drive a security’s trade price in one case, the same information may become less relevant in the case of a security that has more recently suffered a credit downgrade. We analyzed MSRB data for secondary market trades involving newly issued fixed-rate securities during the period from 2005 through 2010. We found that (1) relative to institutional investors, individual investors generally paid higher prices when buying—and received lower prices when selling—municipal securities; (2) broker-dealers received larger spreads (i.e., the difference between the purchase and selling price of a security, expressed as a percentage of the purchase price) when trading smaller blocks of municipal securities; and (3) the prices that individual investors paid for a given security tended to be more dispersed—that is, to vary more—than the prices that institutional investors paid.
First, our analysis revealed that for broker-dealer sales to investors, the relative price declined on average with trade amount, and that the opposite occurred for broker-dealer purchases from investors. That is, investors paid higher prices when buying smaller blocks of securities from broker-dealers—and received lower prices when selling them—than they paid or received for larger trades. Table 1 shows average relative price for trades involving newly issued fixed-rate securities issued in 2010. The table shows that as trade size increased, relative prices that investors paid for municipal securities declined steadily and relative prices that investors received for selling their securities increased steadily. For example, on average, investors paid 101.9 percent of a security’s reoffering price and received 99.4 percent of a security’s reoffering price for $5,000 worth of securities, while they paid 100.1 percent of a security’s reoffering price and received 100.5 percent of a security’s reoffering price for $2 million worth of securities. As discussed earlier, individual investors typically trade municipal securities in amounts of $100,000 or less, and institutional investors typically trade in amounts of $1 million or more. Consequently, individual investors are likely paying higher prices than institutional investors when they purchase municipal securities and receiving lower prices than institutional investors when they sell municipal securities. Our analysis also found that broker-dealers received larger spreads when trading small blocks of municipal securities. Because individual investors tend to trade smaller amounts than institutional investors, individual investors tended to pay higher spreads than institutional investors. For example, our analysis showed that the average spread for a $20,000 trade of a fixed-rate security in 2010 was around 2 percent and for a $5 million trade around 0.01 percent.
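The spread measure used in this analysis (the difference between the price an investor pays a dealer and the price an investor receives from a dealer, as a percentage of the purchase price) can be sketched as follows, using the Table 1 averages for a small trade as illustrative inputs.

```python
# Sketch of the spread measure described above. Prices are relative
# prices (percent of a security's reoffering price) from Table 1.

def spread_pct(purchase_price, sale_price):
    """Spread as a percentage of the purchase price."""
    return (purchase_price - sale_price) / purchase_price * 100

# Small ($5,000) trade: on average, investors paid 101.9 percent of the
# reoffering price and received 99.4 percent when selling.
small_trade_spread = spread_pct(101.9, 99.4)
print(f"Spread on a small trade: {small_trade_spread:.2f}%")
```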
Table 2 shows how these spreads affect investors’ return as measured by the yield to maturity (the annualized return an investor earns if the security is held until it matures) on two hypothetical trades of $20,000 and $5 million of the same securities purchased by an individual investor and an institutional investor, respectively. In addition, our analysis showed a wider range of prices for smaller trades than for larger trades from 2005 through 2010. That is, prices for larger trades tended to be more concentrated, while prices for smaller trades tended to be more dispersed. To the extent that individual investors trade smaller amounts than institutional investors, this relationship indicates that individual investors were more likely to pay a wider spectrum of prices for a given security than institutional investors. Table 3 shows price dispersion for trades involving newly issued fixed-rate municipal securities issued in 2010. The table shows that prices that investors paid (and, to a lesser extent, received) for municipal securities were more dispersed for smaller trades than for larger trades. These findings are consistent with previous research on municipal securities trades. For example, researchers analyzing trades of municipal securities found that broker-dealers received larger spreads on smaller trades than they received on larger trades. In addition, researchers analyzing trades of recently issued municipal securities found that prices for smaller trades were more dispersed than prices for larger trades. Various factors could contribute to the differences in prices that individual investors receive relative to institutional investors. Some researchers have suggested that these differences are not entirely accounted for by differences in dealer costs between large and small trades. One study suggests the lower spreads that institutional investors pay may also be due to the lack of price transparency in the market, which allows better informed investors to obtain more favorable trade prices.
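The dispersion comparison described above can be illustrated with a minimal sketch. The price lists below are invented for illustration—they are not MSRB data—and standard deviation stands in for whatever dispersion measure the underlying analysis used.

```python
# Illustrative dispersion comparison for trades of the same hypothetical
# security (prices as a percent of the reoffering price). The numbers are
# invented for illustration and are not MSRB data.
import statistics

small_trade_prices = [100.2, 101.5, 99.8, 102.3, 100.9]   # e.g., trades of $25,000 or less
large_trade_prices = [100.4, 100.5, 100.3, 100.5, 100.4]  # e.g., trades of $1 million or more

small_dispersion = statistics.pstdev(small_trade_prices)
large_dispersion = statistics.pstdev(large_trade_prices)

print(f"dispersion (std. dev.) of small-trade prices: {small_dispersion:.2f}")
print(f"dispersion (std. dev.) of large-trade prices: {large_dispersion:.2f}")
# Smaller trades show a wider range of prices for the same security.
```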
Another study adds that institutional investors’ continuous engagement in the market and frequent interaction with broker-dealers also provide them with more bargaining power than individual investors. This study also suggests that the more dispersed prices that individual investors experience could indicate that, in the nontransparent municipal securities market, broker-dealers may have more opportunities to charge higher prices when dealing with less knowledgeable investors. The authors explained that the wider range of prices individual investors receive when they buy or sell the same security could reflect broker-dealers’ ability to detect diverse levels of sophistication among individual investors, with less knowledgeable individuals potentially more likely to trade at less favorable prices than more market-savvy individuals. A third study, however, concludes that differences in prices are not entirely due to the lack of transparency in this market. The study notes that municipal securities often pass through a chain of dealers before being placed with investors and suggests that such interdealer trading may contribute to differences in prices for individual and institutional investors. This study finds that the prices investors pay increase with the amount of interdealer trading that preceded their purchases, and also that more interdealer trading is associated with greater price dispersion. The study also finds that successive interdealer trades tend to involve smaller and smaller trades, thus suggesting that investors trading smaller amounts—individual investors—are likely to pay higher prices and also more dispersed prices than investors trading larger amounts—institutional investors. We found several factors that likely affected individual investors’ ability to gain and use information to independently assess offers and bids they received from their broker-dealers for municipal securities they were interested in purchasing or selling.
While MSRB has increased the amount of information available to all investors through its EMMA website—including price information on past trades and issuer disclosures—institutional investors we spoke with generally had more resources and expertise to assess prices than individual investors. In particular, they had (1) access to more sources of pretrade price information in the form of offerings and bids provided through their large networks of broker-dealers, (2) access to more user-friendly post-trade information through third-party vendors and their networks of broker-dealers, and (3) more market expertise to help them incorporate other available information. First, institutional investors told us that when buying securities they accessed the fragmented municipal market through their large networks of broker-dealers. For the institutional investors we interviewed, these networks range from 30 to over 100 national and regional broker-dealers who compete for their business by providing them with a wide range of municipal securities offerings from the primary and secondary markets. For example, institutional investors typically receive daily secondary market offerings from their broker-dealers through Bloomberg, which provides an interface that allows users to pull together and organize these offerings for easy analysis—an important feature in a large, heterogeneous market where price discovery depends heavily on relative assessments of similar securities. Relative to institutional investors, individual investors typically have access to fewer sources of pretrade price information. Unlike institutional investors that have access to and can compare thousands of daily offerings from their large networks of broker-dealers, individuals typically have brokerage accounts with a few broker-dealers, perhaps only one, that may or may not offer online access to their offerings.
Individuals with online access to a brokerage firm’s offerings can search for securities that meet certain parameters and compare the results. They may be able to repeat this exercise with other broker-dealers, although they are unlikely to obtain competing prices for the same security. In contrast, some investors without access to online offerings told us that they relied on their broker-dealers. These investors can compare prices of similar securities only insofar as their brokers share this information with them. However, some retail broker-dealer firms have taken steps to attract individual investors by combining offerings from electronic trading platforms with their own offerings, thus expanding the pool of securities available to their customers. According to one of the largest municipal electronic trading platforms, which caters to retail broker-dealers, individuals can access the platform’s inventory through several major brokers, most full-service brokers, and many independent financial advisers. Similarly, institutional investors wanting to sell municipal securities generally have multiple ways to obtain pretrade price information in the form of bids. Their access to large networks of broker-dealers and tools for obtaining bids from more than one dealer allows them to contact potential buyers and independently assess the bids they receive for their securities. Institutional investors we interviewed said that they also frequently carried out their own bids wanted by using Bloomberg to solicit bids from broker-dealers in order to gauge demand and potentially receive a bid at which they were willing to sell. Additionally, these institutional investors told us that they might offer securities directly to broker-dealers to find interested parties among the firms or their customers. Finally, institutional investors can ask a broker-dealer to offer the security for sale through a broker’s broker or an electronic trading platform. 
By contrast, individual investors typically do not have independent access to multiple bids and thus may be less able to assess the prices they receive for securities they want to sell. When individuals sell securities, they typically rely on the broker-dealer responsible for the account that houses the securities to find a market. A retail broker-dealer may offer the securities to other broker-dealers or customers or may solicit bids through a broker’s broker or an electronic trading platform. However, because selling small blocks of securities is generally more difficult than selling larger blocks, broker-dealers we interviewed said that they might be able to obtain only a few bids for the individual investor. Although the broker-dealer may explain its process for obtaining bids to the individual, individual investors may have difficulty judging the level of demand for their securities or the level of effort their broker-dealers made to find potential buyers. Similar to broker-dealers, institutional investors we interviewed told us that they can access MSRB’s historical trade information through the EMMA website and centrally through Bloomberg, which allows users to compare post-trade prices for two or more securities that share similar characteristics using a search function. Institutional investors also said that their established relationships and continued negotiation with their broker-dealers often revealed market patterns from post-trade prices that helped them assess prices. For example, some large institutional investors told us that broker-dealers typically let them know about large or otherwise meaningful trades that they believed might affect prices of similar securities before these trades appeared on RTRS (postings must occur within 15 minutes of the trade).
Some of these investors said that even though MSRB’s RTRS system did not disclose total transaction amounts for trades over $1 million—which the system reports as trade amounts of “$1+ million”—they typically were aware of the amount and the price of these large transactions through their relationships with broker-dealers. Market participants have said that this information is important, because prices in large trades affect prices for many other similar securities because of the relative nature of pricing in this market. Institutional investors are also able to benefit from broker-dealers’ commentaries on trades or on demand trends in the market through Bloomberg. In contrast, individuals—who are likely to find Bloomberg prohibitively expensive—can obtain post-trade information on any outstanding security from the EMMA website but may encounter limitations. While individual investors may use the EMMA website to look for past trade prices of a security to assess the current price, this information is likely not useful unless the latest trade is relatively recent, as we have seen. Currently, the EMMA website does not have search capabilities designed to allow users to identify comparable securities. Further, individual investors could misinterpret post-trade pricing data if they were unaware that reported prices for investor transactions reflected dealers’ compensation for the trade as well as the estimated market value of the security. MSRB is currently evaluating improvements that would make the EMMA website more meaningful and useful for individual investors. Institutional investors we spoke with generally employed professional staff, such as credit analysts and traders, who specialized in evaluating credit risk and trading municipal securities and maintained models to evaluate offering prices. Institutional investors we interviewed stated
that in general they could form an immediate initial judgment about the price of a municipal security because they were entrenched in the market on a daily basis and had accumulated expertise to inform their decision making. These investors told us that they applied a wealth of market history to determine a security’s relative value. They said that, for example, they knew the approximate price at which an A-rated hospital security in California with a 30-year maturity is trading and could update prices for the same or similar securities by looking at technical features (like call features), the issuers’ financial profile, and the market strength on the day of the trade, among other things. By contrast, individual investors have access to issuer financial disclosures through MSRB’s EMMA website and other publicly available issuer information but may lack the expertise to understand and update prices using this information. Besides issuer disclosures, individuals have access to free investor information websites, such as the Securities Industry and Financial Markets Association’s (SIFMA) investinginbonds.com, which makes available various market benchmark yield curves, among other things. Many of these resources may also be available to individuals through their broker-dealers’ online websites. However, some institutional investors we spoke with believed that professional expertise was required to use this information to assess prices, especially for securities that had not traded recently. For example, even with timely access to issuer disclosures, it is not clear that individual investors with relatively limited market expertise would be able to estimate how a rating downgrade translated into a lower price for a security. Additionally, individual investors may undertake varied degrees of research during the few hours that they typically have to make an investment decision. 
For example, some investors we spoke with did not look at historical trade information or issuer disclosure information when they bought bonds and instead relied on the recommendation of their broker-dealer. Others, however, chose a few potential securities from their broker-dealer’s online offerings and checked historical trade information and disclosure information for those securities. One of the more knowledgeable among the individual investors we spoke with stated that he treated the last interdealer trade price as a benchmark for pricing and used this information with varying degrees of success to negotiate prices with brokers. For example, one individual said he had successfully used the last traded price to bargain for better prices with his broker and found that if he was buying bonds for a par value of $200,000, for example, he might be able to save $100 (or 0.05 percent of par value). Market participants explained that individual investors faced additional challenges in independently assessing the value of a security since the decline in the use and availability of bond insurance following the recent financial crisis. In the past, individual investors could choose to buy an insured security and rely on the insurer’s guarantee without fully understanding the security’s underlying value. Individual investors may review issuer disclosures through the EMMA website to help in independently assessing risk, but some individual investors have expressed frustration at their inability to identify and understand the relevant pieces of information from the typically long and technical issuer disclosures. SEC and MSRB have ongoing studies examining the municipal security market. 
In May 2010, SEC announced that it was beginning a review of the municipal securities markets and intended to examine pretrade price transparency, among other issues, using a series of field hearings. At the conclusion of the review, SEC staff are to prepare a publicly available report recommending whether specific changes to laws, regulation, or private sector best practices are needed to better protect municipal securities investors. SEC staff anticipate that the report will be finalized and made public in 2012. In December 2010, MSRB also announced that it was undertaking a study of the municipal securities market, including a review of market structure and trading patterns. MSRB stated that the study would include a review of transaction costs, price dispersion, and other market data and was intended to help MSRB assess whether the market was operating as efficiently and fairly as possible. It is also intended to assist MSRB in evaluating whether pricing and liquidity in the market could be improved with higher levels of pretrade price transparency. MSRB staff said that the initial phase of the study would likely be completed in 2012. MSRB said that in considering whether to recommend potential changes in terms of market structure or disclosures that would improve price transparency, the costs and benefits would need to be weighed carefully. Discussions to improve pretrade price transparency in the municipal securities market focus on whether and how to make bid and offer information on municipal securities more widely available and how to improve individual investors’ access to the market. In an October 2010 speech discussing SEC’s review of the municipal securities markets, one commissioner noted that post-trade transparency in this market had improved considerably since MSRB’s implementation of real-time trade reporting and the EMMA website.
However, because of the low liquidity levels of many municipal securities, these trade data could be weeks or months old and therefore not helpful to investors. In part for this reason, the commissioner said, improving pretrade transparency was an important goal. MSRB staff observed that only a few limited venues allowed even knowledgeable and experienced market participants such as broker-dealers to see bid and offer information for municipal trades. They added that because the municipal securities market operates through over-the-counter trading, even the broker-dealers could not see bid and offer information for the entire market. One challenge to improving pretrade price transparency is determining whether and how to make this information available to the general public in a timely manner, particularly for thinly traded securities. That is, given that most municipal bonds are traded infrequently once they have been initially distributed, two-sided quotes are not continuously available in this market. One suggestion that arose was to create a national listing service where all municipal broker-dealers could list their entire municipal securities offerings for public viewing and allow investors to search for securities that fit their investment parameters and to compare prices and yields. To make selling securities easier for investors, one field hearing participant suggested allowing investors to place bids on offerings, while another suggested establishing a limit order mechanism for this market. These suggestions would necessitate creating a centralized trading venue. However, as of January 2012, market participants had not developed detailed proposals that describe the feasibility or offer cost- benefit analyses of such changes to the structure of the market. A limit order is an order to buy or sell a security at a specified price. 
For example, one market participant noted that when buying securities, broker-dealers and investors currently have access to a limited set of offerings in the market, and that when selling securities, they currently only have access to a subset of potential bidders for the securities. This market participant said that an exchange could broaden both broker-dealers’ and investors’ access to bids and offers for municipal securities, and that such centralized transparent aggregation of dealer and individual investor interest would lead to increased liquidity, even in the absence of two-sided quotes for most bonds. Further, this market participant said that an exchange would promote pretrade price transparency through the public dissemination of bid and offer information. Other market participants agreed that an exchange would broaden individual investors’ access to the market and noted that an exchange would allow them to more easily find offerings for comparable securities with the characteristics they wanted. Furthermore, one market expert stated that an exchange would provide more liquidity to investors by taking advantage of existing technology to identify potential interested buyers for a given security, even in the absence of two-sided quotes. However, broker-dealers and large institutional investors we interviewed stated that, in this fragmented market driven by supply and demand, relationships and direct negotiation were the key to making markets and determining prices. Broker-dealers also pointed to the large number of heterogeneous and relatively illiquid municipal securities that would make it difficult to establish ready two-sided markets for a given security. Additionally, large institutional investors we spoke with stated that a municipal securities exchange may not be feasible or advisable because of the costs of developing a central meeting place that could incorporate these unique attributes of the market.
Broker’s brokers also thought the negotiated nature of the market limited the feasibility of an exchange and noted that demand for their services had increased greatly with the decline in the availability and use of bond insurance. They said that because broker-dealers could no longer rely on the homogenizing effects of bond insurance, their need for reliable information on, for example, specialized securities’ credit and sector trends had increased. MSRB has issued rules addressing broker-dealers’ pricing, trade reporting, and clearance and settlement responsibilities with respect to municipal securities transactions. However, because MSRB does not have enforcement authority over broker-dealers, FINRA, federal banking regulators, and SEC conduct broker-dealer oversight and enforce MSRB rules. FINRA oversees 98 percent of broker-dealers registered with MSRB, and the federal banking regulators (OCC, FDIC, and the Federal Reserve) oversee the remaining 2 percent, which we refer to in this report as bank dealers. SEC’s OCIE provides oversight of MSRB and FINRA’s regulatory activities. We found that FINRA and the banking regulators did not identify many violations of the pricing and trade reporting rules from 2006 through 2010 and that settlement failures on municipal securities transactions were rare. We also found that, although OCIE had conducted multiple FINRA district office inspections and broker-dealer examinations as part of its municipal market oversight, it had not inspected MSRB or FINRA’s fixed-income program since 2005 and lacked a program for conducting interim monitoring to assess risks at these SROs. Several MSRB rules govern broker-dealers with regard to municipal trade pricing, reporting, and clearance and settlement. Rule G-30: MSRB Rule G-30 requires that broker-dealers charge fair and reasonable aggregate prices to customers (individual and institutional investors) for buying and selling securities.
In principal transactions, in which broker-dealers take securities into their own accounts, the aggregate price reflects not only the market value of the security, but also the compensation the broker-dealer receives on the transaction, either a markup or markdown from the security’s prevailing market price. A markup is compensation for selling a security to a customer, while a markdown is compensation for buying a security from a customer. A security’s prevailing market price is its interdealer market value—or the price at which a broker-dealer would sell or buy the security to or from another broker-dealer—at the time of the customer transaction. Most broker-dealers engage in municipal securities transactions in a principal capacity, and as such are not required to break out the markup or markdown from their aggregate prices. Figure 1 shows how markups and markdowns are calculated and illustrates the markups and markdowns in hypothetical municipal securities transactions. MSRB has stated that, in order to be fair and reasonable, the price of a security must bear a reasonable relationship to its prevailing market price. Both the price and the markup or markdown must be fair and reasonable in order to satisfy Rule G-30. In other words, a broker-dealer cannot charge the prevailing market price but add an excessive markup and still be in compliance with the rule. Citing the heterogeneous nature of municipal securities transactions and broker-dealers, MSRB has not set specific numeric guidelines for acceptable markups or markdowns. Since the early 1970s, however, several SEC cases and opinions have addressed instances in which broker-dealers charged excessive aggregate prices. Appendix V describes the key features of several of these cases. Rule G-14: Since 2005, MSRB Rule G-14 has required real-time reporting of most municipal securities trades for transparency and regulatory purposes. 
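The markup and markdown arithmetic described above (and depicted in the report’s figure 1) can be sketched in a few lines. The prices below are hypothetical, and expressing dealer compensation as a percentage of the prevailing (interdealer) market price is our convention for illustration:

```python
# Sketch of the markup/markdown arithmetic described above, with
# hypothetical prices. Expressing compensation as a percentage of the
# prevailing (interdealer) market price is our convention for illustration.

def markup_pct(customer_buy_price: float, prevailing_price: float) -> float:
    """Markup: dealer compensation for selling a security to a customer."""
    return (customer_buy_price - prevailing_price) / prevailing_price * 100

def markdown_pct(customer_sell_price: float, prevailing_price: float) -> float:
    """Markdown: dealer compensation for buying a security from a customer."""
    return (prevailing_price - customer_sell_price) / prevailing_price * 100

# Hypothetical prevailing (interdealer) market price: $100.00 per $100 of par.
print(round(markup_pct(102.00, 100.00), 2))   # 2.0: dealer sold to a customer at $102.00
print(round(markdown_pct(98.50, 100.00), 2))  # 1.5: dealer bought from a customer at $98.50
```

Because most broker-dealers trade in a principal capacity and quote only an aggregate price, the customer sees $102.00 or $98.50 but not the 2.0 or 1.5 percent compensation embedded in it.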
With few exceptions, Rule G-14 requires broker-dealers to report all trades to an RTRS portal “promptly, accurately, and completely,” which generally means recording transactions and their relevant details within 15 minutes of the time of trade. In addition, Rule G-14 states that broker-dealers must have a current Form RTRS on file with MSRB with the information necessary to ensure that their trade reports can be processed correctly. There are three ways for broker-dealers to report their trades to the RTRS. First, NSCC operates an RTRS portal that may be used for any trade record submission or trade modification. Second, broker-dealers can report customer transactions (but not most interdealer transactions) to MSRB’s web-based RTRS portal. Third, broker-dealers must report most interdealer transactions through NSCC’s Real-Time Trade Matching (RTTM) portal, which feeds into the RTRS. MSRB rules also generally require municipal securities transactions to settle by the third business day following the trade date. Specifically, MSRB Rule G-15 sets out settlement dates with respect to broker-dealers’ transactions with customers, while Rule G-12 sets out settlement dates for interdealer municipal transactions. MSRB makes trade data submitted by broker-dealers through the RTRS available to FINRA, the federal banking regulators, and SEC for their regulatory activities. In January 2010, MSRB launched Regulator Web, or RegWeb, a secure web-based portal to municipal securities transaction data. RegWeb provides regulators with consolidated access to information including real-time, individual firm transaction data as well as dealer data quality reports (monthly reports listing each dealer’s late, canceled, and amended trade statistics); monthly reports on system outages and other statistics; Forms RTRS that firms have filed with MSRB; a list of broker-dealers registered with MSRB; and other information. FINRA primarily employs an automated surveillance program and conducts examinations of broker-dealers to enforce MSRB rules related to pricing and trade reporting.
FINRA uses automated surveillance to monitor all municipal broker-dealers that are FINRA members for compliance with MSRB pricing and trade reporting rules. FINRA’s automated surveillance also includes activity related to the bank dealers under the jurisdiction of the federal banking regulators. Using programmed parameters, FINRA assesses RTRS data for potential violations of MSRB rules, including G-30 and G-14. For example, FINRA has surveillance programs that identify transaction prices that appear to be outliers compared with prices in the rest of the market. FINRA analysts follow up on alerts generated by these programs with broker-dealers under its jurisdiction in accordance with written policies and procedures and, in certain circumstances, will refer potential violations by bank dealers to the appropriate federal banking regulator for further investigation. FINRA also assesses compliance with the MSRB pricing and trade reporting rules through routine and cause examinations of municipal broker-dealers. Examiners use an electronic examination module that includes specific instructions for collecting documentation, selecting samples, and running data reports, and for other aspects of their examinations. FINRA’s surveillance and examinations can result in a variety of actions against a firm that violates a rule, such as an informal warning or a monetary penalty, among other actions. The federal banking regulators rely primarily on examinations to monitor bank dealers’ compliance with MSRB pricing and trade reporting rules. Officials from these agencies explained that they did not have formal surveillance programs designed to monitor bank dealers’ compliance with MSRB rules. Rather, they periodically review MSRB reports on data quality and pricing volatility in RegWeb, often as part of their preparation for on-site examinations. 
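A minimal sketch of the kind of price-outlier screen described above follows. The structure, the reference price, and the 5 percent tolerance are all assumptions for illustration; FINRA’s actual surveillance parameters are not described in this report.

```python
# Minimal sketch of a price-outlier screen of the kind described above:
# flag trades whose reported price deviates from a reference price (e.g.,
# derived from recent trades in the same or similar securities) by more
# than a tolerance. The 5 percent threshold is an assumption for
# illustration, not FINRA's actual parameter.

def flag_outliers(trades, reference_price, tolerance_pct=5.0):
    """Return the ids of trades whose price deviates from reference_price
    by more than tolerance_pct percent."""
    flagged = []
    for trade_id, price in trades:
        deviation_pct = abs(price - reference_price) / reference_price * 100
        if deviation_pct > tolerance_pct:
            flagged.append(trade_id)
    return flagged

# Hypothetical reported trade prices for one security (percent of par).
trades = [("T1", 100.4), ("T2", 108.2), ("T3", 99.1), ("T4", 93.5)]
print(flag_outliers(trades, reference_price=100.0))  # ['T2', 'T4']
```

In practice, a flagged trade is only an alert: as the report notes, analysts then follow up with the broker-dealer, since a price that looks like an outlier may be justified by the facts and circumstances of the transaction.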
In addition, although the federal banking regulators all stated that such instances are rare, FINRA may refer to them potential violations by bank dealers that it identifies through its automated surveillance program. During their on-site examinations, bank examiners generally take samples of bank dealers’ transactions and review them for compliance with Rules G-30 and G-14. Their examinations can result in corrective actions, among other responses. As part of its oversight of FINRA’s regulatory operations, OCIE assesses broker-dealers’ compliance with MSRB Rules G-30 and G-14 through broker-dealer examinations. OCIE also may review for compliance with these rules in other types of examinations, such as cause examinations and risk-targeted examinations. Because OCIE uses a risk-based approach to determine areas of focus in these examinations, examiners might not always check for compliance with Rules G-30 and G-14. When they do, however, they follow OCIE’s written examination procedures. OCIE’s examinations can result in actions such as a deficiency letter to the firm or referral to SEC enforcement staff for a more formal review. See appendix VI for a more detailed description of how FINRA, the federal banking regulators, and OCIE examine for compliance with Rules G-30 and G-14. MSRB and the other regulators coordinate in various ways to facilitate effective enforcement of Rules G-30 and G-14, as well as other MSRB rules. For example, MSRB officials provided agendas demonstrating that SEC, MSRB, and FINRA have held three semiannual meetings since December 2010, as mandated by the Dodd-Frank Act, to describe their work in the municipal securities market and to discuss any issues related to regulation, including rule interpretation, examinations, and enforcement of MSRB rules. 
According to documentation provided by MSRB officials, MSRB and FINRA also meet regularly and share information in accordance with a memorandum of understanding, and MSRB meets with SEC several times a year and with the federal banking regulators twice a year to discuss various municipal market issues, with a focus on MSRB rule interpretations, amendments, and guidance. In addition to these formal meetings, SEC, MSRB, and FINRA staff told us that they maintained daily or weekly informal communication to discuss rule filings or interpretations; surveillance, examination, and enforcement issues; technology issues; and other pertinent matters. As described earlier, MSRB also shares RTRS data and other information with the regulators via the RegWeb system. Finally, MSRB officials stated that they provide a variety of training opportunities to examiners and other staff of SEC, FINRA, and other regulators to promote consistency in the enforcement of MSRB rules. According to our review of regulators’ surveillance and examination data, FINRA, the federal banking regulators, and OCIE have identified few violations of Rule G-30 by broker-dealers and bank dealers from 2006 through 2010. Regulatory officials told us that they considered a variety of factors when determining whether a broker-dealer had charged unfair or unreasonable prices, markups, or markdowns. We also found that violations of Rule G-14 were not systemic and that the industry average for late reported trades had decreased substantially since 2005. Finally, sample data from the agency that oversees clearance and settlement of municipal trades indicate that municipal transaction settlement failures are rare. Regulators cited a small number of violations of Rule G-30 during the period from 2006 through 2010. Specifically, FINRA opened 416 reviews based on alerts related to potential G-30 violations occurring during the 5-year review period. FINRA had completed 343 of those reviews by June 2011.
Of those 343 reviews:
- FINRA determined that 267 (78 percent) warranted no further review. FINRA officials explained that they had investigated the prices and markups for the firms in question and found that violations had not actually occurred or were too minor to warrant further action.
- Eleven reviews (3 percent) resulted in a cautionary action (an informal warning to the broker-dealer that similar violations in the future could result in formal disciplinary actions).
- One review (less than 1 percent) resulted in a cautionary action for a violation of another MSRB rule.
- Sixty-four (19 percent) were referred internally for potential disciplinary action.

Of the 64 reviews FINRA’s surveillance group referred internally, 22 were closed as of June 2011. Of those 22 reviews:
- Two (9 percent) warranted no further review.
- Eight (36 percent) resulted in a cautionary action.
- Twelve (55 percent) resulted in a Letter of Acceptance, Waiver and Consent (a disciplinary action in which the broker-dealer consents to findings and the imposition of sanctions but neither admits nor denies the violations).

Likewise, out of 5,764 examinations with a municipal securities component that FINRA conducted from 2006 through 2010, 51 examinations (less than 1 percent) identified G-30 violations that resulted in a formal or informal action. Of those 51 examinations:
- Thirty-seven (about 73 percent) resulted in a cautionary action.
- Eleven (about 22 percent) resulted in a compliance conference (a more serious type of informal action that involves a meeting between FINRA management and the broker-dealer firm to discuss the violations).
- Three (about 6 percent) were referred internally for potential disciplinary action.

We reviewed 25 of the 51 examinations and found that the markups and markdowns that FINRA questioned in those examinations ranged from about 2 percent to about 10 percent. 
The federal banking regulators did not report any G-30 violations in the 87 total bank dealer examinations they conducted from 2006 through 2010. OCIE staff also said that, for the examinations they conducted during this time frame in which they assessed broker-dealers for G-30 compliance, they observed a relatively low rate of G-30 violations. Regulators consider multiple factors in determining broker-dealers’ compliance with Rule G-30. For example, Rule G-30 specifies four factors that broker-dealers must consider when determining a fair and reasonable price, including the broker-dealer’s best judgment as to the fair market value of the securities at the time of the transaction. In addition, in its interpretive notices, MSRB has identified other factors that may be relevant to this determination, such as the resulting yield—or annual rate of return—of the security to a customer. To determine whether a broker-dealer is in compliance with Rule G-30 on a specific trade, regulators must consider how well the broker-dealer has applied all of the relevant factors as well as the facts and circumstances of the transaction. Table 4 lists the written factors that regulators consider in enforcing this rule. For instance, as part of their consideration of whether broker-dealers use their best judgment in pricing securities, OCIE examiners stated that when they found a potentially excessive markup, they checked to see whether the market had moved or significant information about the issuer had become available just before the transaction. We observed how examiners assessed broker-dealers’ application of some of these factors in our review of selected FINRA examinations with G-30 violations. For example, FINRA examiners identified six potential G-30 violations—all in the form of excessive markups—at one firm they examined. 
When they asked the firm to explain the markups, which ranged from about 4 percent to nearly 10 percent, the firm stated that all of the transactions in question involved low-rated or unrated bonds and occurred in late 2008, when the market was highly volatile. The firm also stated that because it had no customers for the bonds at the time it purchased them, it had assumed additional risks, and that no comparable interdealer trades were available to establish the prevailing market price. While FINRA examiners concluded that four of the six transactions were fair and reasonable, they cited G-30 violations on the two remaining transactions based on the combination of high markups, firm profits, and the transactions’ proximity in time to interdealer purchases. Illustrating the importance of reviewing the individual facts and circumstances in each transaction, examiners deemed one of the highest markups acceptable because the security had been trading within a wide range of prices around the time of the trade in question and the transaction resulted in a high yield to the customer relative to those of comparable securities. In another examination, FINRA examiners questioned a trade in which a firm bought 10 bonds from a customer at a price substantially lower than the last reported trade, which had occurred about a week earlier. The firm indicated that the security’s credit rating had dropped within that time period. Given this downgrade and the fact that the bonds in question continued to trade in the lower range for about a month afterward, FINRA examiners determined that the firm had set a fair and reasonable aggregate price and thus had not violated Rule G-30. In April 2010, MSRB proposed draft guidance that would provide greater specificity for broker-dealers acting as principals in determining a security’s prevailing market price. 
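The markup percentages examiners questioned in these examinations follow from a simple calculation: the customer’s price over a benchmark price, here labeled the prevailing market price. This is a minimal arithmetic sketch with hypothetical prices, not FINRA’s examination methodology:

```python
# Hypothetical markup calculation of the kind examiners question under
# Rule G-30; the function name and prices are illustrative, not from FINRA.

def markup_pct(customer_price, prevailing_market_price):
    """Markup as a percentage of the prevailing market price."""
    return (customer_price - prevailing_market_price) / prevailing_market_price * 100

# A customer pays 104.0 for a bond whose prevailing market price is 100.0.
print(round(markup_pct(104.0, 100.0), 2))  # 4.0 -- within the 4-10 percent range FINRA questioned
```

Whether a given markup is excessive still depends on the facts and circumstances discussed above; the arithmetic only flags candidates for review.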
Specifically, MSRB’s Regulatory Notice 2010-10 proposed a hierarchical approach that is intended to harmonize with FINRA’s approach to pricing nonmunicipal debt securities. The proposed approach would first have the broker-dealer use as the prevailing market price his contemporaneous costs or proceeds—in other words, his costs or proceeds from a transaction recent enough that it would be expected to reflect the current market price for the security. If a broker-dealer wished to use a source other than his contemporaneous costs or proceeds to determine the prevailing market price, he would be required to search through a hierarchy of relevant transactions to seek other appropriate comparison prices. In addition, broker-dealers would be required to document how they determined the prevailing market price in cases where they did not use contemporaneous costs or proceeds. MSRB officials told us that the proposed method would likely make it easier for regulators to conduct surveillance and enforcement for Rule G-30, because it would provide a relatively mechanical way to determine a security’s prevailing market price. However, broker-dealers have expressed concerns about the proposed method, citing, among other issues, increased burdens and risks to liquidity. As of January 9, 2012, MSRB had not finalized the proposed guidance. Although regulators identified G-14 violations during our review period, these trade reporting issues did not appear to be systemic. According to data we received from MSRB, from February 2005 (the month after the 15-minute reporting requirement took effect) through July 2011, the monthly industry average for late trades declined from approximately 7 percent to less than 1 percent. The average rate of late trades during this time frame was less than 2.5 percent. FINRA officials noted that in 2005 FINRA had assigned a dedicated team to conduct automated surveillance reviews of the municipal market, with an initial focus on late trade reporting. 
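The “relatively mechanical” character of the proposed hierarchy can be sketched as a fall-through lookup: use contemporaneous costs or proceeds when available, otherwise work down an ordered list of other pricing sources. This is a hedged illustration of the general logic only; the source labels and their ordering below are hypothetical and do not reproduce the notice’s actual hierarchy:

```python
# Illustrative-only sketch: prefer the dealer's contemporaneous cost or
# proceeds as the prevailing market price; otherwise fall through an ordered
# list of other pricing sources. Labels are hypothetical, not MSRB's hierarchy.

def prevailing_market_price(contemporaneous_cost, other_sources):
    if contemporaneous_cost is not None:
        return contemporaneous_cost, "contemporaneous cost or proceeds"
    for label, price in other_sources:  # ordered most to least relevant
        if price is not None:
            # Per the proposal, using a non-contemporaneous source would
            # also require documenting how the price was determined.
            return price, label
    return None, "no basis found"

price, basis = prevailing_market_price(
    None,
    [("contemporaneous interdealer trades", None),
     ("trades in similar securities", 99.5)])
print(price, basis)  # 99.5 trades in similar securities
```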
FINRA officials believe that these surveillance efforts likely played a role in the decline of the industry G-14 violation average. FINRA opened 721 reviews based on alerts related to potential G-14 violations occurring from 2006 through 2010. FINRA’s surveillance group had completed 621 of those reviews as of June 2011. Of the 621 reviews:
- FINRA determined that 323 (52 percent) required no further review. As with the reviews stemming from G-30 alerts, FINRA officials explained that they had investigated the transaction reports for the firms in question and found that violations had not actually occurred or were too minor to warrant further action.
- Fifty-five (9 percent) resulted in a cautionary action for a G-14 violation.
- Ten (about 2 percent) resulted in a cautionary action for violations of other MSRB rules.
- Another 233 (about 38 percent) were referred internally for potential disciplinary action.

Of the 233 reviews FINRA’s surveillance group referred, 147 had been completed as of June 2011:
- Eight (5 percent) warranted no further review.
- Fourteen (about 10 percent) resulted in a cautionary action.
- Three (2 percent) resulted in a minor rule violation plan letter (an informal disciplinary process that allows FINRA to assess fines of less than $2,500).
- The remaining 122 (83 percent) resulted in a Letter of Acceptance, Waiver and Consent.

Also from 2006 through 2010, out of the 5,764 examinations they conducted with a municipal securities component, FINRA examiners cited G-14 violations resulting in a formal or informal action in 910 (about 16 percent). Of those 910 examinations:
- Some 699 (about 77 percent) resulted in a cautionary action.
- Another 136 (15 percent) resulted in a compliance conference.
- Forty-two (about 5 percent) resulted in a Letter of Acceptance, Waiver and Consent.
- The remaining 33 examinations (about 4 percent) resulted in minor rule violation plans, internal referrals for potential disciplinary action, or offers of settlement. 
In the sample of 32 FINRA examinations we reviewed with G-14 violations, we found that the violations stemmed from a variety of sources, including human error, deficient procedures, a firm’s failure to submit or update a Form RTRS, or technical malfunctions, among other reasons. However, human error and deficient procedures were the most commonly cited causes. Regulators noted that late or inaccurate trade reporting was relatively simple to identify through surveillance and examinations and that broker-dealers could be cited for a G-14 violation based on as little as one or two late or inaccurately reported trades. For example, in one of the FINRA examinations we reviewed, examiners took a sample of 60 trades and found that 2 were reported to MSRB with the incorrect price. A representative of the broker-dealer firm told examiners that the firm had corrected the trades the day they were entered but that an error had led to the suppression of the amended information. FINRA counts instances like this as G-14 violations and requires broker-dealers to update MSRB with the correct information if possible. The federal banking regulators cited G-14 violations in 8 of the 87 bank dealer examinations they conducted. They generally responded to these violations with corrective action requirements. On the basis of examinations they conducted during this time frame in which they assessed broker-dealers for G-14 compliance, OCIE staff agreed that G-14 violations appeared to have decreased in recent years and said that the inadvertent late trades they continued to see were sometimes attributable to factors such as breakdowns in trade reporting systems. OCIE staff told us that settlement failures typically appeared to represent a low percentage of municipal transactions cleared through NSCC. 
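The 15-minute reporting requirement that underlies many G-14 findings is straightforward to check mechanically, which is consistent with regulators’ observation that late reporting is relatively simple to identify through surveillance. A minimal sketch, with hypothetical timestamps:

```python
# Minimal late-trade check against MSRB's 15-minute reporting requirement
# (in effect since January 2005). Example timestamps are hypothetical.
from datetime import datetime, timedelta

def is_late(execution_time, report_time, limit=timedelta(minutes=15)):
    """True if the trade was reported to RTRS after the reporting deadline."""
    return report_time - execution_time > limit

trade_executed = datetime(2010, 6, 1, 10, 0)
print(is_late(trade_executed, datetime(2010, 6, 1, 10, 12)))  # False: within 15 minutes
print(is_late(trade_executed, datetime(2010, 6, 1, 10, 40)))  # True: reported 40 minutes later
```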
According to data gathered during a 5-day trading period in June 2011 by NSCC, municipal trade settlement failures accounted for approximately 2.1 percent of the total dollar value of all NSCC settlement failures across all markets for that time period. Since 2005, OCIE has not inspected FINRA’s fixed-income surveillance program or MSRB, both because of staffing limitations and because of changes to its inspection approach. OCIE’s written inspection guidelines call for inspections of MSRB and FINRA’s regulatory programs. OCIE did not have a fixed schedule for examining MSRB, but its SRO Inspection Guidelines stated that the office generally inspected each SRO under its jurisdiction every 1 to 4 years. Until 2010, OCIE conducted routine inspections of various aspects of FINRA’s operations—including district office programs, arbitration, customer communication, central review, and financial operations—every 2 to 4 years in accordance with its SRO inspection guidelines. Surveillance, examination, and enforcement programs were typically components of these routine inspections, but municipal securities were not included in each inspection cycle. From 2000 through 2010, mostly in accordance with a 3-year cycle, OCIE conducted 49 inspections of FINRA’s district offices, which conduct the majority of broker-dealer examinations. As part of these inspections, OCIE examiners assessed whether FINRA examined municipal securities broker-dealers at least once every 2 years and reviewed a sample of FINRA’s workpapers to determine whether FINRA examiners thoroughly reviewed broker-dealers for compliance with all MSRB rules and other applicable rules and regulations. However, the district office inspections are not intended to address FINRA’s surveillance activities or policies and procedures for its municipal market regulatory programs. In 2010, OCIE began transitioning to a risk-based SRO inspection approach in conjunction with a comprehensive assessment of OCIE’s structure and functions. 
As such, OCIE will no longer conduct inspections according to a routine schedule but rather based on issues that represent the greatest risks to investor protection and market integrity. OCIE has not inspected FINRA’s fixed-income surveillance programs or MSRB since 2005. OCIE’s inspections of FINRA and MSRB in 2005 produced findings related to their municipal securities oversight activities. While the two SROs responded to OCIE’s findings and recommendations with corrective actions or, in a few cases, rebuttals, OCIE has not yet confirmed through on-site inspections whether they have adequately addressed these recommendations. OCIE staff only recently began a new inspection of FINRA that will encompass its fixed-income surveillance program, including the municipal trade reporting and markup reviews. OCIE has not yet begun another inspection of MSRB. OCIE staff said that staffing constraints had prevented them from starting another inspection sooner to review FINRA’s fixed-income surveillance program and MSRB. According to OCIE data, staffing of OCIE’s Market Oversight group, which is responsible for inspections of FINRA and MSRB and other SROs that are not clearing agencies, has declined by 5 employees (about 12 percent) since fiscal year 2007—when we last reported on staffing of this group—and by 24 employees (nearly 40 percent) since fiscal year 2005. As shown in table 5, as of September 2011, the Market Oversight group consisted of 38 active staff, including 12 managers, 25 professional staff (examiners), and 1 support staff. According to OCIE staff, the majority of staff members in the Market Oversight group have a law degree, and 11 people have prior experience in fixed-income issues. Furthermore, OCIE staff stated that positions in the Market Oversight group are a mixture of entry-level and senior positions, with staff typically staying approximately 4 to 5 years before going elsewhere within or outside of SEC. 
As of September 2011, according to OCIE staff, the Market Oversight group had seven vacant slots, but an SEC hiring freeze limited OCIE’s ability to fill most of these positions. (See GAO-08-33.) According to the information OCIE provided, there were 27 professional staff in the group as of September 16, 2011. However, that number includes two people who were detailed to other offices in the agency and were not actively working in the Market Oversight group. Therefore, we list the number of available professional staff as 25 rather than 27. Although OCIE is transitioning to a risk-based approach to SRO inspections, it lacks sufficient data on the SROs’ fixed-income regulatory activities that it could use to inform this approach. OCIE’s mission includes protecting investors and ensuring market integrity through risk-based strategies that, among other things, are designed to improve compliance and monitor risk. However, OCIE currently engages in limited monitoring of the SROs between inspections and may not have sufficient sources of information to allow it to effectively assess the risk level of SROs’ regulatory programs. OCIE staff told us that they plan to convene all of the SROs in early 2012 to, among other things, clarify expectations relating to their activities. One of the objectives of the SRO outreach will be to share issues that OCIE identified in assessments it conducted of all equity and options SROs in 2011 that have implications across the SROs. However, this effort will not provide staff with information on the quality of ongoing SRO oversight in any particular area—such as fixed-income surveillance—between inspections. OCIE staff also participate in the meetings mandated by the Dodd-Frank Act that include SEC, MSRB, and FINRA. 
While such communication is essential to helping ensure uniform interpretation of MSRB rules and discussing recent trends in enforcement, among other things, it does not provide insight into the ongoing effectiveness of SRO regulatory programs. We found that OCIE received and reviewed quarterly reports from FINRA on its regulatory activities related to municipal securities markups and markdowns. However, an OCIE staff member told us that the reports, which present aggregate statistics, reveal little about the effectiveness of FINRA’s activities in this area. For a risk-based inspection approach to be effective, it is essential for OCIE to maintain ongoing monitoring and communication with the SROs to keep abreast of the current operations and to use this information to update its supervisory strategies. We note that the review period OCIE covered in its 2005 FINRA inspection predated the recent financial crisis and ensuing volatility in the municipal securities market. Although OCIE is now conducting an inspection of FINRA that encompasses its fixed-income surveillance program, it had not obtained any information since its last inspection about the quality of FINRA’s market oversight. Further, MSRB implemented RTRS in 2005 and began making real-time trade price information freely and publicly available on the EMMA website in 2008, but OCIE has not performed any independent reviews or otherwise obtained information to establish the quality or reliability of the data in this system, despite the fact that market participants use it for pricing purposes and that SEC, FINRA, and the federal banking regulators rely heavily on the data to carry out their regulatory activities. SEC proposed Rule 17a-26 in Release No. 34-50699 on November 18, 2004; the rule was part of a larger package of proposed rules and amendments related to fair administration and governance of SROs, which SEC has not finalized. 
Finalizing proposed Rule 17a-26 would not only provide insight into the effectiveness of the SROs’ regulatory activities but would also provide a mechanism for OCIE to regularly collect and analyze information from the SROs. Without collecting information on an ongoing basis that provides insight into the effectiveness of SRO regulatory programs, OCIE may not be able to identify anomalies or changes in the operations that warrant more immediate inspections. OCIE is transitioning to a risk-based approach for its SRO inspection program and is convening a meeting with the SROs in 2012 to share issues staff have already identified that have implications across the SROs. Among other things, the risk-based approach is intended to improve compliance and monitor risk. While OCIE’s efforts to implement a risk-based inspection program have the potential to better target scarce resources to high-risk areas, its limited monitoring of the SROs between inspections could result in its missing potential new or ongoing issues with their regulatory programs. For example, OCIE’s last inspection of FINRA’s fixed-income surveillance program predated the recent financial crisis and ensuing volatility in the municipal securities markets. Although OCIE obtained some information on FINRA’s examination program through its district office inspections and broker-dealer examinations, its lack of a structured mechanism for monitoring the quality of FINRA’s fixed-income surveillance during that time means that OCIE will not have a full picture of how effective FINRA was in surveilling for and detecting violations of MSRB rules until it finishes its 2011 inspection—more than 3 years after the financial crisis began and more than 6 years since its last inspection. Proposed Rule 17a-26 is an example of a mechanism that OCIE could use to obtain meaningful information for ongoing monitoring of SRO regulatory programs for the municipal securities market. 
This proposed rule would compel the SROs to review, on an annual and a quarterly basis, the operation and performance of their regulatory programs and report the results of these reviews to SEC. Finalizing this rule—revised as necessary to reflect OCIE’s current informational needs—would allow OCIE examiners to formally collect and analyze interim data on the operation and effectiveness of SROs’ programs and potentially facilitate ongoing oversight of SROs between inspections. Such information could provide regulators with more up-to-date information on the state of the market and SROs’ regulatory efforts. In addition, it could help OCIE meet its goal of identifying high-risk areas and leverage its staff resources appropriately. Unless OCIE takes steps to gather and analyze information on the SROs’ fixed-income regulatory programs on an ongoing basis, it may not learn about emerging or recurring issues or risks in a timely manner and take steps to address them. To improve SEC’s ability to monitor the operations and effectiveness of SRO regulatory programs related to municipal securities trading between inspections and to help identify areas of high risk, we recommend that the Chairman of the Securities and Exchange Commission direct OCIE to take steps to gather and analyze information on the SROs’ fixed-income regulatory programs on an ongoing basis and use it to inform its risk-based inspection approach. We provided a draft of this report to the SEC Chairman for her review and comment. SEC provided written comments that are reprinted in appendix VII. SEC also provided technical comments that were incorporated as appropriate. In addition, we provided a draft of this report to the Federal Reserve, FDIC, and OCC for their review and comment. These agencies did not provide written comments, but we incorporated their technical comments where appropriate. 
We also provided a copy of the draft report to MSRB and FINRA for their review and incorporated technical comments from them as appropriate. In its written comments, SEC agreed with our findings. With respect to our recommendation that SEC improve its ability to monitor the operations and effectiveness of SRO regulatory programs between inspections by gathering and analyzing information from the SROs on an ongoing basis, SEC agreed that enhanced oversight of the SROs’ fixed-income regulatory programs is needed and that it has already begun that process through the transition to a risk-focused approach. SEC noted, however, that more frequent review and analysis would require additional staff resources and reiterated that OCIE has been unable to fill several vacant positions in its Market Oversight group due to limitations on SEC hiring under a Continuing Resolution. SEC further noted that even if the vacant positions were filled, OCIE’s Market Oversight group would continue to be understaffed relative to the number and complexity of entities that it examines and that it would need additional resources to conduct more frequent inspections of FINRA and MSRB’s fixed-income programs or to do interim monitoring of FINRA’s fixed-income surveillance program. As we observed, SEC’s efforts to implement a risk-based inspection program have the potential to better target its scarce resources to high-risk areas. Gathering and analyzing data from the SROs on an ongoing basis could help SEC better meet its goal of identifying high-risk areas and leveraging its staff resources for inspections. We are sending this report to the Senate Committee on Banking, Housing, and Urban Affairs and the House Committee on Financial Services. We are also sending copies of the report to the Special Committee on Aging, U.S. Senate; the Committee on Agriculture, Nutrition, and Forestry, U.S. Senate; the Committee on Agriculture, U.S. House of Representatives; and the Chairman of the SEC. 
The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Prior to the financial crisis that began in the summer of 2007, municipal governments made increasing use of interest rate swaps, a derivative product. In an interest rate swap, a municipal issuer enters into a contract with a counterparty (typically an investment bank, commercial bank, or insurance company), and agrees to exchange periodic interest payments. Municipal issuers may use interest rate swaps to try to lower their borrowing costs. For example, by issuing variable-rate securities and entering into a variable-to-fixed interest rate swap, an issuer may be able to obtain a lower fixed-rate interest payment than it otherwise could obtain if it had issued fixed-rate securities directly. In this case, after issuing the variable-rate securities, the issuer enters into a swap agreement with a counterparty that agrees to pay the issuer a variable rate based on an index that is intended to approximate the variable-rate interest payments that the issuer must make to its investors. In exchange, the issuer agrees to pay the counterparty a fixed interest rate. As a result, the issuer achieves a synthetic fixed rate by converting a variable-rate obligation to a fixed-rate obligation. Payment exchanges between the issuer and the counterparty reflect differences between the fixed rate and the variable rate during a specific period of time. The swap does not alter the issuer’s obligations, including debt servicing, to existing investors. 
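The payment exchange in a variable-to-fixed swap can be illustrated with a simple per-period calculation. This is a hedged sketch with a hypothetical notional amount and rates; actual swap payments depend on day-count conventions and contract terms not covered here:

```python
# Hypothetical variable-to-fixed swap: the issuer pays investors a variable
# rate, pays the counterparty a fixed rate, and receives an index-based
# variable rate from the counterparty. All figures are illustrative.

def net_period_cost(notional, fixed_paid, index_received, variable_owed, year_fraction):
    """Issuer's net interest cost for one payment period."""
    to_counterparty = notional * fixed_paid * year_fraction
    from_counterparty = notional * index_received * year_fraction
    to_investors = notional * variable_owed * year_fraction
    return to_investors + to_counterparty - from_counterparty

# When the index received matches the rate owed to investors, the variable
# legs cancel and the issuer's cost is the fixed leg -- a synthetic fixed rate.
print(round(net_period_cost(10_000_000, 0.035, 0.020, 0.020, 0.5), 2))  # 175000.0
# If the index received falls short of the rate owed to investors, the
# issuer must cover the shortfall and its net cost rises.
print(round(net_period_cost(10_000_000, 0.035, 0.018, 0.020, 0.5), 2))  # 185000.0
```

The second call previews the basis risk discussed next: the issuer’s synthetic fixed rate holds only to the extent the index tracks what the issuer owes its investors.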
Municipal issuers incur a number of risks when they enter into interest rate swaps, including basis risk, termination risk, and counterparty risk.
- Basis risk is the risk that the variable rate paid by the issuer to its investors is more than the variable interest rate received under the swap. If that occurs, the payments the issuer receives from the counterparty are less than the payments the issuer must make to the investors. The issuer must cover that difference in addition to paying the fixed rate on the swap to the counterparty.
- Termination risk is the risk that the swap may terminate or be terminated before its expiration. Swap agreements allow for termination of the swap by either party in the case of certain events, such as payment defaults on the swap or credit rating downgrades. For example, if the issuer triggers an early termination, it could owe a termination payment reflecting the value of the swap under the market conditions at that time. If market rates have changed to the issuer’s disadvantage (e.g., the issuer is a fixed-rate payer and interest rates have declined), the issuer will be “out of the money” on the swap—that is, the fixed rate that the issuer is paying to the counterparty is higher than the current market rate—and will owe the counterparty a termination payment. A termination of a swap can result in a substantial unexpected payment obligation.
- Counterparty risk is the risk that the counterparty will default on its payment obligations to the issuer.

The recent financial crisis heightened the exposure of a number of municipal issuers with interest rate swaps to these risks. For example, a number of municipal issuers had insured their underlying variable-rate securities with bond insurance. However, the downgrades in these insurers’ credit ratings during the financial crisis resulted in some issuers having to post collateral on the swap agreements they had entered into or face termination of the swaps. 
Because interest rates had declined significantly at that time, these issuers were out of the money—making it expensive to terminate the contract. However, a number of issuers refunded their variable-rate securities and terminated the swaps to free themselves from these agreements. In some cases where municipal issuers have suffered losses because of swap agreements, issuers allege that the counterparties that sold them the swaps (swap dealers) misrepresented the risks of the swaps that they sold to the issuers. In other cases, they have called into question the fees that the swap dealers earned. Questions also arose as to whether some of the municipal issuers that entered into swaps during this period understood these complicated products or their risks. Title VII of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) created a comprehensive framework to provide oversight of the previously unregulated over-the-counter derivatives market. The Dodd-Frank Act provided the Commodity Futures Trading Commission (CFTC) the authority to regulate swaps, including interest rate swaps. Section 731 specifically amended the Commodity Exchange Act (CEA) to provide CFTC with both mandatory and discretionary rulemaking authority to impose business conduct requirements on swap dealers and major swap participants in their dealings with counterparties generally, including municipal issuers, which are among the entities termed “special entities.” In January 2012, CFTC issued rules to implement this authority. Among other things, the rules establish a “know your counterparty” requirement. 
Under this requirement, a swap dealer (but not a major swap participant) that acts as an adviser to a special entity must make a reasonable determination that any swap it recommends is in the special entity’s best interest and must make reasonable efforts to obtain the information necessary to make that determination. The swap dealer will comply with its duty to act in the special entity’s best interest where it complies with the “reasonable efforts” requirement, acts in good faith and makes full and fair disclosure of all material facts and conflicts of interest with respect to the recommended swap, and employs reasonable care to ensure that the swap is designed to further the special entity’s objectives. The rules also require swap dealers and major swap participants to disclose to their counterparties material information about swaps, including material risks, characteristics, incentives, and conflicts of interest. Additionally, CFTC’s rules establish several duties for swap dealers and major swap participants, including the duty to verify a counterparty’s eligibility to transact in the swap markets, provide the daily midmarket value of uncleared swaps to the counterparty, and ensure all communications to the counterparty are fair and balanced. A swap dealer who recommends a swap must conduct reasonable diligence to understand the risks and rewards of the recommendation and have a reasonable basis to believe that the recommendation is suitable for the counterparty. The rules also establish a duty for any swap dealer that acts as an adviser to a special entity—which includes recommending a swap or trading strategy involving a swap—to act in the entity’s best interests. 
The rules establish a duty for swap dealers and major swap participants to have a reasonable basis to believe that any special entity counterparty has a representative that meets the following criteria:
- is sufficiently knowledgeable to evaluate the transaction and risks;
- is not subject to statutory disqualification;
- is independent of the swap dealer or major swap participant;
- undertakes a duty to act in the best interests of the special entity;
- makes appropriate and timely disclosures to the special entity;
- evaluates, consistent with any guidelines provided by the special entity, fair pricing and appropriateness of the swap;
- in the case of a special entity that is an employee benefit plan subject to the Employee Retirement Income Security Act of 1974 (ERISA), is a fiduciary as defined in Section 3 of ERISA; and
- in the case of a special entity that is a municipal entity, is subject to restrictions on certain political contributions to certain public officials of the municipal entity.

For special entities other than employee benefit plans subject to ERISA, the final rule provides a safe harbor under which the swap dealer will be deemed to have a reasonable basis to believe that the special entity has a qualified representative if certain conditions are met, including the representative stating in writing that it has policies and procedures designed to ensure that it satisfies the applicable criteria. To analyze how institutional and individual investors trade municipal securities in the secondary market and the factors affecting the prices institutional and individual investors receive, we obtained data on all municipal securities trades that broker-dealers reported to the Municipal Securities Rulemaking Board’s (MSRB) Real-Time Transaction Reporting System (RTRS) from January 1, 2005, to December 31, 2010. 
For each trade, the data included variables describing characteristics of the security, including the dated date (the date from which interest starts to accrue), maturity date, interest rate, principal amount at issuance, and reoffering price (the price at which underwriters sell newly issued securities to the public in the primary market), as well as variables describing the characteristics of the trade (trade date/time, settlement date, trade price, yield, and trade amount) and trade type (dealer sales to customers, interdealer trades, or dealer purchases from customers). We analyzed trade data involving newly issued fixed-rate securities to understand how trade prices differ for institutional and individual investors, using trade size (amount) as a proxy for whether the trade involved institutional or individual investors. We focused on trades that occurred within the period from 30 days prior to and 120 days after the dated date on municipal securities. We chose to examine this time frame because we observed that bonds in our sample trade most frequently around the time of issuance and that trading activity declines as the number of days after issuance increases, with trading activity typically leveling off by about 120 days after issuance. Focusing on a period with more trading activity improved the precision with which we measured the relationships described below. We chose to examine only trades of newly issued bonds to ensure that all the trades we analyzed involved bonds that had been available to investors for a similar amount of time and to limit the likelihood that unobserved, time-varying characteristics of bonds influence our analysis. 
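The issuance-window filter described above can be sketched in Python. This is a minimal illustration; the function and variable names are ours, not part of the RTRS schema.

```python
from datetime import date

def in_analysis_window(trade_date: date, dated_date: date) -> bool:
    """Keep a trade only if it occurred between 30 days before and
    120 days after the security's dated date."""
    offset_days = (trade_date - dated_date).days
    return -30 <= offset_days <= 120

# Example: a bond with a dated date of March 1, 2010
dated = date(2010, 3, 1)
print(in_analysis_window(date(2010, 2, 15), dated))  # True: 14 days prior
print(in_analysis_window(date(2010, 8, 1), dated))   # False: 153 days after
```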
First, we analyzed the relationship between the relative trade price (the trade price as a percentage of the reoffering price) and trade amount by trade type in order to determine if prices for smaller trades—those more likely to involve individual investors—are different from prices for larger trades—those that are more likely to involve institutional investors. Second, we analyzed the relationships between spreads (the difference between the price on dealer sales to investors and the price on dealer purchases from investors as a percentage of the price on dealer purchases) within $10,000 trade amount increments and trade amount to determine if spreads on smaller trades are different from spreads on larger trades. For these regressions, we constructed datasets with one observation for each security for each $10,000 trade amount increment. For each security, for each $10,000 trade amount increment, we calculated the inside spread, mean spread, and outside spread. The inside spread is the difference between the lowest trade price on a dealer sale and the highest trade price on a dealer purchase as a percentage of the highest trade price on a dealer purchase. The mean spread is the difference between the mean trade price on a dealer sale and the mean trade price on a dealer purchase as a percentage of the mean trade price on a dealer purchase. The outside spread is the difference between the highest trade price on a dealer sale and the lowest trade price on a dealer purchase as a percentage of the lowest trade price on a dealer purchase. We only used observations on security-trade amount increment combinations for which there existed at least one dealer sale and at least one dealer purchase. 
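The three spread measures can be computed as sketched below for one security and one $10,000 trade-amount increment. Consistent with the text, a combination with no dealer sale or no dealer purchase yields no observation. The function name is ours.

```python
def spread_measures(sale_prices, purchase_prices):
    """Inside, mean, and outside spreads (in percent) from the prices of
    dealer sales to customers and dealer purchases from customers within
    one security / $10,000 trade-amount increment."""
    if not sale_prices or not purchase_prices:
        return None  # need at least one trade on each side
    mean = lambda xs: sum(xs) / len(xs)
    return {
        # lowest sale vs. highest purchase: a lower bound on the spread
        "inside": (min(sale_prices) - max(purchase_prices)) / max(purchase_prices) * 100,
        # mean sale vs. mean purchase: the average spread
        "mean": (mean(sale_prices) - mean(purchase_prices)) / mean(purchase_prices) * 100,
        # highest sale vs. lowest purchase: an upper bound on the spread
        "outside": (max(sale_prices) - min(purchase_prices)) / min(purchase_prices) * 100,
    }

spreads = spread_measures([102.0, 103.0], [100.0, 101.0])
# outside spread: (103 - 100) / 100 * 100 = 3.0 percent
```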
Third, we analyzed the relationship between price dispersion (the difference between the maximum and minimum trade price as a percentage of average trade price) and trade amount by trade type to determine if prices on smaller trades are more or less dispersed than prices on larger trades. For these regressions, we constructed datasets with one observation for each security, for each trade type, and for each trade amount in $10,000 increments. We formed groups of trades for each security, trade type, and trade amount in $10,000 increments. We then calculated price dispersion for that group of trades as the difference between the maximum trade price and minimum trade price as a percentage of the average trade price. For all three analyses, our regressions included indicator variables for each security in the sample to control for unobserved, time-invariant features of the securities. We estimated separate regressions for bonds issued in each year from 2005 through 2010. We present the results of our regression analyses in appendix III. For illustrative purposes, we also calculated descriptive statistics using the trade data. First, we calculated the average relative trade price on newly issued fixed-rate securities by trade amount and trade type for 2010. Second, we determined the average spreads for a $20,000 trade (an individual investor-sized trade) and a $5 million trade (an institutional investor-sized trade) of a fixed-rate security in 2010. We then used these average spreads to calculate the yield to maturity of two hypothetical trades of $20,000 and $5 million of the same security. We did this to compare the effect of the size of the spread on the return received by an individual investor and an institutional investor. Third, we calculated the average price dispersion for newly issued fixed-rate securities by trade amount and trade type for 2010. We presented these descriptive statistics in tables in the report. 
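Price dispersion for one group of trades (same security, trade type, and $10,000 trade-amount increment) reduces to a short calculation, sketched here with names of our own choosing:

```python
def amount_bucket(trade_amount_dollars: float) -> int:
    """Map a trade amount to its $10,000 increment (0, 1, 2, ...)."""
    return int(trade_amount_dollars // 10_000)

def price_dispersion(prices) -> float:
    """(max price - min price) as a percentage of the average price."""
    avg = sum(prices) / len(prices)
    return (max(prices) - min(prices)) / avg * 100

# Example: three trades of the same security and type in one increment
dispersion = price_dispersion([99.0, 100.0, 101.0])  # (101 - 99) / 100 * 100 = 2.0
bucket = amount_bucket(25_000)  # increment 2
```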
In conducting our analyses, we carried out a data reliability assessment of the MSRB trade data. To do so, we reviewed information on the processes and procedures MSRB uses to help ensure that trade data entered into RTRS are accurate and complete. We also reviewed the data for missing values and outliers and, where we observed instances of such, solicited explanations from MSRB staff. On the basis of this information, we determined that these data were reliable for our purposes. We also obtained statistics on the relative size of the municipal securities market. We obtained data from the Federal Reserve’s Flow of Funds Accounts of the United States on the estimated dollar value of municipal securities outstanding and from Bloomberg L.P. (Bloomberg) on the number of municipal issuers and outstanding municipal securities that it tracks. We also collected data on the total number of public companies listed on the major U.S. exchanges from the annual reports of NYSE Euronext and NASDAQ OMX. We did not conduct an assessment of the reliability of these data sources. However, these data are widely used by regulators, market professionals, and academics and are considered credible for the purposes for which we used them. In addition, we used these data solely for descriptive purposes and not for the purpose of making recommendations or drawing conclusions about causality. We reviewed studies that analyzed pricing in the municipal securities market. We limited our survey to those studies using data from 1995 or later. We did this because prior to 1995, there was no systematic and comprehensive dissemination of post-trade information for municipal securities. We identified five relevant studies by searching the EconLit, JSTOR, National Bureau of Economic Research (NBER) Working Paper Series, and Social Science Research Network (SSRN) databases. We identified two additional studies through our interviews with market participants. 
Although we did not identify methodological concerns with these studies, we included them for research purposes only; their inclusion does not imply that we deem them definitive. In addition, we attended or viewed the Securities and Exchange Commission’s (SEC) field hearings on the state of the municipal securities market, reviewed industry literature, and interviewed members of trade organizations representing institutional investors, broker-dealers (including broker’s brokers), and individual investors; academics; SEC Office of Municipal Securities officials; MSRB officials; and independent municipal market research and advisory firms. We also reviewed information from these entities on the availability of pre- and post-trade pricing information in the marketplace, and we spoke to market participants interested in forming an exchange for municipal securities. To understand how electronic systems and trading platforms are used in the trading of municipal securities, we received a demonstration from Bloomberg on the services it offers to municipal broker-dealers and other subscribers to facilitate municipal securities trading and analysis. We also reviewed existing alternative trading systems (ATS) operating in this market by analyzing their annual Form ATSs submitted to SEC and other descriptive information and received a demonstration from one ATS of its electronic platform for trading municipal securities. To determine how federal regulators enforce MSRB rules to ensure fair and reasonable prices for investors and the timely and accurate reporting of municipal trades, we reviewed relevant MSRB rules, guidance, and proposed rules. We focused on Rules G-30, G-14, G-12, and G-15, which address pricing, trade reporting, and trade clearance and settlement. 
We also reviewed documentation describing RegWeb, the web portal MSRB makes available to federal regulators to analyze and query RTRS data for regulatory purposes; the Financial Industry Regulatory Authority’s (FINRA) policies and procedures for electronically surveilling RTRS data for potential violations of MSRB pricing and trade reporting rules; and FINRA and federal banking regulators’ (Office of the Comptroller of the Currency, or OCC; Federal Deposit Insurance Corporation, or FDIC; and Board of Governors of the Federal Reserve System, or the Federal Reserve) examination procedures for assessing broker-dealer compliance with these rules. We also identified enforcement trends related to Rules G-14 and G-30. With respect to FINRA, we reviewed results of the periodic surveillance of trade data it conducted from 2006 to 2010 to monitor broker-dealers and bank dealers for potential violations of MSRB Rules G-14 and G-30. These results included the number of alerts FINRA’s surveillance programs generated on potential G-14 and G-30 violations, as well as the resolution (for example, no further review, cautionary action, etc.) of each alert. We also reviewed data from FINRA’s System for Tracking Activities for Regulatory Policy and Oversight (STAR), which tracks the life cycle of FINRA’s regulatory matters, on the number of municipal-related broker-dealer examinations FINRA conducted from 2006 to 2010, the number of those examinations that identified violations of MSRB Rules G-14 and G-30, and the resolution of each examination. We conducted a reliability assessment of the FINRA data and determined they were reliable for our purpose. Specifically, we reviewed information on the STAR system and FINRA’s policies and procedures for ensuring the data entered into the STAR system were accurate and complete. We reviewed a purposeful sample of 45 examinations FINRA conducted from 2006 to 2010 in which it identified violations of MSRB Rules G-14 and G-30. 
We reviewed these examinations to inform our understanding of how FINRA examiners applied their policies and procedures for assessing compliance with Rules G-14 and G-30. First, we selected all 11 examinations that had both G-14 and G-30 violations. Next, we selected an additional 6 examinations with G-30 violations that were forwarded to other agencies (such as SEC) for further review or initiated for a specific cause, as opposed to routine examinations. Third, we selected 8 examinations with G-30 violations systematically by selecting every 4th examination after ordering the remaining examinations with G-30 violations by completion date. Finally, we similarly selected 20 additional examinations with G-14 violations by selecting every 20th examination from an ordered listing of the remaining examinations with G-14 violations. We did not extrapolate the information in the sample examinations to the universe of municipal broker-dealer examinations. Rather, we drew examples from some of the examinations to illustrate concepts in the report. With respect to federal banking regulators’ enforcement of Rules G-14 and G-30, we reviewed data on the number of bank dealer examinations each regulator conducted from 2006 to 2010 and the number of those examinations that identified violations of Rules G-14 and G-30, among other MSRB rules. We conducted a reliability assessment of the federal banking regulator data and determined they were reliable for our purpose. Specifically, we reviewed information from federal banking regulators on the systems from which they generated the data provided to us and their policies and procedures for ensuring the data were accurate and complete. From the examination data, we selected and reviewed examinations or their relevant excerpts to observe examples of cases in which the federal banking regulators identified violations of Rule G-14 or other MSRB rules. As with the FINRA examinations, we reviewed these 
examinations to inform our understanding of how federal banking examiners applied their policies and procedures for assessing compliance with Rules G-14 and G-30. We did not extrapolate the information to the universe of bank dealer examinations. Because OCC, FDIC, and the Federal Reserve identified few examinations with Rule G-14 violations and no examinations with G-30 violations, we expanded our sample to examinations with other violations (for example, MSRB Rule G-27 on supervision). This allowed us to see more examination reports and observe how these regulators conducted their examinations in general. We reviewed a combined total of 15 examination reports from these regulators. To identify trends in late trade reporting, we reviewed MSRB data on late trades and information on the processes and procedures MSRB uses to identify late trades in RTRS. We determined these data were reliable for our purposes. To identify trends in settlement failures in municipal securities transactions, we reviewed data from the National Securities Clearing Corporation (NSCC). This self-regulatory organization (SRO) provides clearance and settlement services for a variety of securities, including equity, corporate, and municipal securities. Because NSCC typically does not track settlement failures by security type, we requested that NSCC perform a specialized query to provide us with this information. NSCC reviewed a 5-day trading period, from June 6 to June 10, 2011, and provided us with the dollar value of municipal securities settlement failures, as well as the total dollar value of all settlement failures, for that period. We did not assess the reliability of these data because we used the data solely for descriptive purposes and not for the purpose of making recommendations or drawing conclusions about causality. However, we corroborated the data by asking regulators and market participants about their experience with municipal trade failures, and what they told us was consistent with the trends in the data. 
To understand how SEC oversees the municipal market, we reviewed the SEC Office of Compliance Inspections and Examinations’ (OCIE) guidance for conducting oversight inspections of FINRA-registered broker-dealers, focusing on policies and procedures for assessing compliance with MSRB rules related to pricing and trade reporting, and we reviewed OCIE’s guidance for conducting inspections of SROs. We reviewed data from OCIE on broker-dealer examinations it conducted from 2002 to 2010 that assessed compliance with municipal securities rules and regulations, including information on the MSRB rule violations examiners identified. We conducted a reliability assessment of these data and determined that there were limitations to how we could use them. We reviewed information on OCIE’s system for tracking examination data (the Super Tracking and Reporting System, or STARS), reviewed OCIE’s policies and procedures for ensuring the completeness and accuracy of the data, and interviewed OCIE officials. Although we determined that STARS data are reliable, we learned that STARS does not contain a unique field that allows users to retrieve all examinations with a municipal component. Rather, OCIE officials ran a report by searching for key words that, based on their experience with STARS data, were likely to be included in an examination with a municipal component. This produced a list of approximately 1,100 examinations conducted from 2002 to 2010. We determined this was a reasonable way to proceed to identify a significant portion of the targeted universe of examinations from which we would draw selected exams for our review. We reviewed a purposeful sample of 35 examinations that OCIE conducted from 2002 to 2010 in which it identified violations of MSRB Rules G-14 and G-30. First, we selected all 13 examinations that had G-30 violations. Four of these examinations also had G-14 violations. 
We then selected an additional 22 examinations with G-14 violations (from a total of 80 examinations with G-14 violations during the time period). For the latter group, we attempted to select examinations representing the entire time period and a variety of recommended actions (from minor deficiency letters to enforcement referrals). We reviewed the examinations to understand how OCIE examiners applied OCIE’s examination policies and procedures to assess broker-dealers for compliance with MSRB rules. However, we did not cite any OCIE examination statistics in the report, given that the list of 1,100 examinations may not have included all municipal examinations OCIE conducted from 2002 to 2010, as well as the fact that OCIE uses a risk-based method and does not necessarily review for broker-dealer compliance with Rules G-30 and G-14 in every examination. We also reviewed OCIE’s 2002 and 2005 inspections of MSRB and FINRA’s fixed-income program, focusing on OCIE’s review of FINRA’s surveillance, examination, and enforcement programs for overseeing municipal securities trading. In addition, we reviewed MSRB’s and FINRA’s responses to OCIE’s inspection reports. Finally, we reviewed OCIE’s inspections of FINRA’s district offices from 2000 to 2010 and its 2009 inspections of the Depository Trust Company and NSCC, SROs that clear and settle municipal securities transactions. We also reviewed meeting minutes, e-mails, training presentations, MSRB’s memorandum of understanding with FINRA, and other relevant documentation from MSRB to understand the coordination among SEC, MSRB, FINRA, and federal banking regulators in conducting oversight of the municipal securities market. Finally, we interviewed officials from OCIE, Office of Municipal Securities, SEC’s Division of Enforcement, MSRB, FINRA, and federal banking regulators to better understand their oversight of the municipal securities market and efforts to coordinate their oversight activities. 
We conducted this performance audit from November 2010 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. To understand how trade prices for individual investors differ from those for institutional investors, we analyzed trade data on newly issued fixed-rate municipal securities from the Municipal Securities Rulemaking Board’s Real-Time Transaction Reporting System from January 1, 2005, through December 31, 2010, using trade size as a proxy for whether the trade involved institutional or individual investors. We focused on trades that occurred within the period from 30 days prior to and 120 days after the dated date on municipal securities. We chose to examine this time frame because we observed that (1) securities in our sample trade most frequently around the time of issuance, (2) trading activity declines as the number of days after issuance increases, and (3) trading activity has typically leveled off by about 120 days after issuance. Focusing on a period with more trading activity improves the precision with which we measure the relationships described below. We chose to examine only trades of newly issued bonds to ensure that all the trades we analyzed involved bonds that had been available to investors for a similar amount of time and to limit the likelihood that unobserved, time-varying characteristics of bonds influence our analysis. First, we analyzed how relative prices—defined as trade prices as a percentage of the reoffering prices (the prices at which the securities were originally sold to the public by the underwriter)—changed as trade size increased for different types of trades (dealer sales to investors and dealer purchases from investors). To do so, we estimated regressions on security trades. 
The dependent variable in these regressions is the relative price of a trade, and the independent variables are trade amount interacted with trade type and indicator variables for each security in the sample. The security indicators control for time-invariant features of a security that may affect the relative price at which it trades. We estimated separate regressions for securities issued in each year from 2005 through 2010. We present our regression results in table 6. Our analysis shows that, relative to institutional investors, individual investors generally pay higher prices when buying—and receive lower prices when selling—municipal securities. Relative prices at which broker-dealers sold securities to investors declined on average with trade amount for all years in the analysis. For all years, this negative relationship is statistically significant at the 1 percent level. Relative prices at which broker-dealers purchased securities from investors increased with trade amount for bonds issued in every year except 2009. For every year except 2009, this positive relationship is statistically significant at the 1 percent level. For 2009, this relationship is negative but not statistically significantly different from zero. Second, we analyzed how broker-dealers’ spreads—defined as the difference between the price on dealer sales to investors and the price on dealer purchases from investors as a percentage of the price on dealer purchases—changed as trade size increased, using three different measures of spread. For this analysis, we estimated regressions on securities. The dependent variable in these regressions is the spread on a security over a $10,000 trade amount increment, and the independent variables are trade amount and indicator variables for each security in the sample. The security indicators control for time-invariant features of a security that may affect its spread. 
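Including an indicator variable for every security is numerically equivalent to a within (demeaning) transformation: subtract each security’s mean price and mean trade amount before fitting. A minimal sketch of that fixed-effects slope on synthetic data (not the report’s estimates):

```python
def within_ols_slope(groups):
    """OLS slope of y on x with group fixed effects, computed by demeaning
    x and y within each group (equivalent to per-group indicator variables).
    `groups` maps a group id (e.g., a security) to a list of (x, y) pairs."""
    num = den = 0.0
    for pairs in groups.values():
        xs = [x for x, _ in pairs]
        ys = [y for _, y in pairs]
        xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
        for x, y in pairs:
            num += (x - xbar) * (y - ybar)
            den += (x - xbar) ** 2
    return num / den

# Two synthetic securities trading at different price levels but sharing
# the same negative relationship between trade amount and relative price
data = {
    "bond_A": [(1, 101.0), (2, 100.5), (3, 100.0)],
    "bond_B": [(1, 103.0), (2, 102.5), (3, 102.0)],
}
slope = within_ols_slope(data)  # -0.5: fixed effects absorb the level difference
```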
We estimated separate regressions for securities issued in each year from 2005 through 2010. We present our regression results in table 7. Our analysis showed that, on average, broker-dealers receive larger spreads when trading smaller blocks of municipal securities. For all years and for all three measures of spread, this relationship is statistically significant at the 1 percent level. The inside spread, which estimated a lower bound for broker-dealer spreads, declined as trade size increased for all years in the analysis. The mean spread, which estimated average broker-dealer spreads, declined as trade size increased for all years in the analysis. The outside spread, which estimated an upper bound for broker-dealer spreads, declined as trade size increased for all years in the analysis. (Table 7 notes: brackets contain the absolute values of t-statistics calculated using standard errors that are adjusted for heteroskedasticity and for within-bond correlation.) For all years and for all three measures of spread, the relationship between spread and trade amount is negative, and the negative relationship is statistically significant at the 1 percent level. Third, we analyzed how price dispersion, defined as the difference between the maximum and minimum trade price as a percentage of average trade price, changed as trade size increased. For this analysis, we again estimated regressions on securities. The dependent variable in these regressions is the price dispersion over a $10,000 trade amount increment, and the independent variables are trade amount interacted with trade type and indicator variables for each security in the sample. The security indicators control for time-invariant features of a security that may affect its price dispersion. 
We estimated separate regressions for securities issued in each year from 2005 through 2010. See table 8 for the regression results. Our analysis showed that prices for larger trades tended to be more concentrated, while prices for smaller trades tended to be more dispersed. For all years, this relationship is statistically significant at the 1 percent level. For broker-dealer sales to investors, the measure of dispersion declined as trade amount increased. For broker-dealer purchases from investors, the measure of dispersion also declined as trade amount increased. The Municipal Securities Rulemaking Board intends for its Market Information Transparency Programs (Transparency Programs) to protect investors by fostering availability and transparency of critical information about municipal securities and market activity. From fiscal years 2004 through 2010, MSRB spent significant resources developing and operating these programs. MSRB’s total revenue has fluctuated during this period. To generate additional revenues to continue to enhance and maintain these transparency programs, in fiscal year 2010 MSRB increased transaction fees for broker-dealers and imposed a new technology fee. To establish a more stable long-term revenue base as well as ensure a more equitable allocation of assessments among the municipal broker-dealers that fund MSRB’s operations, MSRB authorized changes to its revenue sources in fiscal year 2011 that it expects will generate significant new revenues. First, MSRB increased the transaction fee charged to broker-dealers from $0.005 per $1,000 par value to $0.01 per $1,000 par value on most municipal securities sales transactions reported to MSRB. The new fee became effective in January 2011. MSRB expects the increased transaction fee to generate an estimated $7 million annually. Second, effective January 2011, dealers in municipal securities are required to pay a technology fee of $1.00 per transaction for all sales transactions. 
MSRB expects the technology fee to generate an estimated $8.5 million annually. MSRB stated that the technology fee would be transitional in nature and that it would review the fee periodically to determine whether it should continue to be assessed. MSRB said that these new and increased fees are necessary because its expenses have increased significantly as a consequence of its capital investments in technology and the regulatory responsibilities it has assumed under the Dodd-Frank Act. MSRB said it would use the new technology fee to establish a technology renewal fund, which would be segregated for accounting purposes. The technology renewal fund is intended to fund replacement of aging and outdated technology and to fund new technology initiatives. For example, MSRB noted that certain of the existing public information systems it operates, including RTRS, now rely on dated technology and can be expected to need comprehensive reengineering in the coming years. In addition, MSRB said that it will need to develop information systems to facilitate its increased regulatory responsibilities under the Dodd-Frank Act, which, among other things, broadened its mission to include the protection of municipal issuers and extended its regulatory authority to include municipal advisers. The Dodd-Frank Act provided for additional revenue sources for MSRB, although these revenues are unlikely to represent a significant source of funding. The Dodd-Frank Act expanded the regulatory jurisdiction of MSRB to include municipal advisers. MSRB amended its rules in November 2010 to begin collecting initial fees ($100) and annual registration fees ($500) for municipal advisers. However, MSRB officials said that they did not anticipate these fees would provide a revenue stream comparable to what MSRB receives from all fees on broker-dealer activities, including the transaction, underwriting, and technology fees previously discussed. 
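As a rough check on the fee amounts described above, the January 2011 per-transaction assessments work out as follows. The rates come from the report; the function name and rounding treatment are our assumptions.

```python
def msrb_sale_fees(par_value_dollars: float) -> float:
    """Fees on one municipal securities sale transaction, effective
    January 2011: $0.01 per $1,000 par value plus a $1.00 technology fee."""
    transaction_fee = par_value_dollars / 1_000 * 0.01
    technology_fee = 1.00
    return transaction_fee + technology_fee

# A $100,000 par-value sale: $1.00 transaction fee + $1.00 technology fee
fees = msrb_sale_fees(100_000)
```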
The Dodd-Frank Act also mandated that SEC and the Financial Industry Regulatory Authority, Inc., remit to MSRB a portion of the fines collected for violation of MSRB rules. Effective October 2010, SEC must remit half of the fines it collects to MSRB, and FINRA must remit one-third, although that amount may be modified by agreement among SEC, MSRB, and FINRA. MSRB stated that the amounts actually received will be dependent on the level of enforcement by SEC and MSRB and is expected to vary considerably from year to year. Since the early 1970s, several Securities and Exchange Commission cases and opinions have addressed instances in which broker-dealers charged excessive markups or markdowns in municipal securities transactions with customers. Table 11 summarizes the details of a few of these cases. The Financial Industry Regulatory Authority, Inc., the federal banking regulators, and the Securities and Exchange Commission use a variety of methods to help ensure broker-dealers’ compliance with rules issued by the Municipal Securities Rulemaking Board. We reviewed their written policies and procedures to understand how they assess broker-dealers’ compliance with MSRB Rules G-30 (fair and reasonable pricing) and G-14 (timely, accurate, and complete trade reporting). FINRA has established electronic surveillances of data reported to MSRB’s Real-Time Transaction Reporting System, by which it analyzes the data to generate “alerts” for potential violations of certain MSRB rules. FINRA, in certain circumstances, refers potential violations by bank dealers to the appropriate federal banking regulators for further investigation. During the period of our review, MSRB Rule G-16 required FINRA and the federal banking regulators to conduct routine examinations of the firms under their jurisdiction once every 2 years for compliance with all MSRB rules and other applicable laws. 
The SEC’s Office of Compliance Inspections and Examinations also conducts oversight activities through examinations of selected broker-dealers. FINRA utilizes parameters to help target its surveillance for fair pricing and markup violations. For example, a surveillance program established to identify transactions that were not executed at the prevailing market price would flag any transactions priced outside of a certain range of comparable prices. Similarly, a surveillance program established to identify excessive markups or markdowns would flag any transactions with markups or markdowns above a specified percentage of the contemporaneous costs (for markups) or proceeds (for markdowns). FINRA staff stated that these parameters are merely guidelines to assist them in identifying transactions for further review. FINRA analysts follow a series of steps to determine whether alerts generated by the surveillances represent actual violations of Rule G-30. Specifically: The analyst uses various data sources, such as Bloomberg, MSRB’s Electronic Municipal Market Access website, or audit trail data, to verify the information in the alert and confirm or establish the prevailing market price for the municipal security at the time of the trade in question. If necessary, the analyst asks the firm for documentation and an explanation of how it determined that its price and markup or markdown were fair and reasonable. After reviewing the firm’s documentation, the analyst prepares a memorandum recommending a particular disposition for review and approval by FINRA managers. In their G-30 compliance reviews during broker-dealer examinations, FINRA examiners check for price manipulation and excessive markups and markdowns. The manipulation module of FINRA’s examination tool kit includes several questions and warning signs that help examiners identify whether broker-dealer firms intentionally tried to manipulate prices. 
FINRA’s examination tool kit also contains a module to help examiners identify excessive markups or markdowns. Examiners follow a series of steps: Examiners collect a variety of records from the firm, such as order tickets and confirmations for a given sample of transactions, as well as daily transaction reports. Using MSRB data, they identify a comparison transaction that best represents the market (i.e., the prevailing market price) for each sample security at the time of each sample customer transaction. Using the comparison transaction data and records collected from the firm, examiners calculate the markups and markdowns that the firm charged on the sample transactions. For markups or markdowns outside of specific parameters, examiners request an explanation from the firm. Again, the parameters are merely guidelines to assist them in identifying transactions for further review. Examiners consider the facts and circumstances of each individual case and, when necessary, consult with FINRA fixed-income experts to substantiate violations. FINRA’s periodic late trade reporting surveillance identifies transactions reported more than 15 minutes after they occurred, and analysts follow a review process similar to the one used for pricing and markup or markdown alerts. FINRA reviews MSRB transaction data and selects firms with higher levels of potential noncompliance during a given surveillance period. As with surveillances for pricing and markups, analysts use various data sources, such as Bloomberg, the EMMA website, or audit trail information to provide context for each case. If necessary, the analyst asks the firm for documentation, including an explanation for the late reporting and any trade memorandums in support of that explanation, a copy of the firm’s written supervisory procedures regarding municipal securities transaction reporting, and any evidence of the firm’s own review of the transactions in question. 
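The markup screens described above amount to comparing each customer price against a prevailing-market benchmark and flagging differences beyond a review threshold. The sketch below illustrates only that logic; the 3 percent threshold, the field names, and the sample trades are hypothetical, not FINRA’s actual surveillance parameters.

```python
# Illustrative only: a simplified markup/markdown screen in the spirit of the
# surveillance and examination steps described in the text. The threshold and
# data layout are assumptions for the example, not actual FINRA parameters.

REVIEW_THRESHOLD = 0.03  # hypothetical: flag markups/markdowns above 3 percent

def markup_pct(customer_price, prevailing_price, side):
    """Markup (customer buy) over contemporaneous cost, or markdown
    (customer sell) under contemporaneous proceeds, as a fraction of the
    prevailing market price."""
    if side == "buy":
        return (customer_price - prevailing_price) / prevailing_price
    return (prevailing_price - customer_price) / prevailing_price

def flag_trades(trades):
    """Return trades whose markup or markdown exceeds the threshold;
    an analyst would then review each flagged trade case by case."""
    return [t for t in trades
            if markup_pct(t["customer_price"], t["prevailing_price"], t["side"])
               > REVIEW_THRESHOLD]

sample = [
    {"id": 1, "side": "buy",  "customer_price": 104.0, "prevailing_price": 100.0},
    {"id": 2, "side": "sell", "customer_price": 95.0,  "prevailing_price": 100.0},
    {"id": 3, "side": "buy",  "customer_price": 100.5, "prevailing_price": 100.0},
]
flagged = flag_trades(sample)  # trades 1 (4% markup) and 2 (5% markdown)
```

As the text emphasizes, such parameters are only screening guidelines; whether a flagged markup actually violates Rule G-30 depends on the facts and circumstances of each trade.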
The analyst reviews the documentation and prepares a memorandum recommending a particular disposition for review and approval by FINRA managers. In their G-14 compliance reviews conducted during examinations, FINRA examiners use a sample of trades from the firm’s trading blotters to conduct a “failure to report” review that detects transactions the firm effected but failed to report to MSRB. They also check whether firms have filed and kept current a Form RTRS with MSRB. This form contains information that ensures that the firm’s trade reports can be processed correctly. Finally, examiners look for unreported and inaccurately reported trades, as well as late reported trades that would not have been detected by FINRA’s surveillance activities. In doing so, they adhere to the following procedures: Examiners review monthly RTRS statistics on trades that the firm executed or cleared during the review period. They select a time period for review and run statistical reports related to each broker-dealer firm’s trade reporting for that time period. They also obtain detailed trade information from MSRB. Examiners then select a sample of trades from the time period they chose for review. For the selected sample, they request and review order tickets and confirmations from the firm and compare the RTRS information to the information on those documents, making note of differences between the two sources. For discrepancies noted, they attempt to determine the root cause of the apparent violations and, if necessary, expand their sample to confirm the violation. The Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Board of Governors of the Federal Reserve System constitute the federal banking regulators that oversee those banks that are registered as dealers of municipal securities. 
In general, the three federal banking regulators’ examination policies and procedures require bank examiners to select a sample of the bank dealer’s transactions, review the relevant bank documentation and MSRB data for those transactions, and analyze the data to evaluate whether any prices appear to be unfair or unreasonable. Federal banking regulator officials told us that bank examiners obtain and review MSRB transaction data prior to their on-site examinations. When conducting on-site bank dealer examinations, federal banking examiners generally select a sample of the bank dealers’ transactions for a given review period. They typically request and review copies of the bank’s transaction records for the review period and compare the bank’s records with MSRB transaction data to ensure that the bank reported all of its trades to MSRB accurately and on time. In checking for G-30 compliance during broker-dealer examinations, OCIE examiners generally take some or all of the following steps: Examiners review MSRB data to select a sample of the firm’s transactions that appear to have higher markups than other reported transactions in a given review period. Using order tickets, confirmations, and information on contemporaneous costs or proceeds, they calculate the markups or markdowns the firm charged on the sample transactions. Examiners ask the firm to explain cases that fall outside of certain parameters. In checking for G-14 compliance during broker-dealer examinations, OCIE examiners generally do the following: Examiners select a population of municipal transactions for a given review period. They compare the MSRB trade information with the information on the firm’s purchase and sales blotter to determine whether all transactions were reported. They also select a sample of order tickets and confirmations for the trades and compare that information with the MSRB report to check for accuracy of reporting. 
In addition to the contact named above, Karen Tremba, Assistant Director; Pedro Almoguera; Silvia Arbelaez-Ellis; Ben Bolitzer; Emily Chalmers; William R. Chatlos; Rachel DeMarcus; Stefanie Jonkman; Courtney LaFountain; Marc Molino; Edward Nannenhorn; Robert Pollard; Lisa Reynolds; Jessica Sandler; and Ardith Spence made key contributions to this report.
Municipal securities are debt instruments that state and local governments typically issue to finance diverse projects. Individual investors, through direct purchases or investment funds, own 75 percent of the estimated $3.7 trillion in municipal securities in the U.S. market. In the secondary market, where these securities are bought and sold after issuance, trading largely occurs in over-the-counter markets that are less liquid and less transparent than the exchange-traded equity securities market. The Dodd-Frank Wall Street Reform and Consumer Protection Act required GAO to review several aspects of the municipal securities market, including the mechanisms for trading, price discovery, and price transparency. This report examines (1) municipal security trading in the secondary market and the factors that affect the prices investors receive, and (2) the Securities and Exchange Commission’s (SEC) and self-regulatory organizations’ (SRO) enforcement of rules on fair pricing and timely reporting. For this work, GAO analyzed trade data, reviewed federal regulators’ programs for enforcing trading rules, and interviewed market participants and federal regulators. In the secondary market for municipal securities, both institutional and individual investors trade through brokers, dealers, and banks (broker-dealers). However, GAO analysis of trade data showed that institutional investors generally trade at more favorable prices than individual investors. Broker-dealers said these differences generally reflected the higher average transaction costs associated with trading individual investors’ smaller blocks of securities. Market participants added that institutional investors have more resources, including networks of broker-dealers, and the expertise to independently assess prices. 
In recent years, the Municipal Securities Rulemaking Board (MSRB)—an SRO that writes rules regulating the broker-dealers that trade municipal securities—has required timely and public posting of trade prices in an effort to make post-trade price information more widely available. However, unlike the equities market, the relatively illiquid municipal market lacks centrally posted and continuous quotes, and other sources of pretrade price information are not centralized or publicly available to individual investors. In 2010, SEC began a review of the municipal securities market, in part to examine pretrade price information. MSRB has also begun a study that includes a review of the market structure to determine whether access to additional pretrade price information could improve pricing and liquidity. Both SEC and MSRB plan to complete these studies in 2012. Several regulators share responsibility for overseeing the municipal securities market. The Financial Industry Regulatory Authority (FINRA)—an SRO that regulates 98 percent of the broker-dealers that trade municipal securities—and federal banking regulators enforce broker-dealer compliance with MSRB rules under their respective jurisdictions through electronic surveillances of trade data and routine examinations. SEC evaluates the quality of FINRA’s and MSRB’s municipal regulatory programs through its SRO inspection program, which has recently evolved to a risk-based approach. SEC last inspected MSRB and FINRA’s fixed-income surveillance program, which encompasses municipal securities trading, in 2005. SEC staff said that staffing constraints have prevented them from conducting inspections of these SROs sooner, although they have recently begun a new inspection of FINRA’s fixed-income surveillance program. SEC’s limited monitoring of FINRA and MSRB between inspections may not be sufficient to support its new risk-based inspection approach. 
For example, SEC’s last inspection of FINRA’s fixed-income surveillance program predated the financial crisis—and its ensuing volatility in the municipal market—but SEC had collected limited information since its last inspection that would help it assess the quality of FINRA’s broker-dealer oversight. SEC currently receives periodic reports from FINRA that provide statistical information on its regulatory activities related to municipal securities trading. According to SEC staff, while they might be able to use the reports to identify significant deviations in FINRA’s efforts, they cannot use them solely to determine the effectiveness of FINRA’s municipal securities program. Without ongoing collection and analysis of information to assess the effectiveness of SROs’ regulatory programs, SEC may be unable to identify and act on regulatory problems in a timely manner. GAO recommends that SEC collect and analyze information on SROs’ fixed-income regulatory programs on an ongoing basis to better inform its risk-based inspection approach. SEC agreed, but noted it would need additional resources to conduct more frequent oversight of the SROs. Such ongoing monitoring, however, could help SEC better leverage its resources for inspections.
In recent years, the Congress heard and expressed concerns about the ability of federal land management agencies to provide high-quality recreational opportunities. These concerns focused on declines in visitor services, extensive needs for repairs and maintenance at the facilities and infrastructure that support recreation, and a lack of information on the condition of natural and cultural resources and the trends affecting them. In addressing these concerns, the Congress faced a dilemma: While the needs of federal recreation areas and the rate of visitation to these areas were increasing, the funding for addressing these needs and providing visitor services was growing tighter. As a result, the Congress was looking for means, other than appropriations, to provide additional resources to these areas. The recreational fee demonstration program was one such means. Authorized by the Congress in 1996 as a 3-year pilot program, the recreational fee demonstration program allows the Park Service, the Forest Service, the Bureau of Land Management (BLM), and the Fish and Wildlife Service to experiment with new or increased fees at up to 100 demonstration sites per agency. The program aims to bring additional resources to recreation lands by generating recreational fee revenues and spending most of the fee revenues at the sites where the fees are collected to increase the quality of the visitors’ experience and to enhance the protection of the sites’ resources. In addition, in carrying out the program, the agencies are to (1) be creative and innovative in designing and testing the collection of fees, (2) develop partnerships with federal agencies and with state and local agencies, (3) provide higher levels of service to the public, and (4) assess the public’s satisfaction with the program. 
The conference report on the program’s original legislation requested that the Secretary of the Interior and the Secretary of Agriculture each prepare a report that evaluates the demonstration program, including recommendations for further legislation, by March 31, 1999. The program is currently authorized through fiscal year 2001. The agencies have until the end of fiscal year 2004 to spend money generated under the program. Each of the four federal land management agencies included in the program provides a variety of recreational opportunities to the visiting public. Together, these agencies manage over 630 million acres of land—over one-quarter of the land in the United States. In 1997, they received over 1.2 billion visits. Table 1.1 provides information on the acreage, visitation, and lands managed by the four agencies. The fee demonstration program was established to test ways to address deteriorating conditions at many federal recreation areas, particularly those managed by the Park Service, which collects the most fee revenues, and the Forest Service, which hosts the most recreational visitors. Our prior work has detailed significant needs, including the following: The federal land management agencies have accumulated a multibillion-dollar backlog of maintenance, infrastructure, and development needs. The quality and the scope of visitor services at federal recreation sites have been declining. Some sites have closed facilities, while others have reduced their hours of operation or are providing fewer services. The condition of many key natural and cultural resources in the national park system is deteriorating, and the condition of many others is not known. Despite annual increases in federal appropriations for operating the national park system, the financial resources available have not been sufficient to stem the deterioration of the resources, services, and recreational opportunities managed by the agency. 
One way of addressing these needs was providing additional financial resources to these agencies through new or increased recreational fees. But while new or increased fees could have increased the federal land management agencies’ revenues, generally these additional fees did not directly benefit the agencies’ field units until the fee demonstration program was established. The Land and Water Conservation Fund Act of 1965, as amended, limited the amount of revenues that could be raised through collecting recreational fees and required that the funds be deposited in a special U.S. Treasury account. The funds in the special Treasury account could only be used for certain purposes, including resource protection and maintenance activities, and only became available through congressional appropriations. These amounts were generally treated as a part of, rather than a supplement to, the agencies’ regular appropriations, and were included under the spending limits imposed by the Budget Enforcement Act. In the context of the Budget Enforcement Act’s limits, in order for the agencies to address deteriorating conditions at recreation areas through fee revenues, the Congress had to provide authority for the agencies to retain the fees. In 1996, the Congress authorized the fee demonstration program to test recreational fees as a source of additional financial resources for the federal land management agencies. The Congress directed that at least 80 percent of the revenues collected under the program be spent at the units collecting the fees; the remaining 20 percent could be spent at the discretion of each agency. By allowing the local units to retain such a large percentage of the fees they collected, the Congress created a powerful incentive for unit managers to emphasize fee collections. In essence, the more revenues that field units could generate through fees, the more they would have to spend on improving conditions in the areas they managed. 
In addition, the program’s legislative history reflected the congressional belief that allowing the local units to retain most of the revenues they collected would be likely to improve the public’s acceptance of the fees. This belief was consistent with past studies of visitors to recreation areas that indicated that most visitors would support increases in fees if the fees remained at the local units. Under the legislation, the program’s expenditures were to be used to increase the quality of visitors’ experiences at public recreation areas and to enhance the protection of resources. Specifically, authorized expenditures were to address backlogged repair and maintenance projects; enhancements to interpretation, signage, habitats, or facilities; and resource preservation, annual operations (including fee collections), maintenance, and law enforcement relating to public use. In broad terms, these authorized expenditures cover the principal aspects of managing recreation areas on federal lands. The legislation also provided an opportunity for the agencies to be creative and innovative in developing and testing fees by giving them the flexibility to develop a wide variety of fee proposals, including some that were nontraditional as well as others that simply increased previously existing fees. During the demonstration period, the agencies were to experiment with (1) various types of fees to determine what does and does not work and (2) various methods of collecting fees to make payment easier and more convenient for the visiting public. In addition, according to the program’s legislative history, the agencies were expected to coordinate with each other, as well as with state and local recreation areas, so that visitors did not face numerous fees from several agencies in the same geographic area. Coordination among the agencies could yield better service to the public, thereby potentially improving the program’s chances of success. 
Federal land management agencies have traditionally charged several types of fees to visitors, all of which may still be charged under the fee demonstration program. Most of these fees can be categorized generally as either entrance or user fees. Entrance fees are generally charged for short-term access to federal recreation sites. Most are charged on a per-vehicle basis, but some are charged to individuals hiking or cycling into a recreation area. The entrance fee gives the visitor access to the key features of the area. For example, visitors pay $10 per car to enter Zion National Park in Utah; this fee covers everyone in the vehicle and is good for up to a week. Another example of an entrance fee is collected within the Wasatch-Cache National Forest in Utah, where visitors to the Mirror Lake area pay an entrance fee of either $3 per vehicle for a day or $6 per vehicle for a week. Annual passes allow entrance or use of a site for the next 12 months, benefiting frequent visitors to a single recreation area, such as a park or forest. For example, instead of paying a $10 entrance fee every time they drive into Shenandoah National Park in Virginia, frequent visitors can purchase an annual pass for $20, which will give them unlimited access to the park during the next year. Similarly, in the White Mountain National Forest in New Hampshire, visitors can pay $20 for an annual pass rather than pay $5 for a daily vehicle pass. The Golden Eagle Passport provides unlimited entry for a year to most national parks, Fish and Wildlife Service sites where entrance fees are charged, and several Forest Service and BLM sites. Costing $50 for the purchaser and his or her passengers in a privately owned vehicle, the passport can be economical when people are planning to visit a number of sites that charge entrance fees within a single year. 
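The pass pricing above reduces to simple breakeven arithmetic: a pass pays for itself once the visits it replaces would have cost at least the pass price. A small sketch (not part of the report) using the fees quoted in the text:

```python
import math

def breakeven_visits(pass_price, per_visit_fee):
    """Smallest number of visits at which a pass costs no more than
    paying the per-visit fee each time."""
    return math.ceil(pass_price / per_visit_fee)

# Fees quoted in the text:
breakeven_visits(20, 10)  # Shenandoah: $20 annual pass vs. $10 entrance fee -> 2 visits
breakeven_visits(20, 5)   # White Mountain: $20 annual pass vs. $5 daily pass -> 4 visits
breakeven_visits(50, 10)  # Golden Eagle: $50 passport vs. $10 entrance fees -> 5 sites
```

The Golden Eagle figure assumes sites charging the $10 entrance fee; since actual entrance fees vary by site, the real breakeven point depends on which sites a visitor plans to enter.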
While the Golden Eagle Passport covers entrance fees, it does not cover most user fees; hence, passport holders pay separately for activities such as boat launching, camping, parking, or going on an interpretive tour. User fees are charged for engaging in specific activities. They are generally charged to individuals or groups for activities such as boat launching, camping, parking, or going on an interpretive tour. For example, individuals pay $3 for a guided interpretive tour of the Frederick Douglass home at the Frederick Douglass National Historic Site in Washington, D.C. Another example of a user fee is at Paria Canyon, a BLM demonstration site in Utah, where visitors pay $5 per day for hiking or backpacking. Individual sites may charge several types of fees for entry and other activities. For example, a demonstration site may have a $10 entrance fee, good for 7 days, and a $20 annual pass. In addition, visitors to the site may pay user fees for a variety of specific activities, such as backcountry hiking, camping, interpretive tours, or disposing of waste from a recreational vehicle. Our review included fee demonstration sites in the Park Service, the Forest Service, BLM, and the Fish and Wildlife Service. At each of these agencies, we contacted staff from headquarters and at least two regional offices. In addition, we visited 15 judgmentally selected sites operated by the four agencies. More of the selected sites were operated by the Park Service than by any other agency because the Park Service (1) had the most sites in the program and (2) generates considerably more fee revenues than any of the other agencies. The 15 selected sites were both large and small and were located throughout the country in eight different states and the District of Columbia. Table 1.2 lists the sites, by agency. We collected information on revenues, expenditures, and visitation from the headquarters offices of the four agencies and the 15 sites we visited. 
For each agency’s revenues and expenditures, we collected actual data for fiscal year 1997 and the agency’s estimates for fiscal year 1998. At each of the 15 sites, we collected more detailed information on revenues, such as the types of fees and the methods used to collect fees. We also compared actual with planned expenditures and classified the expenditures, using the broad purposes authorized in the program’s legislation. To determine the extent to which the agencies had adopted innovative or coordinated approaches to the fee program, we used the information we collected to accomplish our first two objectives. Various agency officials, agency task forces, and officials from the industry and user groups we contacted provided comments and ideas on innovative or coordinated approaches available to the agencies—including identifying practices employed by the private sector. To prepare for our review of the implementation of the demonstration program to date, we reviewed prior fee legislation, the program’s authorizing legislation, and its legislative history. To determine what, if any, impact the fee demonstration program had on visitation, we attempted to compare data on visitation during the demonstration period with baseline information on visitation developed since 1993. Since visitation at the Park Service’s sites accounted for over three-fourths of total visitation among all fee demonstrations at the four agencies, we compared trends in visitation at their demonstration sites with nondemonstration sites for the 1993-97 period. To conduct this analysis, we obtained visitation data from the Park Service’s Public Use Statistics Office. For each of the agencies, we collected anecdotal information on trends in visitation from officials at agency headquarters and at the sites we visited as well as officials from each of the affected industry and user groups we contacted. 
We also contacted six experts who either had conducted surveys of visitors concerning the recreational fee demonstration program or had prior experience with recreational fees on federal lands. These individuals were Dr. Deborah J. Chavez, Research Social Scientist, Pacific Southwest Research Station, U.S. Forest Service; Dr. Sam H. Ham, College of Forestry, Wildlife and Range Sciences, University of Idaho; Dr. David W. Lime, Senior Research Associate, University of Minnesota, Department of Forest Resources; Dr. Gary E. Machlis, Visiting Chief Social Scientist, Park Service; Mr. Jim Ridenour, Director, The Eppley Institute for Parks and Public Lands, Department of Recreation and Park Administration, Indiana University, and former Director, National Park Service; and Dr. Alan E. Watson, Aldo Leopold Wilderness Research Institute, U.S. Department of Agriculture and Department of the Interior, Missoula, Montana. During our review, we contacted various industry and user groups that might be affected by the fee demonstration program. We spoke with these groups to obtain their views on the agencies’ management of the program. We selected these groups because they (1) had participated in congressional hearings on the demonstration fee authority, (2) had been identified as affected parties by agency officials or officials from other industry or user groups, or (3) were widely known to be involved with recreation on federal lands. Table 1.3 provides the names and a brief description of each group we contacted. In addition to contacting these industry and user groups, we reviewed the testimonies of several other affected groups that participated in congressional hearings on the fee demonstration program. 
These included industry groups, such as Kampgrounds of America, the Outdoor Recreation Coalition of America, and the National Tour Association, and user groups, such as the American Hiking Society, the American Motorcyclist Association, and the Grand Canyon Private Boaters Association. We did not independently verify the reliability of the financial or visitation data provided, nor did we trace the data to the systems from which they came. In some cases, data were not available at headquarters and could be collected only at the local site. We conducted our review from September 1997 through November 1998 in accordance with generally accepted government auditing standards. Among the four agencies, the pace and the approach used to implement the recreational fee demonstration program have differed. Some of the agencies had more demonstration sites operational earlier than others. This difference is a result of the agencies’ experiences in charging fees prior to the demonstration. Nonetheless, there have been substantial increases in the amount of fees collected. Each agency estimated that it has generated at least 70 percent more in fee revenues than it did prior to the demonstration program, and the combined estimated revenues for the four agencies have nearly doubled since fiscal year 1996. According to estimates for fiscal year 1998, the Park Service has collected the most revenues under the program, generating about 85 percent of all the revenues collected at demonstration sites by the four agencies. Since getting the authority to begin testing the collection of new and increased fees, each of the agencies has taken different approaches. The agencies’ approaches have largely been influenced by (1) their respective traditions and experiences in collecting fees, (2) the geographic characteristics of the lands they manage, and (3) a recent amendment to the law authorizing the demonstration program that increased incentives to the agencies. 
As a result of these differing approaches, the pace of implementation among the agencies has varied. Fees are not new to the four agencies in the demonstration program. Prior to the program, each of the agencies collected fees from visitors at recreation areas. However, the agencies’ experiences with fees have differed. For example, prior to the demonstration, the Park Service collected entrance fees at about one-third of its park units. The Forest Service and BLM collected user fees at many of their more developed recreation areas—predominantly for camping—and the Fish and Wildlife Service charged a mix of entrance and user fees at about 65 of its sites. Not only did their past experiences with fees differ, but the geographic characteristics of the lands they manage also differed, making fee collection easier in some areas and more difficult in others. For example, many sites in the Park Service have only a few roads that provide access to them. With limited access, collecting fees at an entrance station is very practical. In contrast, many Forest Service, BLM, and Fish and Wildlife Service sites have multiple roads accessing areas they manage. Multiple roads make it difficult for an agency to control access to an area and thus to charge entrance fees. As a result, most Forest Service, BLM, and Fish and Wildlife Service sites have not charged entrance fees but instead charged user fees for specific activities. Figures 2.1 and 2.2 further illustrate the varying characteristics of federal lands. As an example, figure 2.1 shows the relatively few access points to Arches National Park in Utah. This park has only one paved road going in and out of the park. In comparison, figure 2.2 shows the multiple access points that exist along the many roads that go through the White Mountain National Forest in New Hampshire and Maine. 
Many of the traditions in collecting fees have influenced the agencies in both their pace of implementation and the types of fees they charge. Because many sites in the Park Service previously charged entrance fees, the agency was quickly able to bring a large number of sites into the demonstration program by increasing the entrance fees that existed prior to the demonstration. Of the Park Service’s 96 demonstration sites in the first year of the program, 57 increased existing entrance fees. According to officials in several of the agencies, it is generally easier for the agencies to increase existing fees than to implement new fees because (1) the fee-collection infrastructure is already in place and (2) the public is already accustomed to paying a fee. The three other agencies were collecting predominantly new fees at their demonstration sites during fiscal year 1997, the first year of the program, including all 10 of BLM’s sites, 29 of 39 Forest Service sites, and 35 of 61 Fish and Wildlife Service sites. Compared with increases in existing fees, new fees are generally more difficult to implement because the agencies need to (1) develop an infrastructure for collecting fees and (2) inform the public and gain acceptance for the new fees. This infrastructure could include new facilities; new signs; new collection equipment, such as cash registers and safes; and new processes, such as implementing internal control standards for handling cash. Figures 2.3 and 2.4 show examples of new facilities that were constructed or put in place during the demonstration period to collect new fees. Through the first half of fiscal year 1998—that is, as of March 31, 1998—each of the four agencies added sites to the program. Through March 1998, the four agencies had 284 sites in the program, compared with 206 sites through fiscal year 1997. The Park Service added 4 sites through March 1998 and has a total of 100 sites in the program—the maximum allowed by law. 
Each of the other three agencies has added sites to the program, with the majority of new sites coming from BLM—the agency that had the fewest sites in fiscal year 1997. For the second half of fiscal year 1998 and fiscal year 1999, the Forest Service plans to add 38 to 45 sites to the program. Officials from BLM indicated they plan to add 15 to 20 sites to the program. The Fish and Wildlife Service has added six sites during the last half of fiscal year 1998 but does not plan to add any further sites unless the demonstration program is extended beyond fiscal year 1999. Table 2.1 lists the number of fee demonstration sites, by agency, for fiscal year 1997 and through the first half of fiscal year 1998. An amendment to the law authorizing the demonstration program was one of the factors contributing to the addition of sites to the program. The law originally authorized each agency to retain only the fee revenues that exceeded the revenues generated prior to the demonstration. As a result, the agencies could retain only the portion of the fee revenues that were in addition to existing fees. In November 1997, the law was amended to permit the agencies to retain all fee revenues generated by demonstration sites. This amendment created an additional incentive for the agencies to add existing fee sites to the program because they could then retain all of the fee revenues generated at those sites. While the approach and pace of implementation have varied, the four agencies have each been successful in raising substantial new revenues through the fee demonstration program. Before the program was authorized, each of the agencies collected fees at many recreation sites. But since the implementation of the program, each of the agencies has estimated that it has increased its fee collections by more than 70 percent above fiscal year 1996 levels—the last year before the program began. 
On the basis of estimates for fiscal year 1998, the Park Service has brought in significantly more in fee revenues than the other agencies. The estimated revenues of the Park Service account for about 85 percent of the revenues generated by the four agencies at demonstration sites. As shown in figure 2.5, as a result of the demonstration program, the four agencies have nearly doubled their total combined fee collections since fiscal year 1996, according to the agencies' estimates. In addition, each of the four agencies estimated that its fee collections increased under the demonstration by over 70 percent above fiscal year 1996 levels. Revenues under the fee demonstration program have come from a mix of new fees and increases to fees that existed before the program was authorized. In fiscal year 1996, the last year before the demonstration program was implemented, the four agencies collected a total of about $93.3 million in fees from visitors. In fiscal year 1997, the four agencies generated a total of about $144.6 million in fee revenues, of which about $123.8 million was attributed to fees at demonstration sites. For fiscal year 1998, the agencies estimate that total fee revenues will increase to about $179.3 million, with about $159.8 million in revenues from demonstration sites. (App. I contains information on each agency's gross fee revenues for fiscal years 1996 through 1998.) Three of the four agencies have not developed formal estimates for fiscal year 1999. The one agency with fiscal year 1999 estimates—the Park Service—predicts only modest increases in revenues since the agency has already implemented the maximum number of demonstration sites authorized under the program. However, officials at each of the other three agencies estimated that as more sites become part of the demonstration program, revenues will increase. 
Each agency collected fees prior to the demonstration program, and as sites with existing fees were converted to demonstration sites, much of the agencies' fee revenue has now been included in the demonstration. As a result, much of the demonstration fee revenue collected in fiscal year 1997 and beyond comes from sites where fees were collected prior to the demonstration. Of the four agencies, the Park Service has generated about 85 percent of the $159.8 million in total estimated fee demonstration revenues for fiscal year 1998. The agency with the second largest revenues is the Forest Service, which estimated that it generated about 11 percent of the total fee demonstration revenues. The relative size of each agency's revenues compared with the total revenues of the four-agency program is depicted in figure 2.6. The substantially higher revenues of the Park Service are mostly due to the agency's large number of high-revenue sites. For fiscal year 1997, 28 Park Service sites each generated more than $1 million in fee revenues, and 2 of these sites—the Grand Canyon and Yosemite National Parks—each generated more than $10 million. Nearly all of these 28 sites attract high numbers of visitors and had histories of charging entrance fees prior to the demonstration program. In addition to the high-revenue sites of the Park Service, the Forest Service has two sites with revenues above $1 million. In contrast, in fiscal year 1997, the Fish and Wildlife Service and BLM did not have any sites with revenues above $1 million. During the first year and a half of the recreational fee demonstration program, overall expenditures at individual demonstration sites have been limited in comparison to revenues collected. So far, only about 24 percent of the revenues collected have been expended. Most of the expenditures have gone toward repair and maintenance, the costs of collection, and routine operations at the respective sites. 
At the sites we visited, we found that the agencies' expenditures appeared to be consistent with the purposes authorized in the legislation establishing the program. The amount collected varied considerably among the agencies and among individual sites within each agency, more than doubling operating budgets at some sites while providing little revenue at others. As a result, assuming that appropriations remain stable and that the program is extended beyond fiscal year 1999, many sites in the program will, in time, have sufficient revenues to address all of their needs—regardless of their relative priority within the agency. At the same time, other sites within an agency may not have enough to meet their most critical needs. Over the long term, this condition raises questions about the appropriateness of the high-revenue sites retaining 80 percent or more of their revenues as currently required by law. The four agencies have spent about 24 percent of the revenues available under the fee demonstration program through March 1998. Under the program's original authority, not all of the revenues generated during fiscal year 1997 were available for expenditure. As a result, of the $123.8 million generated at demonstration sites in fiscal year 1997, only $55 million was available to the agencies. For fiscal year 1998, the Congress amended the law authorizing the program to permit the agencies to retain all of the fee revenues generated under the program. As a result, the agencies have the full amount of the fee revenues generated at their demonstration sites in fiscal year 1998 available for expenditure. Through the first half of fiscal year 1998, the four agencies had generated about $36 million in fee revenues. Thus, the total amount available to the agencies for expenditure under the demonstration program through March 1998 was about $91 million. 
On a national basis, the four agencies estimated that of the $91 million available for expenditure through March 1998, about $22 million had been spent. Under the demonstration program’s current authorization, the participating agencies have until the end of fiscal year 2004 to spend the revenues raised by the program. Table 3.1 provides information comparing the fee revenues available for expenditure with actual expenditures through March 1998 for each of the four agencies. According to the managers in the participating agencies, the reasons that only 24 percent of the revenues available have been spent included (1) the approval of the authorizing legislation occurring in mid fiscal year 1996, (2) the delays in setting up accounting systems to track collections and return the funds to the sites, (3) the time needed to set up internal processes for headquarters’ approval of site expenditure plans, (4) the time needed to plan and implement expenditure projects, (5) the need to use funds during fair weather construction seasons, and (6) the fiscal year 1997 requirement for expenditures to exceed the base year amount before funds could be spent on the collecting site. The legislation authorizing the fee demonstration program permits the agencies to fund a broad array of activities from fee revenues, including the cost of fee collection, health and safety items, interpretation and signage, habitat enhancement, facility enhancement, resource preservation, annual operations, and law enforcement. The legislative history of the program further emphasized that fees were to be a new source of revenues to address backlogged repairs and maintenance. The law also states that at the discretion of agency heads, 20 percent of the fee revenues may be set aside for agencywide use for the same purposes. Of the $21.6 million in expenditures by the four participating agencies as of March 31, 1998, most have been for repairs and maintenance, the cost of collection, and operations. 
Figure 3.1 displays the relative size of three agencies’ expenditures by the categories authorized by the program’s legislation. As of March 31, 1998, the Park Service’s actual expenditures were mainly for the costs of repairs and maintenance, the cost of fee collection, resource preservation, and annual operations. Expenditures at the Forest Service’s demonstration sites were predominantly for annual operations, the cost of fee collection, repairs and maintenance, and interpretation. At the Fish and Wildlife Service’s sites, the cost of collection, repairs and maintenance, health and safety, and facility enhancement were the top expenditure categories. BLM did not have a national breakdown available. At the sites we visited, we found that the agencies’ expenditures appeared to be consistent with the purposes authorized in the legislation establishing the program. The top expenditures among the 15 sites visited were for the cost of fee collection, followed by annual operations and repairs and maintenance. Agency officials said that cost of fee collection is among the top categories of expenditure because of the necessary start-up costs for the demonstration program. The program’s authorization allows the agencies to spend their revenues on the actual cost of collection rather than funding the activity from other sources, such as appropriated funds. Since few expenditures had been made overall as of March 31, 1998, agency officials said the cost of collection makes up a disproportionately large part of the actual expenditures through that date. Each of the four agencies has developed its own approach for using the fees collected through the demonstration program. Each has exercised a different amount of direction and oversight over its demonstration sites’ expenditures. 
As a result, the agencies’ priorities and criteria for spending the fee revenues, their decisions on spending the 20 percent of the revenues not required to remain with the collecting sites, and their procedures for approving projects funded with fee revenues vary considerably. The following sections provide information about each agency’s overall expenditures. More detailed information on each agency’s expenditures for legislatively authorized purposes at the sites we visited appears in appendixes II through V. The Park Service has developed the most detailed criteria for spending fee revenues. After using the fees to cover the cost of their collection, the Park Service has given the highest priority to reducing its repair and maintenance backlog. The Park Service has required both headquarters and regional reviews of the demonstration site managers’ expenditure proposals. In addition, an Interior Department-level work group, including Park Service representatives, was commissioned by the Assistant Secretary for Policy, Management, and Budget to review the proposals. The Park Service’s headquarters had intended to have regional offices approve the expenditure of fee demonstration funds but found, after reviewing region-approved projects, that some did not meet the established criteria. The Park Service is addressing its spending priorities with both the 80 percent of the fee revenues that stay at the collecting sites and with the 20 percent of the funds that are put into an agencywide account for distribution primarily to nondemonstration sites. The Park Service spent $12.8 million on projects at its demonstration sites through March 31, 1998. This amounts to about 17 percent of its $75.2 million in fee revenues available for park use through that date. 
Park Service officials said that the amount of funds expended was small because the amendment to the authorizing legislation in November 1997 made significantly more revenues available to the agencies for expenditure than they had expected to be allowed to spend. Furthermore, the Park Service recreational fee program coordinator and the Park Service comptroller's staff reported that because accounts and allocation procedures took time to establish, the first release of funds to the collecting sites for expenditure came in mid fiscal year 1997. Another factor affecting the start-up of the Park Service's expenditures under the demonstration has been the time needed for the extensive reviews of proposed projects. On a national basis, the Park Service's demonstration sites' expenditures were in the categories displayed in figure 3.2. The Forest Service permits demonstration sites to retain 95 percent of their fee collections and to use them as allowed by the program's authorizing legislation—with the remaining 5 percent to be spent at the discretion of each site's regional office. Accordingly, the Forest Service has instructed its demonstration sites to use their fee revenues for any of the broad purposes set forth in the legislation. At the same time, the agency has emphasized the need to use the revenues in ways that visibly benefit visitors. Forest Service headquarters officials said the determination of the program's expenditures is driven by the project managers at the demonstration sites. The ranger districts and forests involved develop lists of projects and set priorities among them. The fee demonstration sites have typically sought public input on what projects should be done, along with meeting other requirements. The Forest Service began to use the funds raised by the recreational fees at 40 demonstration sites in fiscal year 1997 to address the deferred maintenance backlog, visitor services, and maintenance enhancements. 
Of the $13 million in demonstration fee revenues through March 31, 1998, the demonstration sites have expended $7.8 million, or about 60 percent, according to data collected by Forest Service headquarters recreation staff. Headquarters officials noted that in fiscal year 1997, most sites had not been able to spend all the revenues they collected because fee collection started in the middle of the fiscal year, time was needed to make the fee deposits available to the sites for expenditure, and time was needed to plan and contract for the projects to be funded with fee revenues. On a national basis, as of March 31, 1998, the Forest Service’s demonstration sites have expended the greatest amount of fee revenues in the following categories: operations, the cost of fee collection, repairs and maintenance, and interpretation and signage (see fig. 3.3). Details on the expenditures at the Forest Service’s sites we visited are in appendix III. The Fish and Wildlife Service decided to allow its demonstration sites to use their fee revenues to maintain or improve recreation opportunities and enhance visitors’ experiences. Fish and Wildlife Service headquarters reviews the demonstration sites’ expenditure of the funds after the fact, using the agency’s overall criteria and specific guidance. The Fish and Wildlife Service has allowed its regional directors to determine where to use the 20 percent of the fee revenues that does not have to be spent at the collecting sites. This has resulted in collecting sites in four of the Fish and Wildlife Service’s seven regions being permitted to retain all of the fee revenues they generate. Directors of the three other regions have decided to require that 20 percent of the fee revenues from their demonstration sites be submitted to a central account for use as seed money to initiate fee programs at other sites, for improvements to visitor services, or for backlogged maintenance projects at other sites in the region. 
In the first year and a half of the program, the Fish and Wildlife Service’s demonstration sites have spent about one-quarter of the fee revenues they generated. Of the $2 million in fee revenues through March 31, 1998, the demonstration sites had expended $500,949, or 25 percent, according to data provided by Fish and Wildlife Service headquarters staff. According to the Fish and Wildlife Service, of the $500,949 spent nationally on projects during the first year and a half of the program, 71 percent was for the cost of collection, including start-up costs, with the remainder spent on repairs and maintenance, health and safety, facility enhancement, and interpretation projects (see fig. 3.4). Details on expenditures at the Fish and Wildlife Service sites we visited are included in appendix IV of this report. BLM headquarters decided to allow demonstration sites to spend funds for any of the purposes in the authorizing legislation and permitted the following uses for the demonstration funds: operations, maintenance, and improvements and interpretation to enhance recreational opportunities and visitors’ experiences. Site managers and their state offices decide on expenditures but are required to report the expenditures to the public and headquarters after each fiscal year. BLM headquarters decided to allow 100 percent of the revenues to be retained at the collecting sites, rather than requiring 20 percent of it to be submitted to a central fund for distribution. BLM’s demonstration sites have expended $572,034, or 56 percent, of the $1.0 million in fee revenues they collected through March 31, 1998, according to data provided by BLM headquarters staff. According to BLM headquarters staff, no breakdown by category of the actual expenditures as of March 31, 1998, was available for all of the agency’s sites. 
BLM’s fee demonstration program has expanded significantly in fiscal year 1998, from 10 active sites in fiscal year 1997 to a total of 63 approved sites as of March 31. Not all 53 new sites had begun collections or expenditures as of March 31, however. Details on expenditures at the BLM sites we visited are in appendix V. For many sites in the demonstration program—particularly the Park Service’s sites—the increased fee revenues equal 20 percent or more of the sites’ annual operating budgets. For the purposes of this report, we refer to these sites as high-revenue sites. At sites with backlogs of needs for maintenance, resource preservation and protection, and visitor services, this level of additional revenues will be sufficient to eliminate the backlogs over several years—assuming the program is extended and that existing appropriations remain stable. And, at sites with small or no backlogs, the additional revenues will support further site development and enhancement. However, the agencies selected demonstration sites not necessarily because of their extent of unmet needs for repairs, maintenance, or resource preservation, but rather because of their potential to generate fee revenues. At sites outside the demonstration program or sites that do not collect much fee revenues, the backlog of needs may remain or further development of the site may not occur. As a result, some of the agencies’ highest-priority needs may not be addressed. This potential for inequity among sites raises questions about the desirability of the current legislative requirement that at least 80 percent of the fee revenues be expended at the collecting site. Under the recreational fee demonstration program, 44 park units included in the Park Service’s 100 demonstration sites retained fees that exceeded 20 percent of their annual operating budgets in fiscal year 1998. 
Of these 44 sites, 13 retained fees exceeding 50 percent of their annual operating budgets, and 4 retained fees equaling or exceeding their operating budgets. For example, Arches National Park expects to supplement its fiscal year 1998 operating budget of $0.9 million with fees of $1.4 million—an effective increase of 160 percent in funds available on site. Castillo de San Marcos National Monument is expected to retain $1.3 million in fees, which is 110 percent of its operating budget of $1.2 million. Bryce Canyon National Park expected to retain $2.3 million in fees, which is 110 percent of its operating budget. Such substantial increases in the financial resources available to these sites should improve their ability to address their outstanding needs. Table 3.2 provides data on the fees retained by the 44 parks. Of the seven Park Service sites we visited during our review, four—Zion National Park, Timpanogos Cave National Monument, Carlsbad Caverns National Park, and Shenandoah National Park—were among those with fee revenues exceeding 20 percent of their operating budgets. Except for Timpanogos Cave, each of these sites had a list of backlogged repair and maintenance needs to be addressed. Managers at each of the three sites told us that the additional fee revenues would allow them to address these needs in a relatively short time. For example, Zion National Park officials told us that the park expected to receive so much new fee revenue in fiscal year 1998—about $4.5 million, a doubling of its operating budget—that they might have difficulty preparing and implementing enough projects to use the available funds if a major new $20 million alternative transportation system was not begun in the park. Without this major project, they probably would not be able to spend all of the money available to them in ways that were consistent with the demonstration program’s objectives, they said. 
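The fee-to-budget percentages cited above can be recomputed directly from the report's rounded dollar figures. The following sketch is purely illustrative (it is not part of the report's methodology); the small differences from the stated percentages reflect the report's rounding to the nearest 10 percent:

```python
# Recompute the fee-to-operating-budget ratios cited in the text,
# using the report's rounded figures (dollar amounts in millions).
sites = {
    "Arches National Park": (1.4, 0.9),    # (fees retained, operating budget)
    "Castillo de San Marcos": (1.3, 1.2),
}
ratios = {name: fees / budget * 100 for name, (fees, budget) in sites.items()}
for name, pct in sorted(ratios.items()):
    print(f"{name}: fees equal about {pct:.0f} percent of the operating budget")
```

The computed values (about 156 percent for Arches and 108 percent for Castillo de San Marcos) round to the 160 and 110 percent figures stated in the text.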
The new transportation system is being initiated to eliminate car traffic from the most popular area of the park. Similarly, managers at Shenandoah National Park told us that the fee demonstration program revenues they expect to receive will be very useful in addressing unmet needs. The $2.9 million in revenues expected in fiscal year 1998 equals about 32 percent of the park's operating budget. If the park continues to receive this level of fee revenues, the park superintendent said it should be able to eliminate its estimated $15 million repair and maintenance backlog in relatively few years. Unlike Zion and Shenandoah, Timpanogos Cave National Monument in Utah is a smaller park and does not have a backlog of repair and maintenance needs. According to managers, appropriated funds have been sufficient to keep up with the monument's repairs and maintenance. Consequently, the managers plan to use the fee revenues they retain—$318,000 in fiscal year 1998, or about 61 percent of the monument's annual operating budget—to enhance visitor services, such as by providing more cave tours. Park Service and Interior officials have recognized that certain sites with high fee revenues and small or nonexistent backlogs of needs will have difficulty spending their new revenues for projects that meet the demonstration program's criteria. For example, the Comptroller of the Park Service said that some sites would run out of backlogged repair and maintenance needs to address with their fee revenues. In his view, an exemption from the requirement to retain 80 percent of the collected fees at the collecting sites and the authority to transfer more than 20 percent to a central fund for distribution to other sites would be among the options to consider. 
In addition, the Assistant Secretary of the Interior for Policy, Management, and Budget has testified that setting aside some of the fee revenues for broader agency priorities is important and has cautioned that permanent legislation giving collecting sites a high percentage of the revenues could “create undesirable inequities” within an agency. Similarly, some managers at higher-revenue sites we visited supported more flexibility in splitting revenues between high-revenue sites and other locations that have little or no fee revenues or that have large maintenance needs or both. Some sites participating in the demonstration program and many nonparticipating sites have repair and maintenance backlogs or health and safety needs but little or no fee revenues to address them. Under the demonstration program's current 80-20 percent split of the revenues, the Park Service's park units stand to receive very uneven shares of the program's $136 million in estimated fee revenues for fiscal year 1998: Of the 100 fee-collecting sites (which actually include 116 park units), the top 44 units in terms of revenues are expected to retain $93 million, or 68 percent of the total, while the remaining collecting sites are expected to retain $13 million, or 10 percent of the total, leaving $30 million, or 22 percent of the total, for 260 nonparticipating sites. These sites include heavily visited locations like the Statue of Liberty National Monument in New York and some of the less visited sites such as Hopewell Furnace National Historic Site in Pennsylvania. At the other three agencies, particularly the Forest Service, there are also many sites that have as high a level of fee revenues as that realized by many of the Park Service's sites. At least 33 of the Forest Service's 39 demonstration sites operating in fiscal year 1997 had fee revenues over 20 percent of their estimated operating budgets. 
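The uneven distribution described above follows directly from the report's figures. As a purely illustrative check (not part of the report's methodology), the percentage shares of the Park Service's estimated fiscal year 1998 fee revenues can be recomputed as follows:

```python
# Shares of the Park Service's estimated $136 million in fiscal year 1998
# fee revenues, using the report's rounded figures (millions of dollars).
total = 136.0
shares = {
    "top 44 collecting units": 93.0,
    "other collecting sites": 13.0,
    "central fund (260 nonparticipating sites)": 30.0,
}
assert sum(shares.values()) == total  # the three shares exhaust the total
percents = {name: amount / total * 100 for name, amount in shares.items()}
for name, pct in percents.items():
    print(f"{name}: ${shares[name]:.0f} million, or {pct:.0f} percent")
```

The computed shares (about 68, 10, and 22 percent) match the percentages stated in the text.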
Under the agency’s policy, the demonstration sites are retaining 95 percent of the fees for their own use, and the remaining 5 percent is spent at the discretion of the sites’ regional offices. As shown in table 3.3, for fiscal year 1997, of these 33 sites, 21 had fee revenues exceeding 50 percent of their operating budgets, and 8 of the sites had fee revenues equaling or exceeding their operating budgets. Data for the first half of fiscal year 1998 indicate that an even higher number of the collecting sites will generate revenues amounting to 20 percent or more of their operating budgets by year end. The Forest Service’s high-revenue sites include the Salt and Verde Rivers Recreation Complex in Tonto National Forest, Arizona, where fee collections in fiscal year 1997 were 279 percent of the fiscal year 1997 operating budget. For fiscal year 1998, the complex expects to collect $2.5 million, or about 435 percent of its operating budget, in fees. Similarly, at Mount St. Helens National Volcanic Monument in the Gifford Pinchot National Forest in Washington, $2 million in fees was collected in fiscal year 1997, which was 94 percent of the operating budget. For fiscal year 1998, about 102 percent of the monument’s operating budget is expected to be collected in fees. A two- to four-fold increase in funds available compared with the sites’ annual operating budgets amounts to a tremendous boost in available resources. While absorbing this level of additional funding for the needs of these sites is possible, the extent of sites’ unmet needs was not the principal factor in selecting them for participation in the program. Under these circumstances, it is likely that other higher-priority needs within the agency will go unaddressed at sites within the national forest system that do not have a high level of revenues or that are not participating in the demonstration program. 
Accordingly, keeping all of the revenues at the demonstration sites that collect substantial amounts of fees may not be in the best interests of the agency as a whole. Data on the revenues and operating budgets for the Fish and Wildlife Service’s and BLM’s demonstration sites were more limited. As a result, we did not do analyses that were comparable to those we did on the Park Service’s and Forest Service’s sites. However, since visitation at the Fish and Wildlife Service’s and BLM’s sites is generally less than at park or forest sites, it is likely that these agencies do not have a high proportion of high-revenue sites. Among the sites we visited, one of the Fish and Wildlife Service’s sites and one of BLM’s sites realized fee revenues through the demonstration program that were high in relation to their operating budgets. BLM has allowed its demonstration sites to retain 100 percent of the fee revenues they collect to address their own needs. However, it is likely that only a few sites have or will generate high levels of revenues relative to their operating budgets, according to BLM headquarters staff. We could not determine specifically how many BLM demonstration sites have or will generate fee revenues equal to 20 percent or more of their operating budgets because this information was not available at BLM headquarters and only 10 sites were operational in fiscal 1997, with 53 more approved as of March 31, 1998, that were to begin collecting fees during fiscal year 1998. Among all of BLM’s demonstration sites, the Red Rock Canyon National Conservation Area that we visited in Nevada is the highest revenue site, according to BLM staff. At Red Rock, the annual operating budget is estimated to be $1.2 million, while estimated gross revenues from the demonstration program for fiscal year 1998 are $0.9 million, or 75 percent of the operating budget. 
Another of BLM’s demonstration sites with relatively high revenues is the Lower Deschutes Wild and Scenic River in central Oregon, where boater use and campsite fees generated $326,088 in fiscal year 1997, which is 53 percent of the recreation site’s annual operating budget of $617,000. As with BLM, data on how many of the Fish and Wildlife Service’s demonstration sites are generating fee revenues amounting to 20 percent or more of their operating budgets were limited. However, agency staff have reported few sites generating revenues that might amount to 20 percent or more of their operating budgets. Of the three Fish and Wildlife Service sites we visited, only one—Chincoteague National Wildlife Refuge in Virginia—had relatively high recreational fee revenues. There, about $300,000 was expected in fee revenues in fiscal year 1998, or about 17 percent of the refuge’s annual operating budget of $1.8 million. Managers at the refuge believed that if this level of revenue continues and appropriations remain stable, the entire repair and maintenance backlog could be addressed with the program’s revenues. The fee demonstration program has created a significant new revenue source, particularly for the Park Service and the Forest Service, during a period of tight budgets. However, at high-revenue sites, there is no assurance that the needs being addressed are among those having the highest priority within an agency—raising questions about the desirability of the legislative requirement that at least 80 percent of the revenues remain at the collecting site. Using the revenues created by the fee demonstration program on projects that may not have the highest priority is inefficient and restricts the agencies from maximizing the potential benefits of the program. 
While giving recreation site managers a significant financial incentive to establish and operate fee-collection programs, the current legislation may not provide the agencies with enough flexibility to address high-priority needs outside of high-revenue sites. Factors such as the benefit to visitors, the size of a site’s resource and infrastructure needs, the site’s fee revenues, and the most pressing needs of the agency as a whole are important to consider in deciding where to spend the funds collected. Even if the demonstration program is made permanent and all recreation sites are permitted to collect fees, inequities between sites will continue. As the Congress decides on the future of the fee demonstration program, it may wish to consider whether to modify the current requirement that at least 80 percent of all fee revenues remain in the units generating these revenues. Permitting some further flexibility in where fee revenues could be spent, particularly the fees from high-revenue sites, would provide greater opportunities to address the highest-priority needs of the agencies. However, any change to the 80-percent requirement would have to be balanced against the need to maintain incentives at fee-collecting units and to maintain the support of the visitors. Two agencies within the Department of the Interior raised concerns about this chapter. In general, the Park Service agreed with the findings of the report. However, the Park Service commented on the abilities of some park units to address their backlogged repair and maintenance needs through fee revenues. Specifically, the Park Service said that our portrayal of this issue paints a false picture as the report does not address backlogged resource management needs in addition to repair and maintenance needs. We disagree with the Park Service’s comment on this point. 
We acknowledge that regardless of what happens to the repair and maintenance backlog, there may continue to be needs related to the natural and cultural resources at the parks we reviewed and at other sites. However, early in its implementation of the demonstration program, the Park Service directed its demonstration sites to focus program expenditures on addressing backlogged repair and maintenance items. Because of this Park Service emphasis, we sought to determine to what extent the new fee revenues would be able to address these items. We found that managers at several of the parks we visited, such as Zion and Shenandoah, indicated that they could resolve their existing backlog of repair and maintenance needs in a few years (5 years or less) through revenues from the demonstration program. In our view, this belief that individual park units may be able to eliminate their repair and maintenance backlog is not consistent with the Park Service’s past portrayal of a large repair and maintenance backlog, especially since the backlog, and not resource needs, is the agency’s stated focus for new revenues. The Fish and Wildlife Service disagreed with what it viewed as “an inference in the draft report that the practice of retaining 80 percent of the revenues at the station where fees are collected may not be a good practice.” In fact, the 80-percent requirement is appropriate in some cases; however, providing the agencies with greater flexibility may enable them to better address their highest-priority needs. The matter for congressional consideration that we have offered on providing additional flexibility is directed primarily at high-revenue sites. 
Furthermore, our comments on this issue are consistent with the testimony of the Assistant Secretary of the Interior for Policy, Management, and Budget, who said that setting aside some of the fee revenues for broader agency priorities is important and cautioned that giving the collecting sites a high percentage of the revenues could create undesirable inequities within an agency. The Department of Agriculture’s Forest Service agreed with our matter for congressional consideration that the 80-percent requirement be changed to permit greater flexibility. The agency noted that the emphasis on this point should remain on high-revenue sites and that any change to the 80-percent requirement would have to be balanced against the need to maintain incentives at fee-collecting units and to maintain the support of the visitors. Each of the agencies can point to a number of success stories and positive impacts that the fee demonstration program has had so far. Among the four agencies, a number of examples exist in which a new or innovative approach to collecting fees has resulted in greater convenience for the visitors and has improved efficiency for the agency. In addition, several of the agencies have tried innovative approaches to pricing that have resulted in greater equity in fees. However, some agencies could do more in this area. For example, while the Park Service has been innovative in looking for new ways to collect fees, it has been reluctant to experiment with different pricing approaches. As a result, the agency has not taken full advantage of the opportunity presented by the demonstration program. Greater innovation, including more businesslike practices such as peak-period pricing, could help address visitors’ and resource management needs. In addition, although the Congress envisioned that the agencies would work with one another in implementing this program, the coordination and the cooperation among the agencies have, on the whole, been erratic. 
More effective coordination and cooperation among the agencies would better serve visitors by making the payment of fees more convenient and equitable and, at the same time, reduce visitors’ confusion about similar or multiple fees being charged at nearby or adjacent federal recreation sites. One of the key legislative objectives of the demonstration program is for the agencies to be creative and innovative in implementing their fee programs. The program offers an opportunity to try new things and to learn lessons on what worked well and what did not. Among the four agencies, numerous examples can be found of innovation in developing new methods for collecting fees. In addition, the Forest Service and BLM have also experimented with new pricing structures that have resulted in greater equity in fees. However, the Park Service and the Fish and Wildlife Service have generally maintained the traditional pricing practices they used prior to the demonstration program. Accordingly, the Park Service and the Fish and Wildlife Service can do more in this area. Furthermore, greater experimentation would better meet the objective of the demonstration program as agencies could further their understanding of ways to make fees more convenient, equitable, and potentially useful as tools to influence visitation patterns and to protect resources. Examples of innovations in fee programs are differential pricing and vendor sales, which have been widely used by commercial recreation enterprises for many years. For instance, golf courses and ski areas frequently charge higher prices on the weekend than they do midweek, and amusement parks often sell entrance passes through many vendors. These concepts had rarely, if ever, been part of the four agencies’ fee programs prior to the demonstration. The Park Service, the Forest Service, and BLM are trying new ways of collecting fees that may prove more convenient for visitors. 
For example, the Park Service is now using automated fee-collecting machines at over 30 of its demonstration sites. These machines are similar to automated teller machines (ATMs): Visitors can pay their fees with cash or credit cards, and the machine issues receipts showing the fees were paid. For example, Grand Canyon National Park sells entrance passes at machines located in several areas outside the park, including in the towns of Flagstaff and Williams, Arizona, which are both along frequently used routes to the park and more than 50 miles from the park’s south entrance. The park has dedicated one of the four lanes at its entrance station for visitors who have already purchased their entrance passes. Thus, visitors who use the machines outside the park can avoid lines of cars waiting to pay fees at the park’s entrance station. At other demonstration sites within the Park Service, visitors can use automated fee-collection machines to pay for entrance fees, annual passes, or boat launch fees. As part of the demonstration program, the Forest Service is looking for ways to make paying fees more convenient for the visitor and more efficient for the agency. In some instances, paying fees at a location inside a forest may not always be convenient for visitors—particularly if that location is not near where visitors enter the forest, according to a Forest Service headquarters official. Some sites have experimented with having businesses and other groups outside of the forest collect entrance and user fees from visitors before they come into the forest. The vendors of the entrance and user permits are frequently small businesses, such as gas stations, grocery stores, or fish and tackle stores, that are located near the forest. For example, 350 vendors sell passes to visitors for recreation on any of four national forests in southern California. 
By having vendors sell entrance and user permits, a forest can increase the number of locations where visitors can pay fees and can thereby make paying fees more convenient. At Paria Canyon-Coyote Buttes in Arizona, one of BLM’s demonstration sites, the agency is experimenting with selling hiking and camping permits via the Internet. Permits are required for overnight camping by up to a total of 20 persons per day in the Paria Canyon area and for hiking by up to a total of 20 persons per day in the Coyote Buttes area. BLM, working in cooperation with Northern Arizona University and the Arizona Strip Interpretive Association, has developed a website that allows visitors to obtain information on the area, check on the availability of permits for future dates, make reservations, fill out and submit detailed application forms, or print out the application forms for mailing. In addition, visitors can pay for permits over the Internet using credit cards, although the agency is still in the process of developing the security protocols that are needed to properly protect the transactions. Visitors can also fax credit card payments or send payments through the mail. Besides innovating and experimenting to make paying fees more convenient for visitors, two of the agencies are also experimenting with various pricing strategies at demonstration sites. Pricing strategies being tried by the Forest Service and BLM are focused on charging fees that vary based on the extent of use or on whether the visit is made during a peak period—such as a weekend—or during an off-peak period. This concept is generally referred to as differential pricing and has resulted in greater equity in pricing at the sites where it has been tried. For example, in Utah, Uinta National Forest and Wasatch-Cache National Forest have both experimented with differential pricing. 
At American Fork Canyon/Alpine Loop Recreation Area, within the Uinta National Forest, the forest began charging a new entrance fee under the demonstration program of $3 per car for a 3-day visit and $10 for a 2-week visit. Similarly, at the Mirror Lake area within the Wasatch-Cache National Forest, visitors pay a new entrance fee of either $3 per vehicle for a day or $10 per vehicle for a week. Thus visitors to both the Uinta and Wasatch-Cache National Forests pay fees that vary with the extent of use. Fees that vary with use are more equitable than a single fee for all visitors regardless of use, as has been the traditional practice at many federal recreation sites. The Forest Service and BLM have also experimented with charging fees that differ based on peak and off-peak periods. For example, at Tonto National Forest in Arizona, the entrance fees vary depending on the day of the week. The forest sells two annual passes for day use, including use of the boat launch facilities, at six lakes within the forest. One pass sells for $90 per year and is valid 7 days a week. The other pass sells for $60 per year and is valid only Monday through Thursday, the forest’s off-peak period. Another example of peak pricing is at the Lower Deschutes Wild and Scenic River in Oregon, one of BLM’s sites, where as part of the demonstration program, the agency charges a camping fee of $10 per site per day on weekends in the summer and a $5 per site per day fee midweek and during weekends in the off-season. By charging lower fees for off-peak use, these agencies are using fees as a management tool to encourage greater use when sites have fewer visitors. This practice can help to mitigate the impact of users on resources during what would normally be the sites’ busiest periods. 
While the Park Service has tried new methods for collecting fees, opportunities remain for the agency to further the goals of the demonstration program by being more innovative and experimental in its pricing strategies. While the agency certainly does not need to retool its program or use differential pricing arrangements at each of its sites, the Park Service could build on what it has already done. Specifically, it could look for ways, where appropriate, to provide greater equity in fees to give visitors incentives to use parks during less busy periods, thus reducing demand on park facilities and resources during the busiest times. Because of the large numbers of visitors and the large amount of fee revenues generated, the Park Service has an opportunity to improve its pricing strategies. For the types of areas managed by the Park Service, entrance fees have worked well for the agency and are convenient for most visitors to pay. However, visitors to units of the national park system having entrance fees (about one-third of the 376 units) generally pay the same fee whether they are visiting during a peak period, such as a weekend in the summer, or an off-peak period, such as midweek during the winter, and whether they are staying for several hours or several days. A more innovative fee system would make fees more equitable for visitors and may change visitation patterns somewhat to enhance economic efficiency and reduce overcrowding and its effects on parks’ resources. For example, managers at several of the parks we visited, including Assateague Island National Seashore and Shenandoah National Park, discussed how during peak visitation periods, such as summer weekends, long lines of cars frequently form at entrance stations, with visitors waiting to pay the fee to enter the parks. The lines are an inconvenience to the visitors and the emissions from idling cars could affect the sites’ resources. 
By experimenting with pricing structures that have higher fees for peak periods and lower fees for off-peak periods, sites might be able to shift more visitation away from high-use periods. Our past work has found that increased visitation has eroded many parks’ ability to keep up with visitors’ and resource needs. Innovative pricing structures that result in less crowding in popular areas would also improve the recreational experience of many park visitors. Furthermore, according to the four agencies, reducing visitation during peak periods can lower the costs of operating recreation sites by reducing (1) the staff needed to operate a site, (2) the size of facilities, (3) the need for maintenance and future capital investments, and (4) the extent of damage to a site’s resources. As we already pointed out, the private sector uses such pricing strategies as a matter of routine—including when the private sector operates within parks. The private sector concessioner that operates the lodging facilities in Yosemite National Park in California, for example, employs peak pricing practices. Lodging rates are higher during the peak summer months and lower during the months when the park attracts fewer visitors. Furthermore, most parks with entrance fees charge the same fee regardless of the extent of use. For example, Zion and Olympic National Parks both charge an entrance fee of $10 per vehicle for a visit of up to 1 week. This fee is the same whether visitors are enjoying these areas for several hours, a day, several days, or the full week. This one-size-fits-all approach is convenient for the agency but may not be equitable or efficient because visitors staying longer enjoy more benefits from a site. At one park, the lack of an alternative to the 7-day entrance fee has contributed to the formation of a “black market” in entrance passes. 
According to recent media reports, some visitors to Yellowstone National Park are reselling their $20 1-week entrance passes—after staying only a few days or less at the park—to other visitors planning to enter the park. Since the passes are valid for 7 days, a family could sell its pass to another carload of park visitors for perhaps half price and reduce the cost of visiting the park for both parties. Even though the entrance pass is nontransferable and selling a pass is illegal and subject to a $100 fine, the park does not have an estimate of the extent of the situation. The park has not experimented with an entrance fee for visits of less than 7 days, a pricing option that would be likely to address the illegal resale of passes. Park Service headquarters officials indicated that the agency had not tried differential pricing at demonstration sites because, in their view, it (1) would be difficult to conduct sufficient enforcement activities to ensure compliance, (2) would increase the costs of fee collection, and (3) may result in a decrease in fee revenues. While we acknowledge that it may be simpler to charge only one rate to visitors at demonstration sites, the agencies that are currently using differential pricing—the Forest Service and BLM—have been able to address the concerns raised by the Park Service. Given the potential benefits of differential pricing to both the agency and the visitors, an opportunity exists for the Park Service to experiment with such pricing at a small sample of demonstration sites. The four agencies have implemented a number of multiple-agency fee demonstration projects. Although these efforts are few in comparison to the more than 200 fee projects that have begun so far, they demonstrate that multiple agencies with somewhat varying missions can form successful partnerships when conditions, such as geographical proximity, present the opportunity. 
While we found several examples of successful, multiple-agency fee demonstration projects, more could be done. At several of the sites we visited, opportunities existed for improving the cooperation and coordination among the agencies that would increase the quality of service provided to visitors. The legislative history of the fee demonstration program includes an emphasis on the participating agencies’ working together to minimize or eliminate confusion for visitors where multiple fees could be charged by recreation sites in the same area. There are several areas that are now working together to accomplish this goal. For example, a joint project was developed in 1997 at the American Fork Canyon/Alpine Loop Recreation Area in Utah between the Forest Service’s Uinta National Forest and the Park Service’s Timpanogos Cave National Monument. The monument is surrounded by Forest Service land, and the same roads provide access to both areas. Because of this configuration, the agencies generally share the same visitors and charge one fee for entrance to both areas. The sites also have similar public service and resource management goals. Fee-collection responsibilities are shared between the two agencies, and expenditures are decided upon by representatives from both agencies as well as from two other partners in the project—the State of Utah Department of Transportation and the county government. Figure 4.1 shows the partnership’s entrance station for the area. Since 1997, fee revenues from the project have paid for the rehabilitation of several bridges in popular picnic areas (see fig. 4.2). Future fee revenues will fund the staffing and maintenance of entrance stations where fees are collected; the repair and maintenance of camping areas, trails, and parking areas; additional law enforcement services; and resource management projects. 
Agencies—federal and nonfederal—have worked together to improve visitor services and reduce visitor confusion as part of the fee demonstration program in other areas as well. Examples include (1) the Tent Rocks area in northern New Mexico (BLM and an Indian reservation); (2) recreation sites along the South Fork of the Snake River in Idaho (the Forest Service, BLM, state agencies, and county governments); (3) recreation sites in the Paria Canyon-Coyote Buttes area in Arizona (BLM, the Arizona Strip Interpretive Association, and Northern Arizona University); (4) the Pack Creek bear-viewing area in southeast Alaska (the Forest Service and the Alaska Department of Fish and Game); and (5) the proposed Oregon Coastal Access Pass (the Park Service, BLM, the Forest Service, and Oregon state parks). Through the partnership at the Tent Rocks area in north-central New Mexico between Albuquerque and Santa Fe, visitors get access to a unique geological area that BLM administers via a 3-mile access road across Pueblo de Cochiti, an Indian reservation. BLM’s site, known as the Tent Rocks Area of Critical Environmental Concern and National Recreation Trail, features large, tent-shaped rocks that hug steep canyon walls. The area is surrounded by two Indian reservations. The only access road for vehicles to Tent Rocks crosses land owned by Pueblo de Cochiti. In 1998, a cooperative partnership agreement gave visitors access to Tent Rocks, while specifying prohibited activities to preserve the tranquility of the pueblo community. The agreement also specifies resource preservation measures to protect the Tent Rocks area. Annually, Tent Rocks is visited by about 100,000 people. Under the terms of the agreement, BLM is responsible for collecting fees and shares $1 of the $5 vehicle fee with Pueblo de Cochiti. The pueblo provides interpretive talks, trash pickup, and road maintenance. 
As of July 1998, this interorganizational demonstration project was working satisfactorily, according to BLM officials. The Oregon Coastal Access Pass has been proposed for visitors to enter several adjacent federal and state recreation sites, each of which now charges a separate entrance fee. These include the Park Service’s Fort Clatsop National Memorial, BLM’s Yaquina Head Outstanding Natural Area, the Forest Service’s Oregon Dunes National Recreation Area, and sites managed by the state of Oregon’s Department of Parks and Recreation. All of these sites currently charge separate fees, ranging from several dollars per person to over $10. For a number of years, visitors to these sites have commented on the lack of government coordination over the numerous entrance and user fees these facilities charge. During the last 2 years, representatives from the federal and state agencies involved have held meetings to develop an Oregon Coastal Access Pass, which would be valid for entrance and use at all participating federal and state sites along the Oregon coast. According to a Forest Service official, two issues need to be resolved before implementing the pass: (1) the estimation of the revenues from each of the facilities to determine the amount of anticipated revenues to be shared and (2) the development of and agreement on an equitable formula to share fee revenues among the federal and state sites. The pass could be implemented in 1999, according to a Forest Service official participating on this project. While some progress is being made to increase coordination among agencies, our work shows that there are still opportunities for improvement that would benefit both the federal government and visitors. Further coordination among the agencies participating in the fee demonstration program could reduce confusion for the visitors as well as increase the revenues available for maintenance, infrastructure repairs, or visitor services. 
Even at the few participating sites we visited, we identified three areas where better interagency coordination would provide improved services and other benefits to the visiting public, while at the same time generating increased fee revenues. For example, in New Mexico, BLM administers a 263,000-acre parcel called El Malpais National Conservation Area. Within the BLM boundaries of this site is the El Malpais National Monument, created in 1987 and managed by the Park Service (see fig. 4.3). Adjoining several sides of the agencies’ lands are two Indian reservations. Interstate, state, and county roads cross and border the BLM and Park Service lands. Presently, neither parcel has an entrance or user fee. In 1997, as part of the fee demonstration program, BLM proposed a $3 daily fee for the site. According to a BLM official, the proposed demonstration site was to be managed as a joint fee demonstration project with the Park Service, with the fee applicable to both areas. According to BLM, a demonstration project would not only increase revenues to pay for work needed at the site but also increase the presence of agencies’ officials at the site, which would help deter vandalism and other resource-related crimes. Because it is difficult for visitors to distinguish between the two sites, a unified and coordinated approach to fee collection made good management sense and would avoid confusion among fee-paying visitors to the sites. The surrounding communities endorsed BLM’s proposal, but Park Service officials at the site did not. They told us that they believed that there would be low compliance with any fee requirements because of the multiple access roads to the site, that potentially delicate situations would arise with Native Americans using the land for ceremonial purposes, and that theft and vandalism would increase because of the proposed project’s unstaffed fee-collection tubes. 
A local BLM official, however, said that the site could generate significant revenues (over $100,000 annually), that fee exemption cards could be developed for Native Americans using the land for traditional purposes, and that past experience in the southwest has not shown extensive damage to unstaffed fee-collection devices like those proposed for use at this site. As a result of the differing views between BLM and Park Service officials at this site, no coordinated approach has been developed. However, our work at the site indicated that experimenting with a new fee at the location would be entirely consistent with the objective of the demonstration program. As of August 1998, neither agency had documented its analysis of the situation, and BLM was considering deleting the site as a potential fee demonstration project. In the state of Washington, we found another opportunity for interagency coordination. Olympic National Park and the Olympic National Forest share a common border for hundreds of miles and are both frequently used by backcountry hikers. For backcountry use, hikers are subject to two separate fees at Olympic National Park—a $5 backcountry hiking permit and a $2 per night fee for overnight stays in the park. In contrast, Olympic National Forest does not have an entry fee, a backcountry permit fee, or any overnight fee in areas that are not specifically designated as campsites. However, the forest does have a trailhead parking fee of $3 per day per vehicle or $25 annually per vehicle. As a result, backcountry users who hike trails that cross back and forth over each agency’s lands are faced with multiple and confusing fees. Figure 4.4 shows an example of a backcountry hike from Lena Creek (Olympic National Forest land) to Upper Lena Lake (Olympic National Park land)—14 miles round-trip—where backcountry users would face such multiple fees. Table 4.1 lists the fees involved for the hike. 
We discussed this situation with on-site managers from both agencies. They agreed that they should better coordinate their respective fees to reduce the confusion and multiplicity of fees for backcountry users. However, so far, neither agency has taken the initiative to make this happen. At the time of our review, no one at the departmental or agency headquarters level routinely got involved in these kinds of decisions. Instead, the decisions were left to the discretion of the site managers. A third example of where greater coordination and cooperation would lead to operational efficiencies and less visitor confusion is in Virginia and Maryland at the Chincoteague National Wildlife Refuge, administered by the Fish and Wildlife Service, and the Assateague Island National Seashore, administered by the Park Service. Although the sites adjoin each other on the same island (see fig. 4.5), they are not a joint project in the fee demonstration program—each site is a separate fee demonstration project. During our review, we found many similarities between these two sites that offer the possibility of testing a single entrance fee for both sites. Both sites charge a daily entrance fee ($5 per vehicle), cooperate on law enforcement matters, and run a joint permit program for off-road vehicles. In 1997, according to Park Service officials, the two agencies together issued 5,000 annual off-road vehicle permits at $60 each. By agreement between the two agencies, the permit revenues are shared, with one-third going to the refuge and two-thirds going to the Park Service. The Park Service already provides staff to operate and maintain a ranger station and bathing facilities on refuge land. Despite these overlapping programs and similarities, the units still maintain separate, nonreciprocal entrance fee programs. 
This situation is continuing even though officials at the refuge told us that visitors are sometimes confused by separate agencies managing adjoining lands without any reciprocity of entrance fees. For example, during a 7-day period in July 1998, refuge officials counted 71 of 4,431 visitor vehicles whose occupants wished to use their Assateague entrance passes to gain admittance to Chincoteague. Similarly, during the 7-day period of July 31 through August 6, 1998, Assateague officials counted 40 of 4,056 visitor vehicles whose occupants presented Chincoteague entrance passes to gain admittance to Assateague. In both instances, visitors needed explanations about the entrance fee policies and practices of the two sites. Refuge and seashore officials have discussed this issue, but the matter remains unresolved. While there are many notable examples of innovation and experimentation in setting and collecting fees at demonstration sites, further opportunities remain in this area. Innovation and experimentation were among the objectives under the demonstration program’s authority and could result in fees that are more equitable, efficient, and convenient and could also work toward helping the agencies accomplish their resource management goals. Congressional interest in encouraging more interagency coordination and cooperation was focused not only on seeking additional revenues but also on developing ways to lessen the burden of multiple, similar fees being paid by visitors to adjoining or nearby recreation sites offering similar activities. Successful experiences with interagency coordination and cooperation have produced noteworthy benefits to the agencies and to visitors. Additional coordination and cooperation efforts should be tested at other locations to get a better understanding of the full impact and potential of the program. 
We recommend that the Secretary of the Interior require that the heads of the Park Service and the Fish and Wildlife Service take advantage of the remaining time under the fee demonstration authority to look for opportunities to experiment with peak-period pricing and with fees that vary with the length of stay or extent of use at individual sites. We also recommend that the Secretaries of the Interior and Agriculture direct the heads of the participating agencies to improve their services to visitors by better coordinating their fee-collection activities under the recreational fee demonstration program. To address this issue, each agency should perform a review of each of its demonstration sites to identify other federal recreation areas that are nearby. Once identified, each situation should be reviewed to determine whether a coordinated approach, such as a reciprocal fee arrangement, would better serve the visiting public. Two agencies within the Department of the Interior commented on this chapter. The Park Service raised concerns about experimenting with differential or peak-period pricing. The agency said that experimenting with fees could result in complex fee schedules, increased processing times at entrance stations, confused visitors, and more difficult enforcement. In addition, the agency took exception to the draft report’s comparisons to the differential pricing practices used at amusement parks, golf courses, and ski areas, noting that the agency’s purpose is different from the purposes of such operations. However, we disagree that these concerns are reasons not to implement different pricing policies at some parks. We recognize that the Park Service’s current fee schedule has been successful but question whether the agency has responded sufficiently to one of the intents of the recreational fee demonstration program: that agencies experiment with innovative pricing structures. 
If done well, experimenting with differential pricing at Park Service demonstration sites need not result in complex fee schedules, delays at entrance stations, confused visitors, or significant increases in the cost of collection. It is in this context that we provided the examples of golf courses, amusement parks, and ski areas—recreation activities that routinely use differential pricing and to which the public is already accustomed. In many cases, these fee systems are equitable, easily understood by the public, and do not cause delay or confusion. Furthermore, the Park Service's comments on this point are not consistent with the January 1998 report to the Congress on the status of the fee demonstration program, which was jointly prepared by the Park Service, the Forest Service, BLM, and the Fish and Wildlife Service and transmitted by the Undersecretary of the Department of Agriculture and an Assistant Secretary of the Department of the Interior. In that report, the four agencies noted that among the lessons learned up to that point was that differential pricing could be used to maximize resource protection or to minimize infrastructure investment. The report states that “higher fees on weekends, summer months, or other traditionally-high recreation use, might reduce the peak loads on resources and facilities . . . . Reductions in peak loads can directly reduce the cost to taxpayers associated with operating the recreation sites, providing services to these sites, and any attendant damage to the resource.” The Park Service also raised concerns about the draft report's discussion of the potential for a joint fee demonstration site between the Park Service and BLM at El Malpais National Monument and El Malpais National Conservation Area. (BLM did not comment on this point.) The Park Service said that (1) a cost-benefit analysis showed it was not worth collecting fees and (2) collecting fees would affect the use of the area by five neighboring Native American tribes. 
It was clear from our work that there was disagreement among Park Service and BLM officials over whether El Malpais was a suitable site for inclusion in the demonstration program and that this disagreement continues. The boundaries of the agencies’ land make it unlikely that the project could succeed without a joint effort. We disagree with the Park Service’s concerns raised on this point and question their accuracy since the analysis showing that fee revenues would be low, referred to in the Park Service’s comments, has not been completed. We obtained a draft of that analysis which, according to Park Service staff at El Malpais National Monument, was the most recent analysis available as of October 15, 1998. The draft analysis contains no information on anticipated costs or revenues from charging fees at this site. Furthermore, we disagree with the Park Service’s assertion that fees would affect Native American use of the site. According to the Park Service regional fee demonstration coordinator, at park units where similar situations existed, local managers were able to resolve cultural issues with the Native Americans using the sites. The Fish and Wildlife Service commented that there may be opportunities for the agency to experiment with off-peak pricing, but such opportunities would be limited to those sites where there is sufficient visitation to create crowding and provide an incentive for off-peak use. We agree. In fact, crowded parking at one refuge was a big enough concern that managers were considering measures to better handle visitation during peak periods. The Fish and Wildlife Service also commented on the need for greater coordination among the agencies. The agency noted that cooperative fees have been tried in many instances where they are appropriate and that some of these have resulted in moderate success. 
We encourage the agency to continue to look for opportunities to coordinate since it would generally increase the level of service provided to the visiting public. The Department of Agriculture’s Forest Service agreed with the recommendation for the agencies to look for opportunities to coordinate their fee programs. Data from recreational fee demonstration sites participating in 1997 suggest that the new or increased fees have had no overall adverse effect on visitation, although visitation did decline at a number of sites. Such data, however, are based on only 1 year’s experience, so the full impact of fees on visitation will not be known until completion of the program. Early research on visitors’ opinions of the new fees has shown that visitors generally support the need for, and the amount of, new fees. However, these conclusions are based on limited analysis in that only two of the four agencies—the Park Service and the Forest Service—have completed visitor surveys at a small number of sites participating in the demonstration program. Accordingly, the survey results may not represent visitors’ opinions at all participating sites or represent views of nonvisitors. Each participating agency planned to conduct additional visitor surveys in 1998 and 1999 to more fully assess the impact of fees on visitation. However, some interest groups and recreation fee experts have identified some research gaps, such as potential visitors who do not come to recreation sites or who do go to sites but drive off because of the new or increased fees and fail to participate in the survey. A number of interest groups we contacted were generally supportive of the program. However, some had concerns about the program and how it was being implemented. 
Although data for more years will be needed to fully assess the effect of increased recreational fees on visitation, 1997 data from the 206 sites participating in the demonstration program preliminarily suggest that the increased fees have had no major adverse effect on visitation. Except for BLM, each agency reported that, overall, visitation increased from 1996 to 1997 at its sites, even though some individual sites experienced declines in visitation, especially when new fees were charged. Data from 1997 are the first available to assess the impact of the fee demonstration program on visitation, since the four agencies spent 1996 designing the program and selecting the sites. Overall, of the 206 demonstration sites operated by the four agencies, visitation during 1997 increased by 4.6 percent from 1996. Visitation increased at three agencies’ sites, with the Park Service sites showing the largest increase, while BLM reported an overall decline in visitation of 10.4 percent (see table 5.1). Among the 206 sites, visitation increased at 120 sites, decreased at 84 sites, and was unchanged at 2 sites (see table 5.2). Because these data represent only the change in 1 year and many factors besides fees can affect visitation levels, several agency officials told us that the 1996 to 1997 visitation changes provide only a preliminary indicator of the impact of increasing or imposing fees at the demonstration sites. In addition, visitation can be affected by a variety of factors, such as weather patterns, the overall state of the economy, gasoline prices, currency exchange rates, and historical celebrations. Accordingly, changes in fee levels or instituting new fees, by themselves, do not fully account for changes in visitation levels. 
Nonetheless, on the basis of the data currently available, a report by the four participating agencies to the Congress states, “Visitation to the fee demonstration sites does not appear to have been significantly affected, either positively or negatively, by the new fees.” While overall visitation increased 4.6 percent among all agencies in 1997, visitation levels varied among agencies and among sites within the same agency. During the period, visitation at nondemonstration sites among the agencies increased 3.6 percent. Changes in visitation to sites participating in the recreational fee demonstration program are summarized below for each of the four participating agencies. Annual visitation at the Park Service's 96 sites participating in the recreational fee demonstration program in 1997 increased 5.6 percent over 1996—from 141.1 million to 149.0 million visitors. Visitation increased at 50 sites, decreased at 45 sites, and remained unchanged at 1 site. Some sites that raised existing fees in 1997 experienced significantly higher rates of visitation after the increased or new fees went into effect. For example, at one site we visited, Timpanogos Cave National Monument in Utah, a new entrance fee plus increased fees for cave tours allowed the park to hire additional cave interpreters, which lengthened the season for cave tours by 3 months. As a result, visitation increased 16 percent, and about 16,000 more visitors were able to tour the site in 1997 than in 1996. In contrast, at another site we visited, Frederick Douglass National Historic Site in Washington, D.C., visitation declined 24 percent, from 45,000 in 1996 to 34,000 in 1997. In 1997, the site instituted a new $3 per person entrance fee, whereas in 1996, entrance was free. According to a Park Service official, the new fees probably played a role in the decline in visitation. 
In commenting on a draft of this report, the Park Service stated that the closure of a nearby museum and several major road projects may have also influenced visitation at the site. Because visitation at the Park Service's sites represents about three-quarters of total 1997 visitation at all of the demonstration program sites, we asked the Park Service for data on historical visitation levels at both its demonstration and nondemonstration sites. These data show that visitation at nondemonstration sites rose faster from 1996 to 1997, 7.0 percent compared with 5.6 percent for demonstration sites. The higher fees might be one factor accounting for the smaller percentage increase in visitation at the demonstration sites, but other factors might be more important. We found that the larger percentage increase at the nondemonstration sites in 1997 was consistent with changes in visitation over the last few years (1993-97) and, therefore, might have occurred even if fees had not been increased at the demonstration sites. Since 1994, there has been a steady trend in which visitation at nondemonstration sites has grown relative to visitation at demonstration sites. In fact, there was a much more substantial difference between the two groups in the changes in visitation from 1995 to 1996, before fees were increased at any of the sites. During that period, visitation increased by 0.9 percent at the nondemonstration sites but fell by 4.1 percent at the demonstration sites. At the Forest Service's 39 fee demonstration sites operating in 1997, visitation totaled 35.2 million—an increase of 724,000 recreation visits, or 2 percent, over 1996. Visitation increased at 25 sites and decreased at 14 sites. At some sites where new fees were charged or where fees were paid only for entrance to a visitor center, visitation generally declined, according to a Forest Service official. 
For example, after Mono Lake in the Inyo National Forest in northern California instituted a $2 fee per person for day use or entry to a section of the visitor center (an exhibit room and movie theater), visitation declined 10 percent from the prior year, according to a Forest Service official. At other Forest Service sites, visitation increased despite new fees. At one site we visited, the Mount St. Helens National Volcanic Monument in Washington State, 1997 visitation rose to 3.1 million—a 15-percent increase over 1996. This increase occurred even though the site implemented two new fees: a user fee of $8 for a 3-day pass to the visitor centers and other developed sites and a climbing fee of $15. In 1997, the site also opened an additional visitor center and deployed snow plows earlier than in prior years, further increasing visitation. Visitation at the Fish and Wildlife Service’s 61 sites participating in the program increased from 9.4 million in 1996 to 9.5 million in 1997, or slightly over 1 percent. In 1997, visitation decreased at 17 sites, increased at 43 sites, and was unchanged at 1 unit compared with visitation in 1996. At the 30 refuges charging fees for the first time as well as at the 31 refuges that increased existing fees, there was little or no change in the level of visitation or participation in activities. The three sites we visited reflected these national visitation patterns. At Nisqually National Wildlife Refuge in Washington State, the entrance fee was increased from $2 to $3, and visitation increased by 41 percent, from about 45,000 in 1996 to 63,000 in 1997. At Chincoteague National Wildlife Refuge in Virginia, the entrance fee increased from $4 to $5, and visitation increased 7 percent, from 1.3 million visitors in 1996 to 1.4 million visitors in 1997. 
At another site we visited, Bosque del Apache National Wildlife Refuge in New Mexico, the entrance fee increased from $2 to $3, and visitation declined 10 percent from 132,000 in 1996 to 119,000 in 1997. Overall visitation at BLM’s 10 demonstration sites dropped by 10.4 percent from 1996 to 1997. This drop reflected decreases at eight sites and increases at two other sites. According to BLM, factors affecting visitation in 1997 included (1) inclement weather and flooding that limited access to recreation sites such as Paria Canyon-Coyote Buttes in Arizona and Utah, where visitation declined 16 percent between 1996 and 1997; (2) construction projects that interfered with visitors’ use of several sites such as the Kipp Recreation Area in Montana; and (3) new fees, such as at Anasazi Heritage Center in Colorado, where visitation declined 22 percent, in part because of resistance to new fees. At one BLM site we visited, Red Rock Canyon National Conservation Area west of Las Vegas, Nevada, a new entrance fee of $5 was implemented in 1997, but visitation increased from about 1 million in 1996 to about 1.14 million in 1997. At another BLM site we visited, Yaquina Head Outstanding Natural Area on the central Oregon coast, site visitation declined 10 percent, from about 540,000 in 1996 to about 486,000 in 1997. Visits to the interpretive center declined 27 percent when fees were introduced, and at the lighthouse, visits dropped from 531 walk-in visitors a day to 65—an 88-percent decrease. Subsequent changes in the lighthouse fee raised the average daily attendance to 425 in July 1998. Surveys completed by the Park Service and the Forest Service show that visitors generally support the need for, and the amount of, new or increased entrance or user fees. However, these surveys are limited to only a few sites and do not cover visitors to the sites of the Fish and Wildlife Service and BLM. 
Both the Park Service and the Forest Service are planning additional surveys for 1998 and 1999 that will probe more deeply into visitation issues. In addition, some representatives of interest groups and recreation fee researchers identified several areas needing further research to fully assess the impact of the fee demonstration program. Agency officials agreed that additional research is needed in a number of areas. All four agencies have research planned to address several of the research topics. Research on the actual impact of the fee demonstration program by both the Park Service and the Forest Service shows that most visitors support the need for fees and believe that the fees are set at about the right level. A Park Service survey in 11 national park units taken during summer 1997 showed that 83 percent of the respondents were either satisfied with the fees they paid or thought the fees were too low; 17 percent thought the fees were too high. According to 96 percent of respondents, the fees would not affect their current visit or future plans to visit the park. Visitors supported the new fees in large part because they wanted all or most of the fee revenues to remain in the park where they were collected or with the Park Service so that the funds could be used to improve visitor services or protect resources, rather than be returned to the U.S. Treasury. Three surveys at fee demonstration sites administered by the Forest Service found general support for the program. A survey of over 400 visitors at the Mount St. Helens National Volcanic Monument in Washington State in 1997 found that 68 percent of those surveyed said their visitor experience was worth the fee they paid. Although over 50 percent of those surveyed were not aware of the new fees prior to coming to Mount St. Helens, 69 percent said their visitation plans did not change as a result of the new fees. 
Overall, 92 percent of those surveyed were either very satisfied or satisfied with their experience at the site. A June 1997 to May 1998 survey of 1,392 backpackers and hikers at Desolation Wilderness, Eldorado National Forest, in California found that a majority accepted the concept of wilderness use fees and considered the amount charged to be about right. However, day-use fees were less acceptable than overnight camping fees—about 33 percent of those who were surveyed disliked day-use fees compared with 20 percent who disliked camping fees. Starting in 1997, visitors to all 39 of the Forest Service’s fee demonstration sites were given the opportunity to respond to a customer “comment card” when they purchased a permit. As of March 1998, 528 cards had been received from visitors to 45 individual national forests participating in the fee demonstration program. About 57 percent of the respondents either agreed or strongly agreed with the statement that the opportunities and services they experienced during their visits were at least equal to the fee they paid. Because only two of the four agencies participating in the recreational fee demonstration program have completed visitor surveys, additional research is planned for 1998 and 1999 to more fully assess visitors’ views on new or increased recreational fees. In 1998, both BLM and the Fish and Wildlife Service began their initial evaluations of the impact of the fee demonstration program on visitors. These surveys will be included as part of the final evaluation report of the demonstration program, which is intended to be a comprehensive evaluation on the impact of fees on visitation by each of the four agencies. Additional research by all four agencies, when completed, should more fully illustrate public acceptance and reaction to new or increased fees. 
Surveys on the impact of fees on visitation and other issues planned for 1998 and 1999 include the following: The Park Service plans additional research on visitation in 1998 that will (1) survey the managers at all 100 recreational fee demonstration sites concerning visitation and obtain their perceptions of the equity, the efficiency, and the quality of visitors' experiences resulting from the fee demonstration program; and (2) conduct detailed case study evaluations at 13 fee demonstration sites, including a detailed visitor survey at each site. The case study sites will explore such questions as whether fees affected the mix of sites' visitors and how fees and changes in fee levels have affected the visitors' experience at the sites, among other questions. The surveys are being administered for the Park Service by the University of Idaho with assistance from the University of Montana and Pennsylvania State University. Survey results are expected by April 1999. The Forest Service plans to survey visitors at several national forests in 1998 to assess their views on new or increased fees under the demonstration program. Several visitor surveys will be completed at the national forests in Southern California as part of the fee demonstration project. The primary objectives of the surveys are to assess visitors' responses to new recreational fees and the effects of the new fees on visitation patterns and to complete a follow-up survey of users who visited the demonstration sites before the new fees were in place. The surveys are being done by the Pacific Southwest Research Station in Riverside and by California State University, San Bernardino, and should be completed in 1999. In addition, a follow-up to a 1997 visitor survey is planned to assess the opinions of campers on new fee charges at the Boundary Waters Canoe Area Wilderness in Minnesota. 
The survey is being done by the College of Natural Resources, University of Minnesota, and should be completed by November 1998. A 1998 survey of a total of 2,600 visitors is planned at nine of the Fish and Wildlife Service’s wildlife refuges, according to an agency official. The survey objectives are to obtain visitors’ opinions on the fairness and equity of the fee being charged, alternative fee-collection methods, and the use of revenues from fee collections, among other topics. The nine sites selected will include those charging both entrance and user fees as well as sites with new fees and those that changed existing fees. The study is being completed for the Service by a contractor to the Department of the Interior’s National Biological Survey with assistance from Colorado State University. Survey results will be available by the end of 1998. During September 1998, BLM plans to survey a total of 800 people who visited eight different demonstration sites to assess their views on the program. The specific objectives of the survey are to determine the appropriateness of the fees charged, how revenues from fees should be used, and how fees will affect future visitation, among other topics. The sites selected will represent a cross-section of both dispersed and developed recreation sites. The survey is being done with assistance from the University of Virginia Survey Research Center and should be completed by December 1998. While much of the completed research on visitors’ opinions about recreational fees shows general support for the demonstration program, recreation fee experts and some interest groups we contacted raised concerns about some effects that completed or planned visitation research, generally, does not address. The concerns fell into three areas: the impact of new or increased fees on those not visiting recreation sites, backcountry users, and low-income users. 
First, almost all completed and planned visitation surveys concerning the recreational fee demonstration program have assessed or will assess visitors who have paid a user or entrance fee at the recreation site. This practice is consistent with the agencies’ evaluation approach of assessing visitors’ reactions to paying new or increased fees. However, potential visitors who do not come to the recreation site or who come to the site but leave because of new or increased fees have not been included in the surveys. For example, at Glacier National Park in 1997-98 a fee was collected at the park’s western entrance on certain winter weekends. According to reports in the media, during this period, passengers in a number of cars refused to pay the fee and canceled their visit to the park. It is because of situations like this that several recreation fee researchers we contacted said further research is needed to determine whether recreational fees are precluding potential recreation users from visiting the sites in the demonstration program. Representatives from two of the four agencies participating in the fee demonstration program agreed this was an important research concern that completed or planned visitation research will not address. The Forest Service plans a national recreation survey in 1998-99 that, among other topics, will address the general public’s reaction to new or increased fees. In commenting on this report, the Park Service said it plans to conduct a survey of the general public to determine the impact of new or increased fees on visitation. This survey should be completed by December 1999. Fish and Wildlife Service officials said they had not planned such research because (1) this type of research was expensive to conduct and (2) it was not yet a high enough priority among competing research needs within the agency. 
Officials from BLM said that if fee increases appeared to be a factor in causing a decline in 1998 visitation figures, the agency would be likely to conduct research on this topic. Second, limited visitation surveys have been completed or are planned on the impact of new or increased fees on backcountry recreation. Only one completed survey and one survey planned for 1998 focus exclusively on backcountry recreation: the Forest Service's 1997-98 survey of Desolation Wilderness in northern California and its summer 1998 survey of visitors to the Boundary Waters Canoe Area Wilderness in Minnesota. Furthermore, only 1 of the 11 national park units included in the Park Service's 1997 visitation survey had instituted fees for backcountry use. One interest group contacted, Outward Bound USA, suggested that visitors' acceptance of new or increased fees was greater in developed recreation areas and that backcountry users were less enthusiastic about the program because agencies charge multiple fees for backcountry activities in the same area and many backcountry fees are new fees rather than increases in existing fees. Several recreation fee researchers contacted said that since many backcountry use fees were new, additional research was needed to determine if fees were affecting backcountry visitation patterns. While representatives from the Park Service and the Forest Service agreed this was an important research concern, Fish and Wildlife Service officials did not, since their recreation sites do not involve nearly as much dispersed backcountry recreation as the Park Service's and the Forest Service's. A BLM official acknowledged this was an important issue but said the agency's visitation survey would be administered at only a small number of sites with dispersed backcountry recreation. 
In commenting on a draft of this report, the Park Service said that it plans to conduct a survey of backcountry/winter recreation users, to be completed by December 1999, to determine the impact of new or increased fees on visitation. A Forest Service official said the agency’s two surveys would shed some light on the impact of fees on backcountry use but believed more research was needed to fully assess the impact of fees on the Forest Service’s many sites with backcountry use. The Forest Service official favored more emphasis on such research but said that funding it would have to be balanced with other research priorities. Third, concerns have been expressed about the effect of new or increased fees on low-income visitors to federal recreation sites participating in the fee demonstration program. While BLM and the Fish and Wildlife Service plan surveys to address this issue, neither the Park Service nor the Forest Service has completed or plans research sufficient to address this topic at a number of sites participating in the demonstration program. Two groups we contacted, the National Parks and Conservation Association and Outward Bound USA, emphasized that although recreational fees are becoming more common, at some point fee increases will affect the demographics of recreation users, particularly those with limited means. In commenting on a draft of this report, the Forest Service stated that it is considering requiring fee demonstration sites to (1) collect data on the impact of fees on low-income and ethnic populations and (2) offer proposals to mitigate any impacts. Prior recreation fee research has also raised concerns about the impact of fees on the visitation patterns of low- and moderate-income users. 
For example, a study of the impact of fees on recreational day use at Army Corps of Engineers recreation facilities suggests that a larger proportion of low-income users would stop visiting a site if fees were charged and, since low-income users are more sensitive to the magnitude of fees charged, that higher fees would displace a higher proportion of low-income users. In addition, a 1997 survey of 1,260 visitors to 11 national park units found that 17 percent thought the fees charged were too high and that the lower the respondent’s income, the greater the tendency to think the fees charged were too high. Several recreation fee researchers contacted said that while some completed research has shown support for new fees among users of all income levels, further research is needed to understand how new fees and fee levels affect visitation of low-income users at federal recreation sites. A number of interest groups we contacted, while generally supportive of the program, had some concerns about how the program was being implemented and were withholding a strong endorsement until more tangible results of the program were available. Some groups were concerned that recreational fee increases represented an unfair burden on commercial recreation providers and that public acceptance of fee increases may diminish if fee increases go much higher. Also, some users were concerned that fees were too high and amounted to double taxation. All nine of the interest groups we contacted supported the recreational fee demonstration program, but some had concerns about how the program was being implemented. For example, the American Recreation Coalition supports the program because fees have generated funds to preserve aging agency facilities, provide new interpretative services, or experiment with new or innovative fee-collection initiatives, such as a regional trail pass program. 
However, the coalition was concerned that, in some cases, new or increased fees were being added to permit fees already paid by commercial recreation providers to the agencies, which represented an unfair and costly burden to their operations. The National Parks and Conservation Association told us it supports the fee demonstration program because fees are retained at the sites where they are collected and are used to reduce maintenance backlogs. At the same time, however, the association was concerned that at some point the public’s acceptance of fee increases may erode. For example, according to the association, excessive use fees for private boaters along the Colorado River and a doubling or tripling of entrance fees at certain popular national parks such as Yosemite are actions that are likely to stretch the limit of public acceptance of new recreational fees. Another group from Washington State, the Mountaineers, told us that while the public has initially accepted the program, the group was withholding a strong endorsement of it until it could see the results from the agencies’ spending on increased maintenance, enhanced visitor services, or interpretative programs and the results of visitor surveys. Some visitors to federal recreation sites under the demonstration program have voiced opposition to new or increased fees. For example, a Forest Service analysis of 528 comment cards found that about 26 percent disagreed or strongly disagreed with the statement that the value of the recreation opportunities and services the visitors had experienced was at least equal to the fee they paid. 
In addition, 43 percent of the 420 people providing written comments on the cards made negative statements about the recreational fees, such as “the price is too high,” “this is double taxation,” or “I oppose the fees.” Similarly, an analysis of 484 pieces of correspondence received by the Park Service between July 1996 and September 1997 showed that 67 percent of respondents expressed some opposition to new fees. According to Park Service and Forest Service officials, the surveys were not based on statistical sampling and, therefore, are not representative of all users. Comment cards and correspondence are more likely to be completed by those having a strong opinion on fees, especially those who are opposed to fees.
Pursuant to a congressional request, GAO reviewed the implementation of the recreational fee demonstration program by the National Park Service (NPS), the Forest Service, the Bureau of Land Management (BLM), and the Fish and Wildlife Service (FWS), focusing on the: (1) implementation of the program and the fee revenues generated; (2) program's expenditures; (3) extent to which the agencies have used innovative or coordinated approaches to fee collection; and (4) program's effects, if any, on visitation. GAO noted that: (1) among the four agencies, the pace and the approach used to implement the recreational fee demonstration program have differed; (2) this difference reflects the extent of the agencies' experiences in charging fees prior to the demonstration; (3) nonetheless, each agency has been successful in increasing fee revenues; (4) the four agencies estimated that their combined recreational fee revenues have nearly doubled from about $93 million in fiscal year (FY) 1996 to about $179 million in FY 1998; (5) of the four agencies, NPS is generating the most fee revenues; (6) for FY 1998, NPS estimates that its fee revenues will be about 85 percent of the total estimated revenues collected by the four agencies at demonstration sites; (7) about 76 percent of the funds available under the program had not been spent through March 1998; (8) thus far, most expenditures have been for repairs and maintenance and the cost of fee collection; (9) the agencies expect to make significant expenditures in the latter part of FY 1998 and in FY 1999; (10) in the longer term, because some sites may have a much greater potential than others for raising revenues, the requirement that at least 80 percent of the fees be retained at the location where they were collected may lead to substantial inequities between sites; (11) some sites may reach the point where they have more revenues than they need for their projects, while other sites still do not have enough; (12) 
opportunities remain for the agencies to be more innovative and cooperative in designing, setting, and collecting fees; (13) among the agencies, several notable examples of innovation exist at demonstration sites of the Forest Service and the BLM; (14) these innovations have resulted in either more equitable pricing for the visitors, or greater convenience for visitors in how they pay fees; (15) while NPS has been innovative in making fees more convenient for visitors to pay, it has not experimented with different pricing structures to make fees more equitable; (16) coordination of fees among agencies has been erratic; (17) overall, preliminary data suggest the increased or new fees have had no major adverse effect on visitation to the fee demonstration sites; (18) with data from just 1 year, however, it is difficult to accurately assess the fees' impact on visitation; (19) the agencies' surveys indicate that visitors generally support the purpose of the program and the level of the fees implemented; and (20) each agency is planning additional visitor surveys and research in 1998 and 1999.
Federal agencies’ contracting with private businesses is, in most cases, subject to goals for various types of small businesses, including SDVOSBs. The Small Business Act sets a government-wide goal for small business participation of not less than 23 percent of the total value of all prime contract awards—contracts that are awarded directly by agencies—for each fiscal year. The Small Business Act also sets annual prime contracting goals for participation by four other types of small businesses: small disadvantaged businesses (5 percent); women-owned (WOSB, 5 percent); service-disabled veteran-owned (3 percent); and businesses located in historically underutilized business zones (HUBZone, 3 percent). Although there is no government-wide prime contracting goal for participation by all VOSBs, VA had voluntarily set an internal goal for many years before the enactment of the 2006 Act. The Veterans Benefits Act of 2003 authorized agencies to set contracts aside and make sole-source awards of up to $3 million ($5 million for manufacturing) for SDVOSBs (but not other VOSBs). However, an agency can make a sole-source award to an SDVOSB only if the contracting officer expects just one SDVOSB to submit a reasonable offer. By contrast, VA’s authorities under the 2006 Act apply both to SDVOSBs and other VOSBs. The 2006 Act provides VA authorities to make noncompetitive (sole-source) awards and to restrict competition for (set-aside) awards to SDVOSBs and VOSBs. VA is required to set aside contracts for SDVOSBs or other VOSBs (unless a sole-source award is used) if the contracting officer expects two or more such firms to submit offers and the award can be made at a fair and reasonable price that offers the best value to the United States. VA may make sole-source awards of up to $5 million. 
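The award-method conditions described above (set aside when two or more eligible firms are expected to offer at a fair and reasonable price; sole source up to $5 million when only one firm is expected) can be sketched as a simple decision rule. This is an illustrative simplification, not VA's actual contracting procedure; the function name and inputs are hypothetical.

```python
def va_award_method(expected_offerors: int,
                    fair_reasonable_price: bool,
                    award_value: float) -> str:
    """Illustrative sketch of the 2006 Act's award-method conditions.

    expected_offerors: number of SDVOSBs/VOSBs the contracting officer
    expects to submit offers (a judgment call in practice).
    """
    # Two or more expected offerors at a fair and reasonable price:
    # competition must be restricted to SDVOSBs/VOSBs (set-aside).
    if expected_offerors >= 2 and fair_reasonable_price:
        return "set-aside"
    # Exactly one expected offeror and the award is within the
    # $5 million sole-source ceiling.
    if expected_offerors == 1 and award_value <= 5_000_000:
        return "sole-source"
    # Otherwise the veteran preference authorities do not apply here.
    return "open competition"
```

For instance, a $4 million requirement with a single expected SDVOSB offeror would fall under the sole-source authority, while the same requirement with three expected offerors would have to be set aside.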
VA’s Office of Small Disadvantaged Business Utilization (OSDBU) in conjunction with the Office of Acquisition and Logistics is responsible for development of policies and procedures to implement and execute the contracting goals and preferences under the 2006 Act. Additionally, OSDBU serves as VA’s advocate for small business concerns; provides outreach and liaison support to businesses (large and small) and other members of the private sector for acquisition-related issues; and is responsible for monitoring VA’s implementation of socioeconomic procurement programs, such as encouraging contracting with WOSBs and HUBZone businesses. The Center for Veterans Enterprise (CVE) within OSDBU seeks to help veterans interested in forming or expanding their own small businesses. For FY07, VA established a contracting goal for VOSBs at 7 percent––that is, VA’s goal was to award 7 percent of its total procurement dollars to VOSBs. In FY07, VA exceeded this goal and awarded 10.4 percent of its contract dollars to VOSBs (see fig. 1). VA subsequently increased its VOSB contracting goals to 10 percent for FY08 and FY09, and exceeded those goals as well––awarding 14.7 percent of its contracting dollars to VOSBs in FY08 and 19.7 percent in FY09. For FY07, VA established a contracting goal for SDVOSBs equivalent to the government-wide goal of 3 percent and exceeded that goal by awarding 7.1 percent of its contract dollars to SDVOSBs (see fig. 2). VA subsequently increased this goal to 7 percent for FY08 and FY09, and exceeded the goal in those years as well. Specifically, VA awarded 11.8 and 16.7 percent of its contract dollars to SDVOSBs in FY08 and FY09, respectively. In nominal dollar terms, VA’s contracting awards to VOSBs increased from $1.2 billion in FY07 to $2.8 billion in FY09, while at the same time, SDVOSB contracting increased from $832 million to $2.4 billion. 
The increase of awards to VOSBs and SDVOSBs largely was associated with the agency’s greater use of the goals and preference authorities established by the 2006 Act. For example, veteran set-aside and sole-source awards represented 39 percent of VA’s total VOSB contracting dollars in FY07. But in FY09, VA’s use of these preference authorities increased to 59 percent of all VOSB contracting dollars. In nominal dollar terms, VA’s use of these authorities increased by $1.2 billion over the past 3 years. According to SBA’s Goaling Program, a small business can qualify for one or more small business categories and an agency may take credit for a contract awarded under multiple goaling categories. For example, if a small business is owned and controlled by a service-disabled, woman veteran, the agency may take credit for awarding a contract to this business under the SDVOSB, VOSB, and WOSB categories. All awards made to SDVOSBs also count towards VOSB goal achievement. In FY09, of the $2.8 billion awarded to VOSBs, the majority (63 percent) applied to both the VOSB and SDVOSB categories and no other (see fig. 3). Furthermore, of the $1.7 billion awarded through the use of veteran preference authorities (VOSB and SDVOSB set-aside and sole-source) in FY09, an even greater majority (77 percent) applied both to the VOSB and SDVOSB categories and no other (see fig. 3). In the Veterans’ Benefits Improvement Act of 2008 (the 2008 Act) Congress enhanced the 2006 Act’s provisions by requiring that any agreements VA enters with other government entities on or after January 1, 2009, to acquire goods or services on VA’s behalf, must require the agencies to comply, to the maximum extent feasible, with VA’s contracting goals and preferences for SDVOSBs and VOSBs. Since January 1, 2009, VA has entered into three interagency agreements (see table 1). 
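The multiple-credit rule described above (an award to a business owned and controlled by a service-disabled, woman veteran counts toward the SDVOSB, VOSB, and WOSB goals at once, and every SDVOSB award also counts toward the VOSB goal) can be illustrated with a minimal sketch. The function and the owner-attribute flags are hypothetical names for illustration, not SBA's actual data model.

```python
def goal_categories(owner: dict) -> set:
    """Return the goaling categories a single award may be credited to,
    per the multiple-credit rule (simplified sketch)."""
    cats = set()
    if owner.get("veteran"):
        cats.add("VOSB")
        if owner.get("service_disabled"):
            # Every SDVOSB award also counts toward the VOSB goal,
            # so SDVOSB is only ever added alongside VOSB.
            cats.add("SDVOSB")
    if owner.get("woman"):
        cats.add("WOSB")
    return cats

# The report's example: a service-disabled, woman veteran owner.
example = goal_categories(
    {"veteran": True, "service_disabled": True, "woman": True})
# example is {"SDVOSB", "VOSB", "WOSB"} -- one award, three goal credits.
```

This is why the FY09 figures overlap: the same dollars can appear in both the VOSB and SDVOSB totals.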
According to agency officials, VA entered into agreements with additional federal agencies, such as the Army Corps of Engineers, before January 1, 2009, and therefore the provisions of the 2008 Act do not apply. VA issued guidance to all contracting officers about managing interagency acquisitions in March 2009. However, the agreement with DOI did not include the required language addressing VA’s contracting goals and preferences until it was amended on March 19, 2010, after we informed the agency the agreement did not comply with the 2008 Act. According to VA officials, the agency’s acquisition and contracting attorneys are responsible for reviewing interagency agreements for compliance with these requirements. VA uses Office of Management and Budget templates to develop its interagency agreements. However, VA did not ensure that all interagency agreements include the 2008 Act’s required language or monitor the extent to which agencies comply with the requirements. For example, agency officials could not tell us whether contracts awarded under these agreements met the SDVOSB and VOSB preferences. Without a plan or oversight activity such as monitoring, VA cannot be assured that agencies have made maximum feasible efforts to contract with SDVOSBs or VOSBs. In May 2008—approximately a year and a half after the 2006 Act was enacted and a year after the provisions discussed here became effective— VA began verifying businesses and published interim final rules in the Federal Register, which included eligibility requirements and examination procedures, but did not finalize the rules until February 2010 (see fig. 4). According to VA officials, CVE initially modeled its verification program on SBA’s HUBZone program; however, CVE reconsidered verification program procedures after we reported on fraud and weaknesses in the HUBZone program. 
More recently, in December 2009, the agency finalized changes to its acquisition regulations (known as VAAR) that included an order of priority (preferences) for contracting officers to follow when awarding contracts and trained contracting officers on the preferences and the VetBiz.gov database from January through March 2010. Leadership and staff vacancies plus a limited overall number of positions also have contributed to the slow pace of implementation. For approximately 1 year, leadership in VA’s OSDBU was lacking because the former Executive Director retired and the position remained vacant from January 2009 until January 2010. Furthermore, one of two leadership positions directly below the Executive Director has been vacant since October 2008 and an Acting Director temporarily filled the other position. The agency also faced delays in obtaining contracting support. More than a year after the agency began verifying businesses, a contractor began conducting site visits (which further investigate control and ownership of businesses as part of the verification process). As of April 2010, CVE had 6.5 full-time equivalent position vacancies, and VA officials told us existing staff have increased duties and responsibilities that also contributed to slowed implementation. The slow implementation of the program appears to have contributed to VA’s inability to meet the requirement in the 2006 Act that it use its veteran preference authorities to contract only with verified businesses. Currently, contracting officers can use the veteran preference authorities with both self-certified and verified businesses listed in VetBiz.gov. However, in its December 2009 rule VA committed to awarding contracts using these authorities only to verified businesses as of January 1, 2012. According to our analysis of FPDS-NG data, in FY09 the majority of contract awards (75 percent) made using veteran preferences went to unverified businesses. 
In March 2010, the recently appointed Executive Director of OSDBU acknowledged in a Congressional hearing before this committee how large an undertaking the verification program has been and some challenges associated with starting a new program. As of April 8, 2010, VA had verified about 2,900 businesses––approximately 14 percent of VOSBs and SDVOSBs in the VetBiz.gov database. VA has been processing an additional 4,701 applications but the number of incoming applications continues to grow (see fig. 5). As of March 2010, CVE estimates it had received more than 10,000 applications for verification since May 2008. As discussed previously, VA must maintain a database of verified businesses and in doing so must verify the veteran or service-disability status, control, and ownership of each business. The rules that VA developed pursuant to this requirement require VOSBs and SDVOSBs to register in VetBiz.gov to be eligible to receive contracts awarded using veteran preference authorities. An applicant’s business must qualify as “small” under federal size standards and meet five eligibility requirements for verification: (1) be owned and controlled by a service-disabled veteran or veteran; (2) demonstrate good character (any small business that has been debarred or suspended is ineligible); (3) make no false statements (any small business that knowingly submits false information is ineligible); (4) have no federal financial obligations (any small business that has failed to pay significant financial obligations to the federal government is ineligible); and (5) have not been found ineligible due to an SBA protest decision. VA has a two-step process to make the eligibility determinations for verification. CVE staff first review veteran status (and, if applicable, service-disability status) and publicly available, primarily self-reported information about control and ownership for all applicants. 
Business owners submit applications (VA Form 0877), which ask for basic information about ownership, through VetBiz.gov. When applicants submit Form 0877, they also must be able to provide upon request other items for review, such as financial statements; tax returns; articles of incorporation or organization; lease and loan agreements; payroll records; and bank account signature cards. Typically, these items are reviewed at the business during the second step of the review, known as the site visit. Site visits further investigate control and ownership for select high-risk businesses. In September 2008, VA adopted risk guidelines to determine which businesses would merit the visits. Staff must conduct a risk assessment for each business and assign a risk level ranging from 1 to 4––with 1 being a high-risk business and 4 a low-risk one. The risk guidelines include criteria such as previous government contract dollars awarded, business license status, annual revenue, and percentage of veteran ownership. For example, if a business has previous VA contracts totaling more than $5 million, staff must assign it a risk level of 1 (high). According to VA, it intends to examine all businesses assigned a high or elevated risk level with a site visit or by other means, such as extensive document reviews and phone interviews with the business’ key personnel. VA plans to refine its verification processes to address recommendations from an outside contractor’s review of the program. VA hired the contractor to assess the verification program’s processes, benchmark VA’s program to other similar programs, and provide recommendations for improving it. VA received the contractor’s report and recommendations in November 2009. VA officials told us that they plan to implement the contractor’s recommendations to require business owners to submit additional documentation as part of their initial application and to upgrade their data systems. 
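A rough sketch of the risk-level assignment described above, under stated assumptions: only the more-than-$5 million trigger for level 1 comes from the report; the other cutoffs below are hypothetical placeholders, since VA's actual guidelines are not detailed here.

```python
def assign_risk_level(prior_va_contract_dollars: float,
                      license_current: bool,
                      veteran_ownership_pct: float) -> int:
    """Illustrative risk scoring, 1 (high) through 4 (low).

    Per the report, prior VA contracts over $5 million always mean
    level 1; the remaining thresholds are invented for illustration.
    """
    if prior_va_contract_dollars > 5_000_000:
        return 1  # from the report: always high risk
    if not license_current or veteran_ownership_pct < 51:
        return 2  # hypothetical elevated-risk triggers
    if prior_va_contract_dollars > 0:
        return 3  # hypothetical: some contracting history
    return 4      # hypothetical: low risk
```

Businesses scored 1 or 2 would then be queued for a site visit or equivalent document review, consistent with VA's stated intent to examine all high- and elevated-risk businesses.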
Based on our review of a random sample of the files for 112 businesses that VA had verified by the end of FY09, an estimated 48 percent of the files lacked required information or documentation that CVE staff followed key verification procedures. Specifically, 20 percent were missing some type of required information, such as evidence that veteran status had been checked or a quality review took place; 39 percent lacked information about how staff justified determinations that control and ownership requirements were met; and 14 percent either were missing evidence that a risk assessment had taken place or the risk assessment that occurred did not follow guidelines. Data system limitations also appear to be contributing factors to weaknesses we identified in our file review. For example, data entry into CVE’s internal database largely is done manually, which can result in missing information or errors. Furthermore, CVE’s internal database does not contain controls to ensure that only complete applications that have received a quality review move forward. Internal control standards for federal agencies require that agencies effectively use information technology in a useful, reliable, and continuous way. According to agency officials, two efforts are underway to enhance CVE’s data systems. For example, CVE plans systems enhancements that would automatically check and store information obtained about veteran status and from some public databases. Additionally, CVE plans to adopt case management software—as recommended in the contractors’ report—to help manage its verification program files. The new system will allow CVE to better track new and renewal verification applications and manage the corresponding case files. VA started verifying businesses in May 2008, but did not start conducting site visits until October 2009. As of April 8, 2010, VA has used contractors to conduct 71 site visits but an additional 654 high- and elevated-risk businesses awaited visits. 
Because of this delay, it currently has a large backlog of businesses awaiting site visits and some higher-risk businesses have been verified months before their site visits occurred or were scheduled to occur. According to VA officials, the agency plans to use contractors to conduct an additional 200 site visits between May and October 2010. However, the current backlog likely will grow over future months. According to site visits reports, approximately 40 percent of the visits resulted in evidence that control or ownership requirements had not been met, but as of April 2010, CVE had not cancelled any business’ verification status. According to these reports, evidence of misrepresentation dates to October 2009, but VA had not taken actions against these businesses as of April 2010. According to VA’s Office of Inspector General, it has received one referral (on April 5, 2010) as a result of the verification program. Staff have made no requests for debarment as a result of verification program determinations as of April 2010. Under the 2006 Act, businesses determined by VA to have misrepresented their status as VOSBs or SDVOSBs are subject to debarment for a reasonable period of time, as determined by VA for up to 5 years. Additionally, under the verification program rules, whenever CVE determines that a business owner submitted false information, the matter will be referred to the Office of Inspector General for review and CVE will request that debarment proceedings be initiated. However, beyond the directive to staff to make a referral and request debarment proceeding, VA does not have detailed guidance in place (either in the verification program procedures or the site visit protocol) that would instruct staff under which circumstances to make a referral or a debarment request. 
To summarize our observations concerning VA’s verification efforts, the agency has been slow to implement a comprehensive program to verify the veteran status, ownership, and control of small businesses and maintain a database of such businesses. The weaknesses in VA’s verification process reduce assurances that verified firms are, in fact, veteran owned and controlled. Such verification is a vital control to ensure that only eligible veteran-owned businesses benefit from the preferential contracting authorities established under the 2006 Act. These remarks are based on our ongoing work, which is exploring these issues in more detail. As required by the 2006 Act, we will issue a report on VA’s contracting with VOSBs and SDVOSBs later this year. We anticipate the forthcoming report will include recommendations to the Department of Veterans Affairs to facilitate progress in meeting and complying with the 2006 Act’s requirements. Madam Chairwoman and Members of the Subcommittee, I appreciate this opportunity to discuss these important issues and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or ShearW@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Harry Medina, Assistant Director; Paola Bobadilla; Beth Ann Faraguna; Julia Kennon; John Ledford; Jonathan Meyer; Amanda Miller; Marc Molino; Mark Ramage; Barbara Roesmann; Kathryn Supinski; Paul Thompson; and William Woods. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Veterans Benefits, Health Care, and Information Technology Act of 2006 (the 2006 Act) requires the Department of Veterans Affairs (VA) to give priority to veteran-owned and service-disabled veteran-owned small businesses (VOSB and SDVOSB) when awarding contracts to small businesses. This testimony discusses preliminary views on (1) the extent to which VA met its prime contracting goals for SDVOSBs and VOSBs in fiscal years 2007-2009, and (2) VA's progress in implementing procedures to verify the ownership, control, and veteran status of firms in its mandated database. GAO obtained and analyzed data on VA's contracting activities, and reviewed a sample of verified businesses to assess VA's verification program. VA exceeded its contracting goals with SDVOSBs and VOSBs for the past 3 years, but faces challenges in monitoring agreements with other agencies that conduct contract activity on VA's behalf. The increase of awards to SDVOSBs and VOSBs was associated with the agency's use of the unique veteran preferences authorities established by the 2006 Act. However, GAO's review of interagency agreements found that VA lacked an effective process to ensure that interagency agreements include required language that the other agencies comply to the maximum extent feasible with VA's contracting goals and preferences for SDVOSBs and VOSBs. VA has made limited progress in implementing its verification program. While the 2006 Act requires VA to use veteran preferences authorities only to award contracts to verified businesses, VA's regulation does not require that this take place until January 1, 2012. To date, VA has verified about 2,900 businesses––approximately 14 percent of businesses in its mandated database of SDVOSBs and VOSBs. Among the weaknesses GAO identified in VA's verification program were files missing required information and explanations of how staff determined that control and ownership requirements had been met. 
VA's procedures call for site visits to investigate the ownership and control of higher-risk businesses, but the agency has a large and growing backlog of businesses awaiting site visits. Although site visit reports indicate a high rate of misrepresentation, VA has not developed guidance for referring cases of misrepresentation for enforcement action. Such businesses are subject to debarment under the 2006 Act.
The objective of the Executive Branch Management Scorecard is to provide a tool that can be used to track progress in achieving the President’s Management Agenda. Using broad standards, the scorecards in the president’s budget grade agencies’ performance regarding five governmentwide initiatives, which are: strategic management of human capital, competitive sourcing, improved financial performance, expanded electronic government, and budget and performance integration. Central to effectively addressing the federal government’s management problems is recognition that the five governmentwide initiatives cannot be addressed in an isolated or piecemeal fashion separate from the other major management challenges and high-risk areas facing federal agencies. As stated in the President’s Management Agenda, they are mutually reinforcing. More generally, the initiatives must be addressed in an integrated way to ensure that they drive a broader transformation of the cultures of federal agencies. At its essence, this cultural transformation must seek to have federal agencies become less hierarchical, process oriented, stovepiped, and inwardly focused; and more flat, partnerial, results oriented, integrated, and externally focused. The focus that the administration’s scorecard approach brings to improving management and performance is certainly a step in the right direction. As we have seen by your example, Chairman Horn, in calling attention to agencies’ financial management, the year 2000 computer concerns, and computer security issues by grading agencies on their progress, this approach can create an incentive to improve management and performance. Similarly, we have found that our high-risk list has provided added emphasis on government programs and operations that warrant urgent attention to ensure our government functions in the most economical, efficient, and effective manner possible. 
The President’s Management Agenda focuses on important challenges for the federal government. The items on the agenda are consistent in key aspects with the federal government’s statutory framework of financial management, information technology, and results-oriented management reforms enacted during the 1990s. In crafting that framework, Congress sought to provide a basis for improving the federal government’s effectiveness, financial condition, and operating performance. Moreover, I believe it is worth noting the clear linkages between the five governmentwide initiatives and the nine program-specific initiatives identified by the administration, and the high-risk areas and major management challenges that were covered in GAO’s January 2001 Performance and Accountability Series and High-Risk Update. For example, we have designated strategic human capital management as a governmentwide high-risk area that presents a pervasive challenge throughout the federal government, and this is also one of the president’s governmentwide initiatives. Our work has found strategic human capital management challenges in four key areas, which are: strategic human capital planning and organizational alignment; leadership continuity and succession planning; acquiring and developing staffs whose size, skills, and deployment meet agency needs; and creating results-oriented organizational cultures. In the area of improved financial performance, we have continued to point out that the federal government is a long way from successfully implementing the statutory reforms Congress enacted during the 1990s. Widespread financial management system weaknesses, poor recordkeeping and documentation, weak internal controls, and the lack of cost information have prevented the government from having the information needed to effectively and efficiently manage operations or accurately report a large portion of its assets, liabilities, and costs. 
Agencies need to take steps to continuously improve internal control and underlying financial and management information systems to ensure that managers and other decision makers have reliable, timely, and useful financial information to ensure accountability; measure, control, and manage costs; manage for results; and make timely and fully informed decisions about allocating limited resources. Another of the administration’s initiatives is to integrate performance review with budget decisions, with a long-term goal of using information about program results in making decisions about which programs should continue and which to terminate or reform. The Office of Management and Budget (OMB) has changed the presentation of the president’s budget to provide added focus on whether programs are effective, and a management focus is present throughout the budget document’s discussions of the agencies. In our observations of agencies’ efforts to implement the Government Performance and Results Act (GPRA) and the Chief Financial Officers Act, more agencies were able to show a direct link between expected performance, resources requested, and resources consumed. These linkages help promote agencywide performance management efforts and increase the need for reliable budget and financial data. However, our work has also shown that additional effort is needed to clearly describe the relationship between performance expectations, requested funding, and consumed resources. The uneven extent and pace of development should be seen in large measure as a reflection of the mission complexity and variety of operating environments across federal agencies. Describing the planned and actual use of resources in terms of measurable accurate results remains an essential action that will continue to require time and effort on the part of all agencies, working with OMB and Congress. The administration has identified areas where it believes the opportunity to improve performance is greater. 
However, as stated in the president’s budget, “The marks that really matter will be those that record improvement, or lack of it, from these starting points.” The administration has pledged to update the scores twice a year and to issue a mid-year report during the summer. Updates and future reports will be important in ensuring that progress continues as agencies attempt to improve their performance. It is key that rigorous criteria be applied to ensure that, in fact, progress has been made. According to the administration, the President’s Management Agenda is a starting point for management reform. As such, we have drawn upon our wide-ranging work on federal management issues to identify elements that are particularly important in implementing and sustaining management improvement initiatives. These elements include: (1) demonstrate leadership and accountability for change, (2) integrate management improvement initiatives into programmatic decision making, (3) use thoughtful and rigorous planning to guide decisions, (4) involve and empower employees to build commitment and accountability, (5) align organizations to streamline operations and clarify accountability, and (6) maintain strong and continuing congressional involvement (which will be covered in the next section). These six elements have applicability for individual federal agencies, and the central management agencies, each of which plays a fundamental part in implementing reforms and improving federal government performance. One of the most important elements of successful management improvement initiatives is the demonstrated, sustained commitment of top leaders to change. Top leadership involvement and clear lines of accountability for making management improvements are critical to ensuring that the difficult changes that need to be made are effectively implemented throughout the organization. 
The unwavering commitment of top leadership in the agencies will be especially important to overcoming organizations’ natural resistance to change, marshalling the resources needed in many cases to improve management, and building and maintaining the organizationwide commitment to new ways of doing business. Sustaining top leadership commitment to improvement is particularly challenging in the federal government because of the frequent turnover of senior agency political officials. As a result, sustaining improvement initiatives requires commitment by senior career executives, as well as political leaders. Career executives can help provide the long-term focus needed to institutionalize reforms that political executives’ often more limited tenure does not permit. The Office of Personnel Management’s (OPM) amended regulations that place increased emphasis on holding senior executives accountable for organizational goals provide an opportunity to reinforce leadership and accountability for management improvement. Specifically, the amended regulations require agencies to hold executives accountable for results; appraise executive performance on those results balanced against other dimensions, including customer satisfaction and employee perspectives; and use those results as the basis for performance awards and other personnel decisions. Agencies were to implement their policies for the senior executives for the appraisal cycles that began in 2001. Although the respective departments and agencies must have the primary responsibility and accountability to address their own issues, leaders of the central management agencies have the responsibility to keep everyone focused on the big picture by identifying the key issues across the government and ensuring that related efforts are complementary rather than duplicative. 
The top leadership of OMB, OPM, the General Services Administration (GSA), and the Department of the Treasury need to continue to be involved in developing and directing reform efforts, and helping to provide resources and expertise to further improve performance. To be successful, management improvement initiatives must be part of agencies’ programs and day-to-day actions. Traditionally, the danger to any management reform is that it can become a hollow, paper-driven exercise where management improvement initiatives are not integrated into the day- to-day activities of the organization. The administration has recognized this danger and encouraged agency leaders to take responsibility for improving the day-to-day management of the government. Integrating management issues with budgeting is absolutely critical for progress in government performance and management. Such integration is obviously important to ensuring that management initiatives obtain the resource commitments needed to be successful. More generally, however, the budget process is the only annual process we have in government where programs and activities come up for regular review and reexamination. Integration also strengthens budget analysis by providing new tools to help analysts review the relative merits of competing agency claims and programs within the federal budget. The management issues in the president’s agenda have both governmentwide and agency-specific components. Those aspects of the problem that are governmentwide and cut across agency boundaries demand crosscutting solutions as well. Interagency councils such as the President’s Management Council, Chief Financial Officers’ Council, the Chief Information Officers’ Council, the Human Resources Management Council, the President’s Council on Integrity and Efficiency, and the Joint Financial Management Improvement Program can play central roles in addressing governmentwide management challenges. 
As I have noted in a previous testimony, interagency councils provide a means to help foster communication across the executive branch, build commitment to reform efforts, tap talents that exist within agencies, focus attention on management issues, and initiate improvements. The magnitude of the challenges that many agencies face calls for thoughtful and rigorous planning to guide decisions about how to improve performance. We have found, for example, that annual performance plans that include precise and measurable goals for resolving mission-critical management problems are important to ensuring that agencies have the institutional capacity to achieve results-oriented programmatic goals. On the basis of our long experience examining agency-specific and governmentwide improvement efforts, we believe the improvement plans that agencies are to develop in conjunction with tracking their progress in achieving the goals of the President’s Management Agenda should establish (1) clear goals and objectives for the improvement initiative, (2) the concrete management improvement steps that will be taken, (3) key milestones that will be used to track the implementation status, and (4) the cost and performance data that will be used to gauge overall progress in addressing the identified weaknesses. While agencies will have to undertake the bulk of the effort in addressing their respective management weaknesses, the improvements needed have important implications for the central management agencies as well. OMB, OPM, GSA, and Treasury will need to remain actively engaged throughout the planning and implementation of the president’s initiatives to ensure that agencies bring to bear the resources and capabilities to make real progress. These four agencies, therefore, need to ensure that they have the capabilities in place to support and guide agencies’ improvement efforts. 
These capabilities will be critical in helping agencies identify the root causes of their management challenges and pinpointing specific improvement actions, providing agencies with tools and additional support—including targeted investments where needed—to address shortcomings, and assisting agencies in monitoring and reporting progress. For example, OMB can assist agencies in developing and refining useful performance measures and ensuring that performance information is used in deliberations and key decisions regarding agencies’ programs. OPM can provide tools for agencies to use in better gauging the extent to which federal employees understand the link between their daily activities and agencies’ results. In this regard, OPM has announced a major internal restructuring effort driven in large part by the need to provide better support and resources to agencies. Agencies can improve their performance by the way that they treat and manage their people, building commitment and accountability through involving and empowering employees. All members of an organization must understand the rationale for making organizational and cultural changes because everyone has a stake in helping to shape and implement initiatives as part of agencies’ efforts to meet current and future challenges. Allowing employees to bring their expertise and judgment to bear in meeting their responsibilities can help agencies capitalize on their employees’ talents, leading to more effective and efficient operations and improved customer service. However, our most recent survey of federal managers found that at only one agency did more than half of the managers report that to a great or very great extent they had the decision-making authority they needed to help the agency accomplish its strategic goals. Effective changes can only be made and sustained through the cooperation of leaders, union representatives, and employees throughout the organization. 
We believe that agencies can improve their performance, enhance employees’ morale and job satisfaction, and provide a working environment where employees have a better understanding of the goals and objectives of their organizations and how they are contributing to the results that American citizens want. In that regard, our work has identified six practices that agencies can consider as they seek to improve their operations and respond to the challenges they are facing. These are: demonstrating top leadership commitment; engaging employee unions; training employees to enhance their knowledge, skills, and abilities; using employee teams to help accomplish agency missions; involving employees in planning and sharing performance information; and delegating authorities to front-line employees. Successful management improvement efforts often entail organizational realignment to better achieve results and clarify accountability. Agencies will need to consider realigning their organizations in response to the initiatives in the President’s Management Agenda. For example, as competitive sourcing, e-government, financial management, or other initiatives lead to changes in how an agency does business, agencies may need to change how they are organized to achieve results. In recent years, Congress has shown an interest in restructuring organizations to improve service delivery and program results and to address long-standing management weaknesses by providing authority and sharpening accountability for management. Most recently, Congress chartered the Transportation Security Administration in November 2001 and required: measurable goals to be outlined in a performance plan and their progress to be reported annually; an undersecretary who is responsible for aviation security, subject to a performance agreement, and entitled to a bonus based on performance; and a performance management system that includes goals for managers and employees. 
In implementing the President’s Management Agenda, it will be important to ensure that information is available so that Congress, other interested parties, and the public can assess progress and help to identify solutions to enhance improvement efforts. As stated in the president’s budget, “The Administration cannot improve the federal government’s performance and accountability on its own. It is a shared responsibility that must involve the Congress.” Therefore, transparency will be crucial in developing an effective approach to making needed changes. It will only be through the continued attention of Congress, the administration, and federal agencies that progress can be sustained and, more importantly, accelerated. Support from Congress has proven to be critical in sustaining interest in management initiatives over time. Congress has, in effect, served as the institutional champion for many of these initiatives, providing a consistent focus for oversight and reinforcement of important policies. Making pertinent and reliable information available will be necessary for Congress to be able to adequately assess agencies’ progress and to ensure accountability for results. Key information to start with includes the agencies’ improvement plans that are being developed to address the agencies’ scores. Congress can use these improvement plans to engage agencies in discussions about progress that is being made, additional steps that need to be taken, and what additional actions Congress can take to help with improvement efforts. More generally, effective congressional oversight can help improve federal performance by examining the program structures agencies use to deliver products and services to ensure that the best, most cost-effective mix of strategies are in place to meet agency and national goals. 
As part of this oversight, Congress can identify agencies and programs that address similar missions and consider the associated policy and management implications of these crosscutting programs. This will present challenges to the traditional committee structures and processes. A continuing issue for Congress to consider is how to best focus on common results when mission areas and programs cut across committee jurisdictions. In summary, Mr. Chairman, serious and disciplined efforts are needed to improve the management and performance of federal agencies. Highlighting attention through the President’s Management Agenda and the Executive Branch Management Scorecards are steps in the right direction. At the same time, it is well recognized that consistent progress in implementing these initiatives will be the key to achieving improved performance across the federal government. In implementing the President’s Management Agenda, the elements highlighted in this testimony should be considered and adapted as appropriate; experience has shown that when these elements are in place, lasting management reforms that ultimately lead to improvements are more likely to be implemented. Finally, Congress must play a crucial role in helping develop and oversee management improvement efforts throughout the executive branch. Congress has proven to be critical in sustaining management reforms by monitoring implementation and providing the continuing attention necessary for management reform initiatives to be carried through to their successful completion. Mr. Chairman, we are pleased that you and your colleagues in Congress have often turned to GAO for assistance on federal management issues and we look forward to continuing to assist Congress and agencies in this regard. We have issued a large body of reports, guides, and tools on issues directly relevant to the President’s Management Agenda. 
We will be issuing additional such products in the future that should prove also helpful to Congress and agencies in improving federal management and performance. This concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For further contacts regarding this testimony, please contact J. Christopher Mihm at (202) 512-6806. Individuals making key contributions to this testimony included Jacqueline Nowicki, Susan Ragland, and Aonghas St Hilaire.
Federal agencies need to work with other governmental organizations, nongovernmental organizations, and the private sector, both domestically and internationally, to achieve results. By focusing on accountable, results-oriented management, the federal government can use this network to deliver economical, efficient, and effective programs and services to the American people. The administration's plan to use the Executive Branch Management Scorecard to highlight agencies' progress in achieving management and performance improvements outlined in the President's Management Agenda is a promising first step. However, many of the challenges facing the federal government are long-standing and complex and will require sustained attention. Using broad standards, the scorecards in the president's budget grade agencies on the following five governmentwide initiatives: (1) strategic management of human capital, (2) competitive sourcing, (3) improved financial performance, (4) expanded electronic government, and (5) budget and performance integration. These initiatives cannot be addressed in an isolated or piecemeal fashion separate from other management challenges and high-risk areas.
Breast cancer is the second leading cause of cancer deaths among American women. The American Cancer Society estimates that there will be 184,300 new cases of breast cancer diagnosed in U.S. women in 1996 and that 44,300 women will die from the disease. One in eight women will develop breast cancer during her lifetime. Breast cancer is generally classified into four main stages based on the size of the tumor and the spread of the cancer at the time of diagnosis. Mortality rates are strongly related to the stage of the disease at the time of detection. Stage I patients have an excellent chance of long-term survival, while stage IV (metastatic) breast cancer is usually fatal. A wide variety of treatments exists for breast cancer patients, including surgery, chemotherapy, radiation therapy, and hormone therapy. The particular treatments used depend on the stage and characteristics of the cancer and other aspects of the patient and her health. ABMT is a therapy that allows a patient to receive much higher dosages of chemotherapy than are ordinarily possible. Because high-dose chemotherapy is toxic to the bone marrow (which supports the immune system), methods have been developed for restoring the bone marrow by reinfusing stem cells (the bone marrow cells that mature into blood cells) taken from the patient before chemotherapy. Stem cells are removed from the patient’s blood or bone marrow, then concentrated, frozen, and sometimes purged in an attempt to remove any cancerous cells. The patient then undergoes chemotherapy at dosages 2 to 10 times the standard dosage. To restore the ability to produce normal blood cells and fight infections, the patient’s concentrated stem cells are thawed and reinfused after chemotherapy. When the transplant is done from the blood rather than the bone marrow, the procedure is often referred to as peripheral blood stem cell transplantation. ABMT is an expensive treatment, although the cost per patient has been falling in recent years. 
Aside from financial costs, the treatment is usually very unpleasant for the patient and may pose significant risks. The high doses of chemotherapy are very toxic, leading to treatment-related morbidity and mortality rates that, while declining, are still higher than for conventional chemotherapy. There may also be problems in restoring the patient’s ability to produce normal blood cells and thereby fight infections. ABMT is being evaluated in the treatment of a number of types of cancer other than breast cancer and is considered standard therapy for treating certain types of leukemia and lymphoma under certain conditions. Many clinical trials have been conducted to assess ABMT for breast cancer, but most of these studies have been phase I and phase II trials, which most experts agree have been of limited use in firmly establishing the effectiveness of ABMT compared with conventional therapy. NCI is currently sponsoring three randomized clinical trials that seek to determine whether ABMT is better than current standard therapy in comparable breast cancer patients. These trials seek to ultimately involve a total of about 2,000 women at more than 70 institutions around the country. Although most experts believe the clinical research has not yet established whether ABMT is superior to conventional therapy, or for which patients it may be, insurance coverage of the treatment has become relatively common and use of the treatment is diffusing rapidly. According to the Autologous Blood and Marrow Transplant Registry-North America, the number of breast cancer patients receiving ABMT has increased rapidly, growing from an estimated 522 in 1989 to an estimated 4,000 in 1994. About one-third of all ABMTs reported to the Registry in 1992 were for breast cancer, making it the most common cancer being treated with this therapy. 
The Registry reports that although the treatment is most commonly used in women with advanced disease, there is a growing trend to use it more frequently on patients with earlier stages of breast cancer. There has also been a dramatic increase in the number of patients undergoing this treatment in Europe. Many insurers, including some of the nation’s largest, now routinely cover ABMT for breast cancer both inside and outside of clinical trials, although some still deny coverage for the treatment because they consider it experimental. One study looked at 533 breast cancer patients in clinical trials who requested coverage for ABMT from 1989 through 1992. It found that 77 percent of them received approval for coverage of the treatment after their initial request. We reviewed the current medical literature and spoke with several leading oncologists and technology assessment experts regarding ABMT for breast cancer. While there were differences of opinion, the consensus of most of the experts and the literature was that current data indicate ABMT may be beneficial for some breast cancer patients but that there is not yet enough information to establish that it is more effective than standard chemotherapy. The medical literature includes several studies showing longer periods before relapse and improved survival for some poor prognosis, high-risk breast cancer patients receiving ABMT rather than conventional therapy. However, it is unclear whether the superior outcomes of patients receiving ABMT in these studies were the result of the treatment itself or the result of bias caused by the selection of patients chosen to receive the treatment. Most of the medical literature and nearly all of the experts we spoke with said that the current data are not yet sufficient to make definitive conclusions about the effectiveness of ABMT and about which groups of breast cancer patients would be most likely to benefit. 
Although there are wide differences of opinion about the appropriate use of ABMT, nearly all sides of the debate agree that the results of randomized clinical trials are needed to provide definitive data on the treatment’s effectiveness. Several studies have reviewed and analyzed the extensive medical literature related to ABMT for breast cancer. In 1995, ECRI, an independent, nonprofit technology assessment organization, published an analysis stating that the weight of the evidence in the medical literature did not indicate greater overall survival for metastatic breast cancer patients receiving ABMT compared with conventional therapy. The Blue Cross and Blue Shield Association’s Technology Evaluation Center, after reviewing the available data in 1994, concluded that the evidence was not yet sufficient to draw conclusions about the effectiveness of ABMT compared with conventional therapy for breast cancer patients. Similarly, NCI, at a congressional hearing, said that while ABMT has shown promise in some clinical studies, the results of the NCI randomized clinical trials were needed before conclusions could be reached about whether and for whom the treatment is more beneficial than conventional therapy. We interviewed the medical director, or another official who makes coverage decisions, at 12 U.S. health insurance companies. We discussed each insurer’s coverage policy regarding ABMT for breast cancer and the factors that influenced it. The insurers’ coverage policies regarding ABMT for breast cancer reflected some incongruity. In general, the insurers said they did not normally cover experimental or unproven treatments and that they believed ABMT for breast cancer fell into this category. Yet, with some restrictions, all 12 insurers nonetheless covered ABMT for breast cancer, with only one requiring that patients enroll in clinical trials. 
In explaining this, most cited as the primary influence the fact that although until recently the treatment had not been tested in randomized trials, it has become widely used and that the existing research suggests it may be beneficial to certain patients. But insurers told us that a variety of nonclinical factors also strongly influenced their coverage policy, such as the threat of litigation, public relations concerns, and government mandates. All health insurers must decide whether and when they will cover a new or experimental treatment. To do this, they engage in some form of technology assessment, a process that seeks to assess the safety and effectiveness of a medical technology based on the best available information. For the most part, health insurers do not gather primary data but, rather, rely heavily on peer-reviewed medical literature and on the assessment of experts inside and outside of their companies. Some large health insurers have elaborate technology assessment units. One example is the Technology Evaluation Center, a collaboration of the Blue Cross and Blue Shield Association and Kaiser Permanente. The Center’s staff includes physicians, research scientists, and other experts who review and synthesize existing scientific evidence to assess the safety and efficacy of specific medical technologies. The Center has published assessments for over 200 technologies since 1985, including several for ABMT for breast cancer. Other large insurers, including Aetna and Prudential, also have special programs that do formal assessments of specific technologies. Smaller insurers also do technology assessment, but on a smaller scale; for instance, they may have a small office that does literature searches or reviews the findings of larger technology assessment organizations. Using their assessments, insurers then decide whether they will cover a particular treatment and under what conditions. 
Whatever the overall policy, costly and complicated procedures may require special preapproval before they are covered. Among the insurers we spoke with, preapproval for ABMT was generally required by the office of the medical director or some other office that reviews claims for medical appropriateness. They said they wanted to ensure that a case meets any coverage restrictions and that ABMT is medically appropriate for that particular patient. For certain difficult cases, some insurers also use an outside panel of experts, serving as a mediation service, to determine whether ABMT is the appropriate treatment. Seven of the 12 insurers we spoke with explicitly characterized ABMT for breast cancer as experimental. Four others did not specifically term the treatment “experimental” but nonetheless said that ABMT for breast cancer should not yet be considered standard therapy since its effectiveness over conventional therapy had not yet been proven. One insurer did not express an opinion on the issue. Yet while the insurers said they typically do not cover experimental therapies, many said that in this case there was enough preliminary evidence that ABMT may be effective to justify covering it. Seven of the 12 insurers cited the clinical evidence as one of the primary reasons that they decided to cover ABMT. These insurers said that the existing data indicate that ABMT may hold promise for certain breast cancer patients and that flexibility was needed in paying for experimental treatments for seriously or terminally ill patients. Two insurers also said that they cover ABMT for breast cancer since, although its efficacy has not been established, it has become generally accepted medical practice in that it has become a common treatment for breast cancer throughout the United States and is covered by many other insurers. They said they would receive pressure from their beneficiaries if they were to deny coverage for a treatment that other insurers cover. 
While the medical evidence was an important factor in the coverage policy of a majority of the insurers, other factors were also clearly at work, with the threat of litigation being among the most important. When an insurer refuses to pay for a treatment requested by the patient or the patient’s physician, coverage may ultimately be decided in the court system. Over the past several years, many breast cancer patients have sued their insurers after being denied coverage for ABMT. Nine of the 12 insurers that we spoke with specifically mentioned litigation, or the threat of litigation, as a factor in their ABMT coverage policy. For five of these insurers, legal concerns were characterized as among the most important reasons for choosing to cover ABMT for breast cancer. Before changing their policies to cover ABMT for breast cancer, six of the insurers we spoke with had been sued after denying coverage for the treatment. Overall, the insurers had not been very successful in these cases and had often either settled before judgment was rendered or had a judgment rendered against them. The insurers who had been sued on the issue said the financial costs of legal fees, settlements, and damages were high. For the most part, the insurers said they found different courts to be widely inconsistent in ruling whether ABMT is experimental and should be covered, a point also made in reviews of case law on the issue. In addition to the financial costs, insurers said the lawsuits were harmful to their public relations. Publicity of their coverage policy led to the impression that they were denying a gravely ill patient a beneficial therapy for economic reasons. The insurers we spoke with no longer face many lawsuits on the issue since they now generally cover ABMT. 
Court decisions on health insurance coverage disputes have usually turned on the language of the insurance contracts, which generally bar coverage for experimental treatments but are often ambiguous with regard to what is defined as “experimental.” A recent review of such litigation noted that state courts have tended to favor policyholders in these coverage disputes, although federal courts, where disputes for self-insured companies are often decided, have been split on whether insurers must cover ABMT for breast cancer. The courts, in ruling whether an insurer must provide coverage for ABMT for breast cancer, have based their decisions on a number of factors. These have included whether ABMT is generally accepted in the medical community for the treatment of breast cancer, whether “experimental treatment” is defined clearly in the insurance policy, whether the treatment was intended primarily to benefit the patient or to further medical research, and whether the insurer’s denial of coverage was influenced by its own economic self-interest. This last argument was the focus of Fox v. Health Net of California, a highly publicized case in which a California jury awarded $89 million in damages to a policyholder whose deceased wife had been denied coverage of ABMT for breast cancer. Plaintiffs in a number of recent cases have alleged that denial of coverage for ABMT constitutes discrimination against women in violation of civil rights laws or discrimination against a specific disease in violation of the Americans With Disabilities Act. Most of these cases are still pending. Insurers have had some success in court as well. Some state courts have ruled that ABMT is still widely considered to be experimental and that the health insurance contract clearly precluded coverage of experimental treatments. Courts in at least three federal circuits have also upheld insurers’ coverage denials for ABMT to treat breast cancer. 
Courts in many of these cases permitted insurers wide discretion in making coverage decisions as long as the decisions were not arbitrary or capricious. The controversy over access to ABMT for breast cancer patients has led several states to propose or enact legislation regarding insurance coverage of the treatment. As of June 1995, at least seven states had enacted legislation that, under certain parameters, requires that insurers provide coverage for ABMT for breast cancer. At least seven additional states have similar legislation pending. Some of these laws are mandates requiring that coverage of ABMT for breast cancer be part of any basic package of health insurance. Other laws simply require that the treatment be made available as a coverage option, at perhaps a higher premium. The laws in six of the states require coverage whether or not the patient is enrolled in a clinical trial, while one state requires patients with certain types of breast cancer to join well-designed randomized or nonrandomized trials. Three of the 12 insurers we spoke with said they were required by a state mandate to cover ABMT for breast cancer for most of their beneficiaries. One of these three said it would not cover the treatment if it were not for the mandate. Those who advocate passage of the state laws argue that they are necessary to make a promising therapy available to breast cancer patients. Among the arguments used is that insurers classified ABMT for breast cancer as “experimental” as much for economic as medical reasons because ABMT is an expensive treatment. Insurers respond that ABMT for breast cancer is an experimental treatment still being evaluated in clinical trials and they should not be in the business of paying for research. Furthermore, insurers say that legislation mandating coverage of specific treatments is a poor way to make medical policy and that it distorts the market because self-funded plans are exempt from state mandates. 
The National Association of Insurance Commissioners (NAIC) is considering a model act for states that would set minimum standards of coverage for health insurers. The model act, which has not yet been approved by the full NAIC membership, would require insurers to cover an experimental treatment if the peer-reviewed medical literature has established that the treatment is an effective alternative to conventional treatment. A representative from NAIC told us that in a state that passed such an act, insurers would normally be required to cover ABMT for breast cancer if the treating physician considered it the medically appropriate treatment. Programs such as Medicaid, Medicare, and the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) have varying policies regarding coverage of ABMT for breast cancer. Coverage criteria for Medicaid, a jointly financed federal and state program that provides medical care to the poor, vary by state, but some states’ Medicaid programs will cover ABMT for breast cancer under at least some circumstances. Of nine state Medicaid programs we contacted, five provided coverage for ABMT for breast cancer. The Medicare program, which provides health coverage primarily for the elderly, specifically excludes ABMT coverage for solid tumors such as breast cancer because the Health Care Financing Administration, which administers the Medicare program, considers the treatment experimental. The practical impact of the Medicare policy is limited since the elderly are not normally appropriate candidates for ABMT treatment. CHAMPUS, the Department of Defense’s health care program for active duty and retired military personnel and their dependents and survivors, considers ABMT for breast cancer experimental but provides coverage through a demonstration project in which beneficiaries may receive ABMT by enrolling in one of three NCI randomized clinical trials. 
The Federal Employees Health Benefits Program (FEHBP), run by OPM, provides health insurance coverage for over 9 million federal employees, retirees, and dependents through over 300 independent health plans. In September 1994, OPM imposed a requirement that participating health insurers must cover ABMT for breast cancer for all FEHBP beneficiaries both in and outside of clinical trials. OPM acknowledged to us that the evidence is mixed on the effectiveness of ABMT for breast cancer. OPM officials said they decided to mandate coverage largely because so many insurers were already covering the procedure and they wanted to make the benefit uniform across all of their carriers. Insurers we spoke with said they complied with the OPM mandate, although they criticized the mandate as a political rather than clinical decision. Two of the 12 insurers we spoke with specifically mentioned the OPM decision as having influenced their own coverage policy, largely because it brought so much publicity to the issue. Medical experts, insurers, and others have debated whether ABMT has become too widely used before there is convincing evidence of its efficacy. While the medical community seeks to learn whether ABMT is more effective for some breast cancer patients than conventional chemotherapy, the number of patients receiving the treatment and the number of facilities providing it continue to grow. If ABMT were a new drug, it would be restricted mostly to patients in clinical trials until its efficacy was established and the Food and Drug Administration (FDA) had approved its use in general medical practice. Yet because ABMT is a procedure, rather than a drug, it does not require approval from FDA, making it easier for the treatment to become widely used while its effectiveness is still being tested in clinical trials. The rapid diffusion of ABMT for breast cancer has implications for patient care, health care costs, and research. 
There is debate over whether patients benefit from the rapid diffusion of a new technology that is still being tested in clinical trials. In the case of ABMT, the high doses of chemotherapy administered in conjunction with the treatment can make it a particularly difficult treatment for patients. This is evidenced both by the extreme sickness and side effects that patients may experience and by the higher rate of treatment mortality for ABMT than for conventional chemotherapy. If the clinical research ultimately shows ABMT to be preferable to conventional therapy for some groups of patients, then some of those patients will have benefited from the early diffusion of this technology. If it is shown not to be more effective, however, or if it is shown to be effective for a much smaller subset of patients than are currently being treated with the therapy, then many patients will have been unnecessarily subjected to an aggressive treatment that can be risky and produce many severe side effects. In addition, while ABMT formerly was available only at a select number of cancer research centers across the country, it is now being performed by a rapidly growing number of smaller hospitals and bone marrow transplant centers. Many physicians we talked with, including researchers and insurance company medical directors, expressed concerns that there may be some facilities that perform too few transplants to ensure sufficient staff expertise or that do not have the infrastructure needed to support this complicated procedure. Partly to address these concerns, several medical societies have developed guidelines that set out specific criteria for facilities that perform bone marrow transplants. ABMT is an expensive treatment, costing anywhere from $80,000 to over $150,000 per treatment, depending on the drugs used, any medical complications, and the length of hospital stay required. Conventional chemotherapy, by contrast, typically costs between about $15,000 and $40,000. 
The cost of ABMT has been decreasing over the years and is expected to decrease further as the technology is refined and becomes more common. Some medical centers have already been able to reduce the cost of the procedure by offering the treatment on more of an outpatient basis. While the cost per individual treatment is likely to decrease, total spending nationwide on the procedure is likely to increase. More patients in different stages of breast cancer are being treated with ABMT, a trend that is expected to continue. The fact that ABMT can be a highly profitable procedure for the institution that performs it, many experts say, has created further incentive for the diffusion of the treatment. Virtually all sides of the debate agree that ABMT is worth the cost if it is shown to be the best available treatment. But some worry that the research has not yet established which breast cancer patients, if any, are likely to benefit from ABMT and that the rapid diffusion of this costly treatment outside of research settings before its effectiveness has been proven may not be the best use of health care resources. There is clear consensus within the scientific community that, if possible, the best way to compare the effectiveness of a new treatment with conventional treatment is through randomized clinical trials. A randomized trial assigns patients either to a control group receiving conventional treatment or to one or more experimental groups receiving the treatment being tested. Random allocation helps ensure that differences in the outcome of the groups can be attributed to differences in the treatment and not to differences in patient characteristics. In the case of ABMT, some experts have argued that early research showing favorable results for ABMT may have occurred because the breast cancer patients receiving ABMT had more favorable characteristics than those who were not receiving the treatment. 
NCI has three large-scale randomized clinical trials ongoing to compare ABMT with conventional therapy for breast cancer. These trials randomly assign patients who fit certain criteria either to an experimental group that receives ABMT or to a control group that instead receives a more conventional form of therapy. NCI has had difficulty accruing enough patients to its randomized trials. Two of the three ongoing NCI trials are accruing patients at about half the rate researchers originally anticipated, and a fourth trial was closed because of low enrollment. NCI expanded the enrollment goal of the third trial to improve the statistical power of the results, and results from all three trials are not expected until nearly the turn of the century. NCI says patient accrual to the trials, although slow, appears to be progressing adequately, but many experts we spoke with questioned whether the NCI trials will ever be completed as planned. Many medical experts believe that the wide availability of the treatment is one reason researchers are having problems accruing patients to the randomized trials. ABMT is now widely available to many breast cancer patients either through other clinical trials or outside of a research trial. Under most circumstances, insurers that cover ABMT do not require that the patient enter a randomized trial, and many patients are reluctant to do so. Patients who believe ABMT is their best hope for survival may not be willing to enter a trial where they may be randomly assigned to a group receiving conventional chemotherapy. The ABMT Registry estimates that only about 5 percent of all breast cancer patients receiving ABMT are enrolled in the randomized clinical trials. 
Proponents of ABMT that we spoke with pointed out that most procedures in common medical practice today have not been subjected to the strict scrutiny of randomized trials and that this potentially lifesaving therapy should not be withheld until the NCI trials are completed many years from now. Other medical experts, insurers, and patient advocates we spoke with said that ABMT for breast cancer should only be available to patients enrolled in clinical trials, possibly only randomized trials. They argued that the proliferation of ABMT outside of randomized trials—or outside of any research setting at all—is making it difficult to gather the data necessary to assess whether and for whom ABMT may be a beneficial treatment. A large number of clinical trials are being conducted on ABMT for breast cancer apart from the NCI randomized trials. Many major cancer research centers are conducting nonrandomized trials, and numerous clinical trials are also under way at smaller hospitals and private transplant centers. Yet some experts have argued that many of these trials will contribute little useful information because the study population is too small, the trial is not sufficiently well designed, or the results will not be published. These experts are concerned that the proliferation of smaller clinical trials may be diverting patients from larger clinical trials, including the NCI randomized clinical trials, that are more likely to yield meaningful results about the effectiveness of ABMT for breast cancer. The controversy over ABMT has also highlighted the issue of the extent to which health insurers should pay for the costs of clinical research. Clinical research in the United States has been financed primarily by the federal government, private research institutions, the pharmaceutical industry, and insurers. Insurers have often paid the patient care costs for certain clinical trials. 
But given federal funding constraints and other economic pressures, many researchers and other experts we spoke with believe that health insurers should assume the costs of more clinical trials, especially the patient care costs of well-designed trials that offer promising treatments in an advanced stage of testing. They say the insurers would have to pay for patient care costs even if the patient were not in a trial and that the trials will ultimately benefit everyone by helping identify effective treatments. The insurance industry’s position has been that insurers should pay only for standard medical care and that insurers should not be in the business of financing research. But insurers have made exceptions, especially for clinical trials involving promising treatments for patients with terminal illnesses. Many insurance industry officials we spoke with said they would be open to paying the costs of some clinical trials for promising treatments, as long as the costs were to be spread equitably among all insurers and health providers, and as long as there were strict standards to ensure that the research being funded was of high quality. The controversy over insurance coverage of ABMT for breast cancer illustrates several issues related to the dissemination and insurance coverage of new technologies. The rapid diffusion of new, often expensive, medical technologies puts in conflict several goals of the U.S. health care system: access to the best available care, the ability to control health care costs, and the ability to conduct research adequate to assess the efficacy of a new treatment. Specifically, the ABMT controversy illustrates the challenge health insurers in the United States face in determining whether and when to provide coverage for a new technology of unknown efficacy, given the decentralized process for assessing new medical technologies. 
Insurers have less clear direction regarding coverage of medical procedures than they do for drugs because of FDA’s role in drug approval. Insurers thus have wide discretion, and little nationwide guidance, in determining whether and when a medical procedure should no longer be considered “experimental” and should be covered. The result can be great disparity in the coverage policies of insurers, with coverage decisions being influenced not just by the medical data and clinical judgments, but also by factors such as lawsuits and public relations concerns. Furthermore, the lack of a systematic process for the dissemination of new technologies in the United States raises issues for the health care system. Those who advocate widespread access to experimental technologies argue that patients should not be denied access to promising therapies, especially when clinical trials for those therapies may take many years. Those who advocate restricting access to new technologies argue that the rapid diffusion of a new treatment before its effectiveness has been definitively proven is not ultimately beneficial to patient care, may waste resources, and may impede controlled research on the treatment. NIH provided us with comments on a draft of this report. They agreed with the conclusions and stated that the report presented a balanced, thoughtful discussion of the controversial issues. NIH also noted that in the past, many insurers provided coverage only in the context of clinical trials, but this became untenable because of the factors discussed in the report, particularly the OPM decision to require FEHBP coverage of the treatment both inside and outside of clinical trials. NIH also recommended some technical changes, which we incorporated in the report where appropriate. (See app. I for a copy of the NIH comments.) 
OPM also reviewed the draft report and provided comments regarding the decision to require that all FEHBP health insurance plans provide coverage for ABMT for breast cancer. Their comments reemphasized that (1) many FEHBP plans were already providing this coverage; (2) the OPM decision was based on a desire to broaden coverage to all FEHBP enrollees; and (3) each plan retains the flexibility to determine when and how the treatment will be covered, but plans that limit coverage to patients enrolled in clinical trials have to offer coverage in nonrandomized as well as randomized trials. (See app. II for a copy of OPM’s comments.) As agreed with your office, unless you release its contents earlier, we plan no further distribution of this report for 30 days. At that time, we will send copies to other congressional committees and members with an interest in this matter; the Secretary of Health and Human Services; the Director, NIH; and the Director, OPM. This report was prepared by William Reis, Assistant Director; Joan Mahagan; and Jason Bromberg under the direction of Mark Nadel, Associate Director. Please contact me on (202) 512-7119 or Mr. Reis on (617) 565-7488 if you or your staff have any questions on this report.
Pursuant to a congressional request, GAO reviewed insurance coverage of autologous bone marrow transplantation (ABMT) for breast cancer, focusing on: (1) the factors insurers consider when deciding whether to cover the treatment; (2) the effectiveness of the treatment; and (3) the consequences of the increased use and insurance coverage of the treatment while it is still in clinical trials. GAO found that: (1) the use of ABMT has become widespread and many insurers cover ABMT; (2) sufficient data do not exist to establish that ABMT is more effective than traditional chemotherapy; (3) despite the lack of data, many insurers cover ABMT because the research results on its effectiveness are promising, its use is widespread, and they fear costly litigation battles with their customers; (4) as of June 1995, seven states had enacted laws that mandate insurance coverage for ABMT and seven other states had similar laws pending; (5) among federally funded health insurance programs, Medicaid coverage for ABMT varies by state, Medicare does not cover ABMT for solid tumors such as breast cancer, and the Civilian Health and Medical Program of the Uniformed Services covers ABMT through a demonstration project in which beneficiaries may receive the treatment by enrolling in a randomized clinical trial; and (6) the widespread use of ABMT prior to conclusive data about its effectiveness may jeopardize patients unresponsive to the treatment, raise health care costs, and deter participation in randomized clinical trials.
The Navy’s fleet includes aircraft carriers, cruisers, destroyers, frigates, littoral combat ships, submarines, amphibious warfare, mine warfare, combat logistics, and fleet support ships. Our review focused on surface combatant and amphibious warfare ships, which constitute slightly less than half of the total fleet. Table 1 shows the classes of surface ships we reviewed along with their numbers, expected service lives, and current average ages. Figure 1 shows the administrative chain of command for Navy surface ships. The U.S. Pacific Fleet and U.S. Fleet Forces Command organize, man, train, maintain, and equip Navy forces, develop and submit budgets, and develop required and sustainable levels of fleet readiness, with U.S. Fleet Forces Command serving as the lead for fleet training requirements and policies to generate combat-ready Navy forces. The Navy’s surface type commanders—Commander, Naval Surface Force, U.S. Pacific Fleet and Commander, Naval Surface Force, Atlantic—have specific responsibilities for the maintenance, training, and readiness of their assigned surface ships. To meet the increased demands for forces following the events of September 2001, the Navy established a force generation model—the Fleet Response Plan—and in August 2006 the Navy issued a Fleet Response Plan instruction. The plan seeks to build readiness so the Navy can surge a greater number of ships on short notice while continuing to meet its forward-presence requirements. As depicted in table 2, there are four phases in the Fleet Response Plan 27-month cycle that applies to surface combatant and amphibious warfare ships. The four Fleet Response Plan phases are (1) basic, or unit-level training; (2) integrated training; (3) sustainment (which includes deployment); and (4) maintenance. In September 2009, the Commanders of U.S. Pacific Fleet and U.S. Fleet Forces directed Vice Admiral Balisle, USN-Ret., to convene and lead a Fleet Review Panel to assess surface force readiness. 
The Panel issued its report in February 2010. It stated that Navy decisions made to increase efficiencies throughout the fleet had adversely affected surface ship current readiness and life cycle material readiness. Reducing preventative maintenance requirements and the simultaneous cuts to shore infrastructure were two examples of the detrimental efficiencies cited in the report. The report also stated that if the surface force stayed on the present course, surface ships would not reach their expected service lives. For instance, it projected that destroyers would achieve 25-27 years of service life instead of the 35-40 years expected. The report concluded that each decision to improve efficiency may well have been an appropriate attempt to meet Navy priorities at the time, but there was limited evidence to identify any changes that were made with surface force readiness as the top priority—efficiency was sought over effectiveness. The Fleet Review Panel made several maintenance, crewing, and training recommendations that it stated should be addressed not in isolation but as a circle of readiness. According to the report, it will take a multi-faceted, systematic solution to stop the decline in readiness and begin recovery. We have previously reported on the Navy’s initiatives to achieve greater efficiencies and reduce costs. In June 2010, we issued a report regarding the training and crew sizes of cruisers and destroyers. In it we found that changes in training and reductions in crew sizes had contributed to declining material conditions on cruisers and destroyers. We recommended that the Navy reevaluate its ship workload requirements and develop additional metrics to measure the effectiveness of Navy training. DOD agreed with these recommendations. Also, in July 2011 we reported on the training and manning information presented in the Navy’s February 2011 report to Congress regarding ship readiness. 
The Navy’s report included information on ships’ ability to perform required maintenance tasks and pass inspection, as well as any projected effects on the lifespan of individual ships. We concluded that the Navy’s report did not discuss data limitations or caveats for any of the information it presented, including its conclusions and recommendations. However, we found that the Navy did outline specific actions that it was taking or planned to take to address the declines in readiness due to manning and crew changes. In January 2011, the commanders of U.S. Fleet Forces Command and U.S. Pacific Fleet jointly instructed their type commanders to develop a pilot program to “establish a sequenced, integrated, and building block approach” to achieve required readiness levels. This pilot program began in March 2011, and in March 2012, near the end of the pilot, the Navy issued its Surface Force Readiness Manual, which details a new strategy for optimizing surface force readiness throughout the Fleet Response Plan. The strategy calls for integrating and synchronizing maintenance, training, and resources among multiple organizations such as Afloat Training Groups and Regional Maintenance Centers. For the period from 2008 to 2012, available data show variations in material readiness between different types of ships—such as material readiness differences between amphibious warfare ships and surface combatants—but data limitations prevent us from drawing any conclusions concerning improvements or declines in the overall readiness of the surface combatant and amphibious warfare fleet during the period. Through a variety of means and systems, the Navy collects, analyzes, and tracks data that show the material condition of its surface ships—in terms of both their current and life cycle readiness. 
Three of the data sources the Navy uses to provide information on the material condition of ships are casualty reports; Defense Readiness Reporting System – Navy (DRRS-N) reports; and Board of Inspection and Survey (INSURV) material inspection reports. None of these individual data sources is designed to provide a complete picture of the overall material condition of the surface force. However, the data sources can be viewed as complementary and, when taken together, provide data on both the current and life cycle material readiness of the surface force. For example, some casualty report data must be updated every 72 hours and provides information on individual pieces of equipment that are currently degraded or out of commission. DRRS-N data is normally reported monthly and focuses on current readiness by presenting information on broader capability and resource areas, such as ship command, control, and communications, rather than individual equipment. INSURV data is collected less frequently—ships undergo INSURV inspections about once every 5 years—but the data is extensive and includes inspection results for structural components, individual pieces of equipment, and broad systems, as well as assessments of a ship’s warfighting capabilities. The INSURV data is used to make life cycle decisions on whether to retain or decommission Navy ships. Casualty reports, DRRS-N data, and INSURV reports are all classified when they identify warfighting capabilities of individual ships. However, when casualty report and INSURV information is consolidated and summarized above the individual ship level, it is unclassified. Even summary DRRS-N data is classified, and therefore actual DRRS-N data is not included in this unclassified report. Table 3 provides additional details on each of the data sources. INSURV and casualty report data from January 2008 through March 2012 consistently show differences in material readiness between different types of ships. 
As illustrated in table 4, there are differences between frigates, destroyers, cruisers, and amphibious warfare ships in their overall INSURV ratings—which reflect ship abilities to carry out their primary missions; their INSURV Equipment Operational Capability scores—which reflect the material condition of 19 different functional areas; and their average numbers of casualty reports—which reflect material deficiencies in mission essential equipment. The differences in average Equipment Operational Capability scores and in average numbers of casualty reports were statistically significant. See appendix I for additional details regarding the statistical significance of the average Equipment Operational Capability scores and the average numbers of casualty reports. For example, the data in table 4 show that, for the time period covered, the material condition of amphibious ships is generally lower than that of frigates and destroyers. Specifically, a lower percentage of amphibious warfare ships received overall “satisfactory” ratings in INSURV inspections than destroyers and frigates; likewise, amphibious ships had lower average INSURV Equipment Operational Capability scores than those two types of ships. Amphibious warfare ships also had, on average, more casualty reports per ship than destroyers and frigates. According to Navy officials, some of these differences may result from differences in the size, complexity, and age of the various types of ships. Likewise, cruisers have a lower material condition than destroyers. The data show that 22 percent of cruisers were rated “unsatisfactory” compared to 3 percent of destroyers, and the average cruiser Equipment Operational Capability score of 0.786 was also lower than the destroyer score of 0.829. Finally, the average of 18 casualty reports per cruiser was about 24 percent higher than the 14.5 casualty reports per destroyer. 
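To illustrate the kind of test behind such statistical-significance findings, a difference between two ship types' average Equipment Operational Capability scores can be checked with a two-sample (Welch's) t-test. The following is a minimal sketch in standard-library Python; the scores are hypothetical illustrative values, not the inspection data analyzed in this report:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of
    freedom, which does not assume the two groups share a variance."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)        # sample variances
    se2 = va / na + vb / nb                  # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical Equipment Operational Capability scores (0 to 1 scale);
# illustrative numbers only, not the Navy's actual inspection results.
destroyers = [0.84, 0.81, 0.86, 0.79, 0.85, 0.82, 0.83]
cruisers = [0.77, 0.80, 0.75, 0.79, 0.78, 0.81]

t, df = welch_t(destroyers, cruisers)
print(f"t = {t:.2f}, approx. df = {df:.1f}")
```

With these made-up numbers the statistic comes out near t = 3.6 at roughly 11 degrees of freedom, well above the conventional cutoff of about 2, which would indicate a significant group difference. The actual methodology behind the table 4 comparisons is described in appendix I.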
DRRS-N data also show that there are readiness differences between the Navy’s different types of ships, but the precise differences are classified and therefore are not included in this report. Material readiness data show some clear differences between types of ships, as shown in table 4. However, when we considered the surface combatant and amphibious warfare ships in aggregate, we were unable to draw any conclusions concerning trends in the overall readiness of these ships. One readiness measure—casualty reports—indicates that the material readiness of these ships has declined, but other readiness measures show both upward and downward movement. Because of the relatively small number of INSURV inspections conducted each year, it is not possible to draw a conclusion about trends in the material readiness of surface combatant and amphibious warfare ships from January 2008 to March 2012 based on INSURV data. Casualty report data from January 2008 to March 2012 show that there is a statistically significant upward trend in the average daily number of casualty reports per ship for both surface combatants and amphibious warfare ships, which would indicate declining material readiness. Specifically, the average daily numbers of casualty reports per ship have been increasing at an estimated rate of about 2 and 3 per year, respectively. Furthermore, for both ship types, there is not a statistically significant difference in the trend when comparing the periods before February 2010—when the Fleet Review Panel’s findings were published—and after February 2010. According to Navy officials, increases in casualty reports could reflect the greater number of material inspections and evaluations conducted now than in the past, which are likely to identify more material deficiencies and generate more casualty reports. Figure 2 shows the increases in casualty reports over time. 
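A trend estimate of this kind (casualty reports per ship rising by roughly 2 to 3 per year) corresponds to the slope of an ordinary least-squares fit of report counts against time. The sketch below, again standard-library Python, uses made-up yearly averages rather than the Navy's actual casualty report data:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares intercept and slope for a simple
    linear trend fit, y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx          # slope: change in reports per ship per year
    a = my - b * mx        # intercept
    return a, b

# Hypothetical average daily casualty reports per ship, by year;
# illustrative values chosen to mimic a rise of about 2 per year.
years = [2008, 2009, 2010, 2011, 2012]
counts = [10.1, 12.3, 13.9, 16.2, 18.0]

a, b = ols_slope(years, counts)
print(f"estimated trend: {b:.2f} additional reports per ship per year")
```

Testing whether such a fitted slope differs significantly from zero (for example, with a t-test on the slope estimate) is what distinguishes a statistically significant trend from random year-to-year fluctuation.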
Table 5 shows the summary data for all the INSURV inspections of surface combatant and amphibious warfare ships that were conducted from January 2008 through March 2012. Throughout the period, the data fluctuate in both an upward and a downward direction. For example, the proportion of surface combatant and amphibious warfare ships rated ‘satisfactory’ fell 11 percentage points, from 83 percent in 2008 to 72 percent in 2010, and then increased to 77 percent in 2011. Average Equipment Operational Capability scores also fluctuated throughout the period—increasing in 2011 and declining in 2009, 2010, and 2012. As previously noted, because of the relatively small number of INSURV inspections conducted each year, it is not possible to draw a conclusion about trends in the material readiness of surface combatant and amphibious warfare ships between 2008 and 2012 based on INSURV data. The casualty report and INSURV data that we analyzed are consistent with the findings of the Navy’s Fleet Review Panel, which found that the material readiness of the Navy’s ships had been declining prior to 2010. Our analysis showed a statistically significant increase in casualty reports between 2008 and 2010, which would indicate a declining material condition. Although the statistical significance of the INSURV data from 2008 to 2010 could not be confirmed due to the small number of ships that were inspected during this time period, that data showed declines in both the percentage of satisfactory inspections and average Equipment Operational Capability scores. The Navy has taken steps intended to improve the readiness of its surface combatant and amphibious warfare ships. However, it faces risks to achieving full implementation of its recent strategy and has not assessed these risks or developed alternative implementation approaches to mitigate risks. 
The Navy has taken several steps to help remedy problems it has identified in regard to maintaining the readiness of its surface combatant and amphibious warfare ships. In the past, material assessments, maintenance, and training were carried out separately by numerous organizations, such as the Regional Maintenance Centers and Afloat Training Groups. According to the Navy, this sometimes resulted in overlapping responsibilities and duplicative efforts. Further, the Navy has deferred maintenance due to high operational requirements. The Navy recognizes that deferring maintenance can affect readiness and increase the costs of later repairs. For example, maintenance officials told us that Navy studies have found that deferring maintenance on ballast tanks to the next major maintenance period will increase costs by approximately 2.6 times, and a systematic deferral of maintenance may cause a situation where it becomes cost prohibitive to keep a ship in service. This can lead to early retirements prior to ships reaching their expected service lives. In the past few years the Navy has taken a more systematic and integrated approach to address its maintenance requirements and mitigate maintenance problems. For example, in November 2010 it established the Surface Maintenance Engineering Planning Program, which provides life cycle management of maintenance requirements, including deferrals, for surface ships and monitors life cycle repair work. Also, in December 2010 the Navy established Navy Regional Maintenance Center headquarters, and began increasing the personnel levels at its intermediate maintenance facilities in June 2011. More recently, in March 2012, the Navy set forth a new strategy in its Surface Force Readiness Manual. 
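The cost growth from deferral described above can be illustrated with simple arithmetic. The sketch below is not Navy analysis: the 2.6 multiplier comes from the Navy studies cited by maintenance officials, but the baseline repair cost and the assumption that the multiplier compounds across successive deferrals are hypothetical, for illustration only.

```python
# Illustration of the deferred-maintenance cost growth cited above:
# Navy studies found that deferring ballast-tank maintenance to the
# next major maintenance period increases its cost by about 2.6 times.
DEFERRAL_MULTIPLIER = 2.6  # from the Navy studies cited in the report

def deferred_cost(on_time_cost: float, deferrals: int = 1) -> float:
    """Estimated repair cost after `deferrals` successive deferrals,
    assuming (hypothetically) that the multiplier compounds each time
    work is pushed to the next major maintenance period."""
    return on_time_cost * DEFERRAL_MULTIPLIER ** deferrals

# A hypothetical $1.0 million repair deferred once, then twice.
print(round(deferred_cost(1_000_000)))     # 2600000
print(round(deferred_cost(1_000_000, 2)))  # 6760000
```

The compounding assumption shows how systematic deferral can quickly make a repair cost prohibitive, consistent with the officials' concern about early retirements.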
This strategy is designed to integrate material assessments, evaluations, and inspections with maintenance actions and training and to ensure that surface ships are (1) ready to perform their current mission requirements and (2) able to reach their expected service lives. The manual addresses the need for the organizations involved in supporting ship readiness to take an integrated, systematic approach to eliminate redundancy, build training proficiency to deploy at peak readiness, and reduce costs associated with late-identified work. According to the Surface Force Readiness Manual, readiness is based upon a foundation of solid material condition that supports effective training. In line with this integrated maintenance and training approach, the new strategy tailors the 27-month Fleet Response Plan by adding a fifth phase that is not included in the Fleet Response Plan: the shakedown phase. This phase allows time between the end of the maintenance phase and the beginning of the basic phase to conduct a material assessment of the ship to determine if equipment conditions are able to support training. In addition, the new strategy shifts the cycle's starting point from the basic phase to the sustainment phase to support the deliberate planning required to satisfactorily execute the maintenance phase and integrate maintenance and training for effective readiness. Under the new strategy, multiple assessments, which previously certified ship readiness all throughout the Fleet Response Plan cycle, will now be consolidated into seven readiness evaluations at designated points within the cycle. Because each evaluation may have several components, one organization will be designated as the lead and will be responsible for coordinating the evaluation with the ship and other assessment teams, thereby minimizing duplication and gaining efficiencies through synchronization. 
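The restructured cycle described above can be sketched as a simple ordered sequence. The phase names come from the report; representing the notional 27-month cycle as a flat, repeating list (and the exact phase ordering) is a simplifying assumption inferred from the report's description, not the Navy's own notation.

```python
# Sketch of the notional 27-month cycle under the Surface Force
# Readiness Manual: the cycle now starts with the sustainment phase,
# and the new shakedown phase sits between maintenance and basic so
# equipment condition can be assessed before training begins.
NOTIONAL_CYCLE_MONTHS = 27

# Ordering is an assumption inferred from the report's description.
PHASES = ["sustainment", "maintenance", "shakedown", "basic", "integrated/advanced"]

def next_phase(current: str) -> str:
    """Return the phase that follows `current`, wrapping at cycle end."""
    i = PHASES.index(current)
    return PHASES[(i + 1) % len(PHASES)]

print(next_phase("maintenance"))  # shakedown
```

A ship that skips the maintenance phase (as the report notes some high-tempo ships do) would simply remain at "sustainment" in this sketch rather than advancing through the sequence.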
Figure 3 shows the readiness evaluations that occur within each phase of the strategy's notional 27-month cycle. As previously noted, development of the Navy's new strategy began with a pilot program. The pilot was conducted on ships from both the East and West coasts beginning in March 2011. Initial implementation of the new strategy began in March 2012 and is currently staggered, with ships' schedules being modified to support the strategy's integration of training, manning, and maintenance efforts. Ships that were not involved in the pilot program will begin implementing the strategy when they complete the maintenance phase of the Fleet Response Plan cycle. The Navy plans to fully implement the new strategy in fiscal year 2015 (i.e., to have all surface ships operating under the strategy and the resources needed to conduct the strategy's required tasks in place). While the Surface Force Readiness Manual states that providing a standard, predictable path to readiness is one of the tenets of the Navy's new strategy, it also acknowledges that circumstances may arise that will require a deviation from the notional 27-month cycle. Certain factors could affect the Navy's ability to fully implement its strategy, but the Navy has not assessed the risks to implementation or developed alternatives. As we have previously reported, risk assessment can provide a foundation for effective program management. Risk management is a strategic process to help program managers make decisions about assessing risk, allocating finite resources, and taking actions under conditions of uncertainty. To carry out a comprehensive risk assessment, program managers need to identify program risks from both external and internal sources, estimate the significance of these risks, and decide what steps should be taken to best manage them. Although such an assessment would not assure that program risks are completely eliminated, it would provide reasonable assurance that such risks are being minimized. 
As the Navy implements its new surface force readiness strategy, one risk we identified involves the tempo of operations. While the strategy acknowledges that circumstances may arise that require a deviation from the 27-month Fleet Response Plan cycle, it also states that predictability is necessary in order to synchronize the maintenance, training, and operational requirements. However, the tempo of operations is currently higher than planned for in the Fleet Response Plan. According to Navy officials, this makes execution of the strategy challenging. High operational tempos pose challenges because they could delay the entry of some ships into the strategy as well as the movement of ships through the strategy. For example, some ships that have been operating at increased tempos, such as the Navy's ballistic missile defense cruisers and destroyers, have not followed the Navy's planned 27-month cycle. Navy officials told us that requirements for ballistic missile defense ships are very high, leading to quick turnarounds between deployments. They said that, in some cases, ships may not have time for the maintenance or full basic and integrated/advanced training phases. The manual notes that ships without an extended maintenance period between deployments will remain in the sustainment phase. According to Navy guidance, the maintenance phase is critical to the success of the Fleet Response Plan since this is the optimal period in which lifecycle maintenance activities—major shipyard or depot-level repairs, upgrades, and modernization installations—occur. Thus, ships with a high operational tempo that do not enter the maintenance phase as planned will have lifecycle maintenance activities deferred, which could lead to increased future costs. Further, ships that do not enter the maintenance phase may be delayed entering into the strategy. This delay would be another risk to the implementation of the Navy's new readiness strategy and ships' lifecycle readiness. 
In addition, the Navy’s plan to decrease the number of surface combatant and amphibious warfare ships through early retirements is likely to increase operational tempos even further for many ships that remain in the fleet. DOD’s fiscal year 2013 budget request proposes the early retirement of seven Aegis cruisers and two amphibious ships in fiscal years 2013 and 2014. When fewer ships are available to meet a given requirement, ships must deploy more frequently. Table 6 shows the ships that the Navy plans to retire early, their ages at retirement, and their homeports. Also, recent changes in national priorities, which call for an increased focus on the Asia-Pacific region that places a renewed emphasis on air and naval forces, make it unlikely that operational tempos will decline. At the same time, DOD will still maintain its defense commitments to Europe and other allies and partners. In addition to the risks posed by high operational tempos, several supporting organizations currently have staffing levels that are below the levels needed to fulfill their roles in the new integrated readiness strategy. For example, Navy Afloat Training Group officials have identified the staffing levels required to fully support the strategy and reported that they need an additional 680 personnel to fully execute the new strategy. As of August 2012, the Navy plans to reflect its funding needs for 410 of the 680 personnel in its fiscal year 2014 budget request and for the remaining 270 in subsequent requests. Under the new strategy, the Afloat Training Groups provide subject matter experts to conduct both material and individual and team training. Previously, the Afloat Training Groups used a “Train the Trainer” methodology, which did not require the same number of trainers because ships’ crews included their own system experts to train the crew and the Afloat Training Groups trained only the ships’ trainers. 
Afloat Training Group Pacific officials told us that there are times when the training events that can be offered—to ships currently under the strategy and/or ships that have not yet implemented the strategy—are limited because of their staffing level gaps. Current staffing allows them to execute all portions of the basic phase in select mission areas only. Other mission areas are expected to gain full training capability as staffing improves over the next several years. Until then, Afloat Training Group officials plan to schedule training events within the limited-capability mission areas based on a prioritized hierarchy. Further, Surface Maintenance Engineering Planning Program officials told us they are also short of staff. They said they need 241 staff to perform their requirements, but currently have 183 staff. They stated that while current budget plans include funding to reach the 241-staff level in 2013, funding will be reduced below the 241-staff requirement in 2014. As with the Afloat Training Groups and the Surface Maintenance Engineering Planning Program, officials at the Navy Regional Maintenance Center headquarters told us they currently lack the staff needed to fully execute the ship readiness assessments called for in the new strategy. Ship readiness assessments evaluate both long-term lifecycle maintenance requirements (e.g., preservation to prevent structural corrosion) and maintenance to support current mission requirements (e.g., preventative and corrective maintenance for the Aegis Weapons System). According to the officials, ship readiness assessments allow them to deliberately plan the work to be done during major maintenance periods and prioritize their maintenance funds. The goal is for ships to receive all the prescribed ship readiness assessments in fiscal year 2013. 
However, Navy officials stated that they are evaluating the impact of recent readiness assessment revisions on changes in the Regional Maintenance Center’s funding and personnel requirements. The Navy has not undertaken a comprehensive assessment of the impact of high operational tempos, staffing shortages, or any other risks it may face in implementing its new readiness strategy, nor has it developed alternatives to mitigate any of these risks. The Navy does recognize in its strategy that circumstances may arise that require ships to deviate from the 27-month Fleet Response Plan cycle and has considered the adjustments to training that would need to take place in such a case. However, the strategy does not discuss, nor identify plans to mitigate, maintenance challenges that could arise from delays in full implementation. We believe the risks we identified may delay full implementation, which could lead to continued deferrals of lifecycle maintenance, increasing costs and impacting the Navy’s ability to achieve expected service lives for its ships. Today’s fleet of surface combatant and amphibious warfare ships provides core capabilities that enable the Navy to fulfill its missions. In order to keep this fleet materially and operationally ready to meet current missions and sustain the force for future requirements, the Navy must maximize the effective use of its resources and ensure that its ships achieve their expected service lives. Full implementation of its new strategy, however, may be delayed if the Navy does not account for the risks it faces and devise plans to mitigate those risks. Navy organizations have taken individual steps to increase their staffing levels, but the Navy has yet to consider alternatives if the integration of assessment, maintenance, and training under the strategy is delayed. 
Without an understanding of risks to full implementation and plans to mitigate them, the Navy is likely to continue to face the challenges it has encountered in the past, including the increased costs that arise from deferring maintenance and the early retirement of ships. This could impact the Navy’s ability to meet its long-term commitments. Further, ongoing maintenance deferrals—and early retirements that increase the pace of operations for the remaining surface force—could potentially impact the Navy’s ability to meet current missions. To enhance the Navy’s ability to implement its strategy to improve surface force material readiness, we recommend that the Secretary of Defense direct the Secretary of the Navy to take the following two actions: Develop a comprehensive assessment of the risks the Navy faces in implementing its Surface Force Readiness Manual strategy, and alternatives to mitigate those risks. Specifically, a comprehensive risk assessment should include an assessment of risks such as high operational tempos and availability of personnel. Use the results of this assessment to make any necessary adjustments to its implementation plan. In written comments on a draft of this report, DOD partially concurred with our recommendations. Overall, DOD stated it agrees that risk assessment is an important component of program management, but does not agree that a comprehensive assessment of the risks associated with implementation of the Navy’s Surface Force Readiness strategy is either necessary or desirable. It also stated that existing assessment processes are sufficient to enable adjustments to implementation of the strategy. DOD also noted several specific points. For example, according to DOD, a number of factors impact surface ship readiness and some of those factors, such as budgetary decisions, emergent operational requirements, and unexpected major ship repair events, are outside of the Navy’s direct control. 
DOD further stated that the strategy, and the organizations that support the strategy, determine and prioritize the full readiness requirement through reviews of ship material condition and assess the risk of any gaps between requirements and execution, as real world events unfold. DOD also noted that the Surface Ship Readiness strategy has a direct input into the annual Planning, Programming, Budgeting, and Execution (PPBE) process. It stated that its position is that execution of the strategy and PPBE process adequately identify and mitigate risks. DOD further believes that a separate one-time comprehensive assessment of risks, over and above established tracking mechanisms, is an unnecessary strain on scarce resources. Moreover, DOD stated that the Navy now has the technical resources available, using a disciplined process, to inform risk-based decisions that optimize the balance between current operational readiness and future readiness tied to expected service life through the standup of its Surface Maintenance Engineering Planning Program and Commander Navy Regional Maintenance Centers. Specifically, DOD noted documenting and managing the maintenance requirement is now a fully integrated process. According to DOD, the Navy’s Surface Type Commanders identify and adjudicate risks to service life and this approach is consistent with fundamental process discipline and risk management executed by the submarine and carrier enterprises. Finally, according to DOD, the Navy is continually assessing progress in achieving the strategy and has the requisite tools in place to identify changes in force readiness levels that may result from resource constraints, and will adjust the process as necessary to ensure readiness stays on track. 
As described in our report, we recognize that the Navy has taken a more systematic and integrated approach to address its maintenance requirements and mitigate problems, and we specifically cite the Surface Force Readiness strategy and actions such as standing up the Surface Maintenance Engineering Planning Program and Commander Navy Regional Maintenance Centers. We also recognize that the Navy conducts various assessments of ship readiness and considers resource needs associated with implementing the strategy as part of the budget process. However, we do not agree that any of the current assessments or analyses offer the type of risk assessment that our report recommends. For example, the PPBE process does not address the specific risk that high operational tempos pose to implementation of the strategy, nor does it present alternatives for mitigating this risk. Also, despite the ongoing efforts by Surface Maintenance Engineering Planning Program and Commander Navy Regional Maintenance Centers officials to document and manage the maintenance requirement of the surface force in an integrated process, both organizations are currently understaffed. The challenges identified in our report, including high operational tempos and current organizational staffing levels, have hindered the Navy’s ability to achieve the desired predictability in ships’ operations and maintenance schedules, as called for in its strategy. Given factors such as the Navy’s plan to decrease the number of ships as well as changes in national priorities that place a renewed emphasis on naval forces in the Asia-Pacific region, the challenges we identified are unlikely to diminish in the near future, and there could be additional risks to the strategy’s implementation. 
Without an understanding of the full range of risks to implementing its strategy and plans to mitigate them, the Navy is likely to continue to face the challenges it has encountered in the past, including increased costs that arise from deferring maintenance and the early retirement of ships. Therefore, we continue to believe that a comprehensive risk assessment is needed. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To assess how the Navy evaluates the material readiness of its surface combatant and amphibious warfare ships and the extent to which data indicate trends or patterns in the material readiness of these ships, we interviewed officials from the Commander Naval Surface Force, U.S. Pacific Fleet and the Commander Naval Surface Force, U.S. Atlantic Fleet, and visited a number of ships, including the USS Leyte Gulf (CG 55), USS Arleigh Burke (DDG 51), USS San Antonio (LPD 17), and USS Higgins (DDG 76). We obtained and analyzed Navy policies and procedures for determining surface force readiness, as well as various studies and reports on the Navy’s material readiness process. We obtained and analyzed material readiness data from the Navy’s Board of Inspection and Survey (INSURV) as well as the United States Fleet Forces Command (USFF). We also met with Navy officials from the Board of Inspection and Survey and the United States Fleet Forces Command to complement our data analysis, and observed the INSURV material inspection of the USS Cole (DDG 67). 
We limited our data analysis to the period from January 2008 to March 2012 in order to cover a period of approximately two years prior to, and two years following, publication of the Fleet Review Panel of Surface Force Readiness report. Specifically, we analyzed data for the Navy’s guided-missile cruisers (CG 47 class), guided-missile destroyers (DDG 51 class), frigates (FFG 7 class), amphibious assault ships (LHA 1 and LHD 1 classes), amphibious transport dock ships (LPD 4 and LPD 17 classes), and dock landing ships (LSD 41 and LSD 49 classes). We analyzed data from three of the primary data sources the Navy uses to provide information on the material condition of ships: casualty reports; Board of Inspection and Survey (INSURV) material inspection reports; and the Defense Readiness Reporting System – Navy (DRRS-N) reports. None of these individual data sources are designed to provide a complete picture of the overall material condition of the surface force. From the Board of Inspection and Survey we met with INSURV officials and observed an INSURV inspection onboard the USS Cole (DDG 67) conducted on December 12, 2011 and December 14, 2011. We obtained all INSURV initial material inspection reports dating from 2008 through 2012 for cruisers, destroyers, frigates, and amphibious warfare ships. We then extracted relevant data from those reports, including INSURV’s overall assessment of the material condition of these surface ships (satisfactory, degraded, unsatisfactory), Equipment Operational Capability scores for the different functional areas of ships systems (on a 0.00 to 1.00 scale), and dates when these ships were inspected. Although INSURV provides an overall assessment, we included Equipment Operational Capability scores to provide additional insight into the material condition of a ship’s systems. Overall assessments focus on a ship’s material readiness to perform primary missions. 
As such, while multiple individual systems may be in an unsatisfactory condition (Equipment Operational Capability scores below 0.80 are considered “degraded,” while those below 0.60 are considered “unsatisfactory”), the ship may receive an overall rating of “satisfactory” due to its material readiness to meet its primary missions. Figure 4 below shows the process for determining INSURV ratings, with that segment for determining Equipment Operational Capability scores highlighted. We analyzed both INSURV overall ratings and Equipment Operational Capability scores to identify differences in material readiness between types of ships. To determine if there were statistically significant differences in the Equipment Operational Capability scores among four types of ships (cruisers, destroyers, frigates, and amphibious ships), we took the average of the various Equipment Operational Capability scores for each ship and conducted a one-way analysis of variance (ANOVA). In addition, we conducted post-hoc multiple comparison means tests to determine which ship types, if any, differed. Based on the results of this analysis, we concluded that there were statistically significant differences in the average Equipment Operational Capability score between the four ship types (p-value < 0.0001). Specifically, the average for amphibious ships was significantly lower, at the 95 percent confidence level, than the average scores for cruisers, destroyers, and frigates and the average for cruisers was significantly lower than the average for destroyers. In presenting our results, we standardized relevant data where necessary in order to present a consistent picture. For example, in 2010, the Board of Inspection and Survey moved from rating those ships with the worst material condition as “unfit for sustained combat operations” to rating them as “unsatisfactory.” We have treated both these ratings as “unsatisfactory” in this report. 
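The Equipment Operational Capability thresholds and the one-way ANOVA comparison described above can be sketched as follows. The thresholds (below 0.80 "degraded," below 0.60 "unsatisfactory") come from the report; the per-ship scores are invented for illustration, and the F statistic is computed from scratch here rather than with the statistical package used in GAO's actual analysis.

```python
# Sketch of the INSURV EOC thresholds and a one-way ANOVA comparison
# of average EOC scores across ship types. The score data are
# hypothetical; GAO's analysis used the real inspection reports.

def classify_eoc(score: float) -> str:
    """Apply the EOC thresholds cited in the report (0.00-1.00 scale)."""
    if score < 0.60:
        return "unsatisfactory"
    if score < 0.80:
        return "degraded"
    return "satisfactory"

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of score groups:
    between-group mean square divided by within-group mean square."""
    values = [v for g in groups for v in g]
    n, k = len(values), len(groups)
    grand_mean = sum(values) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-ship average EOC scores, grouped by ship type.
cruisers   = [0.78, 0.81, 0.75, 0.80]
destroyers = [0.85, 0.88, 0.83, 0.86]
frigates   = [0.82, 0.84, 0.80, 0.83]
amphibious = [0.68, 0.72, 0.70, 0.66]

f_stat = one_way_anova_f([cruisers, destroyers, frigates, amphibious])
print(classify_eoc(0.75))  # degraded
```

A large F statistic relative to its F-distribution critical value is what supports the report's conclusion that mean scores differ across ship types; the post-hoc multiple comparisons GAO conducted then identify which pairs differ.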
We obtained casualty report data for the same set of ships from the United States Fleet Forces Command office responsible for the Navy’s Maintenance Figure of Merit program. Casualty report data provided average daily numbers of casualty reports per ship for cruisers, destroyers, frigates, and amphibious warfare ships. We then used these daily averages to identify differences between ship types and to calculate and analyze changes in these daily averages from month to month and quarter to quarter. We assessed the reliability of the casualty report data presented in this report. Specifically, the Navy provided information based on data reliability assessment questions we provided, which covered an overview of the data, data collection processes and procedures, data quality controls, and overall perceptions of data quality. We received documentation about how the systems are structured and the written procedures in place to ensure that the appropriate material readiness information is collected and properly categorized. Additionally, we interviewed Navy officials to obtain further clarification on data reliability and to discuss how the data were collected and reported into the system. After assessing the data, we determined that the data were sufficiently reliable for the purposes of assessing the material condition of Navy surface combatant and amphibious warfare ships, and we discuss our findings in the report. To determine if there were statistically significant differences in the daily averages among the four types of ships (cruisers, destroyers, frigates, and amphibious warfare ships), we conducted a one-way analysis of variance (ANOVA), followed by post-hoc multiple comparison means tests to determine which ship types, if any, differed. 
Based on the results of this analysis we concluded that there were statistically significant differences in the daily averages between the four ship types (p-value < 0.0001), and specifically, the daily average for amphibious warfare ships was significantly higher, at the 95 percent confidence level, than the daily averages for cruisers, destroyers, and frigates. Next we analyzed the changes in the daily averages to determine if there was an increasing, decreasing, or stationary trend from month to month. We did this separately for surface combatant ships (cruisers, destroyers, and frigates) and amphibious warfare ships. To estimate the trends, we conducted a time-series regression analysis to account for the correlation in the average daily scores from month to month. We then tested the estimated trends for significant changes after February 2010, when the Fleet Review Panel’s findings were published, using the Chow test for structural changes in the estimated parameters. We fit a time-series regression model with autoregressive errors (AR lag of 1) to monthly data for both surface combatants and amphibious ships to account for the autocorrelation between monthly observations. The total R-squared, a measure that reflects how well the model predicts the data, was 0.9641 for the surface combatant ships model and 0.9086 for the amphibious warfare ships model, which indicates that both models fit the data well. A summary of the model parameters is given in the table below. We observed statistically significant positive trends in the daily average for both models. Specifically, the estimated trend for the daily average number of casualty reports per ship increased at a rate of about 2 per year (0.1770 * 12 months) for surface combatant ships and about 3 per year (0.2438 * 12 months) for amphibious warfare ships. In addition, neither of the tests for significant structural changes in the model parameters after February 2010 was significant at the 95 percent confidence level. 
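The annualized rates quoted above are simply the estimated monthly slopes multiplied by 12. The sketch below shows that conversion, plus a plain least-squares slope estimator as a simplified stand-in for the report's regression; it deliberately ignores the AR(1) error structure GAO's model accounts for, so it illustrates the arithmetic rather than reproducing the analysis.

```python
# Converting the report's estimated monthly trend slopes into the
# annual rates quoted in the text (about 2 and 3 per year).
MONTHS_PER_YEAR = 12

def annualized_rate(monthly_slope: float) -> float:
    """Convert a per-month trend estimate to a per-year rate."""
    return monthly_slope * MONTHS_PER_YEAR

def ols_slope(y):
    """Plain least-squares slope of y against time index 0..n-1.
    (A simplification: the report's model also handled the
    autocorrelation between monthly observations with AR(1) errors.)"""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Monthly slopes estimated in the report's two models.
print(round(annualized_rate(0.1770), 3))  # 2.124  (surface combatants)
print(round(annualized_rate(0.2438), 4))  # 2.9256 (amphibious ships)
```

The Chow test mentioned above then asks whether a slope fit separately before and after February 2010 differs significantly from a single slope fit to the whole series; the report found no significant difference.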
Based on this, we concluded that there is not enough evidence to suggest there were significant changes in the estimated trends after February 2010 for either ship type. We analyzed data from the Defense Readiness Reporting System-Navy (DRRS-N), which contains data that is normally reported monthly and focuses on current readiness by presenting information on broader capability and resource areas. We obtained classified DRRS-N readiness data for all surface combatant and amphibious warfare ships from January 2008 through March 2012. DRRS-N data showed upward and downward movements between 2008 and 2012, but we did not evaluate the statistical significance of these movements. To determine the extent to which the Navy has taken steps intended to improve the readiness of its surface combatant and amphibious warfare ships including efforts to implement its recent strategy, we reviewed relevant Navy instructions on Navy material readiness, including the strategy—the Surface Force Readiness Manual—to identify the policies and procedures required by the Navy to ensure its surface ships are ready to perform their current mission requirements and reach their expected service lives. We also reviewed prior GAO work on risk management and collected and analyzed data on the resources needed to implement the strategy, and interviewed relevant officials. To gain a better understanding of how the Navy’s independent maintenance, training, and manning initiatives will be integrated into the new strategy, we collected data on the staffing resources needed to implement the strategy and met with officials from the Commander Navy Regional Maintenance Center, the Surface Maintenance Engineering Planning Program, and the Afloat Training Group Pacific. We focused primarily on the Navy’s maintenance initiatives because we have previously reported on its training and manning initiatives. 
In addition, we met with personnel on board four Navy ships to obtain their views on the impact of the Navy’s maintenance initiatives, such as readiness assessments and material inspections, on the readiness of these ships. Specifically, we visited the USS Leyte Gulf (CG 55), USS Arleigh Burke (DDG 51), USS San Antonio (LPD 17), and USS Higgins (DDG 76). We also discussed initial implementation of the new strategy with personnel on board the USS Higgins. We also met with officials from the Commander Naval Surface Force, U.S. Pacific Fleet who are responsible for administering the strategy for surface ships on the West coast and in Hawaii and Japan to discuss timeframes for transitioning ships into the strategy, challenges implementing the strategy, and plans to address any risks that may occur during the strategy’s implementation. Additionally, we obtained written responses to our questions from these officials and from officials at the Commander Naval Surface Force, U.S. Atlantic Fleet who administer the strategy for surface ships on the East coast. Finally, we reviewed prior GAO work on risk assessment as well as Navy testimony on the readiness of its ships and aircraft and Department of Defense strategic guidance on the key military missions the department will prepare for and budget priorities for fiscal years 2013-2017. We conducted this performance audit from July 2011 to September 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, key contributors to this report were Michael Ferren (Assistant Director), Jim Ashley, Mary Jo Lacasse, David Rodriguez, Michael Silver, Amie Steele, Nicole Volchko, Erik Wilkins-McKee, Nicole Willems, and Ed Yuen.
In 2010, the Navy concluded that decisions it made to increase efficiencies of its surface force had adversely affected ship readiness and service life. To improve ship readiness, the Navy developed a new strategy, which includes several initiatives. House Report 112-78, accompanying a proposed bill for the Fiscal Year 2012 National Defense Authorization Act (H.R. 1540), directed GAO to review the recent Navy initiatives. GAO assessed (1) how the Navy evaluates the material readiness of its surface combatant and amphibious warfare ships and the extent to which data indicate trends or patterns in the material readiness of these ships, and (2) the extent to which the Navy has taken steps to improve the readiness of its surface combatant and amphibious warfare ships, including implementing its new readiness strategy. GAO analyzed Navy policies and material and readiness data from January 2008 (two years before the release of the Navy’s 2010 report on the degradation of surface force readiness) through March 2012 (two years after the report’s release), and interviewed headquarters and operational officials and ship crews. Recent data show variations in the material readiness of different types of ships, but do not reveal any clear trends of improvement or decline for the period from 2008 to 2012. The Navy uses a variety of means to collect, analyze, and track the material readiness of its surface combatant and amphibious warfare ships. Three data sources the Navy uses to provide information on the material readiness of ships are: casualty reports, which reflect equipment malfunctions; Defense Readiness Reporting System-Navy (DRRS-N) reports; and Board of Inspection and Survey (INSURV) material inspection reports. These data sources can be viewed as complementary, together providing data on both the current and life cycle material readiness of the surface force. 
INSURV and casualty report data show that the material readiness of amphibious warfare ships is lower than that of frigates and destroyers. However, there is no clear upward or downward trend in material readiness across the Navy’s surface combatant and amphibious warfare fleet as a whole. From 2010 to March 2012, INSURV data indicated a slight improvement in the material readiness of the surface combatant and amphibious warfare fleet, but over that period casualty reports from the ships increased, which would indicate a decline in material readiness. DRRS-N data also show differences in material readiness between ship types, but the precise differences are classified and therefore are not included in this report. The Navy has taken steps to improve the readiness of its surface combatant and amphibious warfare ships, including a new strategy to better integrate maintenance actions, training, and manning, but it faces risks to fully implementing its strategy and has not assessed these risks or developed alternatives to mitigate them. In March 2012, near the end of a year-long pilot, the Navy issued its Surface Force Readiness Manual, which calls for integrating and synchronizing maintenance, training, and manning among multiple organizations. The Navy expects this strategy to provide a standard, predictable path for ships to achieve and sustain surface force readiness, but certain factors, such as high operational tempos and supporting organizations’ staffing levels, could delay the entry of some ships into the strategy and the execution of the strategy. For example, one supporting organization reported needing an additional 680 personnel to fully execute the strategy. As of August 2012, the Navy plans to reflect its funding needs for 410 personnel in its fiscal year 2014 budget request and the remaining 270 in subsequent requests. Also, due to high operational tempos, the phased implementation of some ships into the strategy may be delayed. 
Furthermore, ships that do not execute the strategy’s maintenance periods as planned will have lifecycle maintenance actions deferred. GAO has previously reported that risk assessment can inform effective program management by helping managers make decisions about the allocation of finite resources and alternative courses of action. However, the Navy has not undertaken a comprehensive assessment of risks to the implementation of its strategy, nor has it developed alternatives to mitigate those risks. GAO believes operational tempo, supporting organizations’ staffing levels, and other risks may hinder the Navy’s full implementation of its surface force readiness strategy. If not addressed, this could lead to deferrals of lifecycle maintenance, which have in the past contributed to increased maintenance costs, reduced readiness, and shorter service lives for some ships. GAO recommends that the Navy conduct a comprehensive assessment of the risks the new strategy faces and develop alternatives to mitigate these risks. DOD partially concurred, but felt that current assessments sufficiently identify risks. GAO continues to believe that a comprehensive assessment that takes into account the full range of risks to the overall strategy is needed.
Customs’ responsibility includes (1) enforcing the laws governing the flow of goods and persons across the borders of the United States and (2) assessing and collecting duties, taxes, and fees on imported merchandise. To speed the processing of imports and improve compliance with trade laws, the Congress in 1993 enacted legislation that enabled Customs to streamline import processing through automation. The legislation also eliminated certain legislatively mandated paper requirements, allowing Customs to move from a paper-intensive to an automated import environment. Further, it required Customs to establish NCAP and specified critical functions that this program must provide, including the ability to electronically file import entries at remote locations and process drawback claims. In response to the authorizing legislation, Customs launched a major initiative in 1994 to reorganize the agency, streamline operations, and modernize the automated systems that support operations. In the process, Customs identified its core business processes as trade compliance (imports), outbound goods (exports), and passengers. In 1992, prior to redesigning its operations, Customs decided to move from centralized to distributed computing and selected a suite of hardware, software, and telecommunications products to enable it to do so. Customs refers to its effort to move to decentralized computing using these products as the Customs Distributed Computing for the Year 2000 (CDC-2000) project. The agency plans to implement ACE and its other modernized systems applications on these products. According to Customs, as of October 1, 1995, it had spent $63 million purchasing these products, including upgrading its personal computers, installing local area networks, and acquiring minicomputers and related peripherals. Although no detailed analysis has been prepared, the CDC-2000 project director estimated that when completed, total purchases could reach $500 million. 
In January 1995, Customs hired Gartner Group Consulting Services to review the adequacy of this approach, and the contractor issued its report in April 1995. About this same time, Customs engaged another contractor—IBM Consulting Group—to determine whether the agency was technically capable of developing ACE. IBM reported its findings in February 1995. Customs’ strategy for implementing NCAP consists of three initiatives. First, Customs is redesigning the import process to better meet customer needs and improve operational efficiency and effectiveness. In doing so, the agency identified and prioritized the needs of its internal and external customers involved in import processing. Using this information, Customs determined how the new import process will work and is testing this new process at selected ports of entry. Customs plans to complete the definition of its redesigned import process by September 1997. Second, Customs is developing its new automated import processing system (ACE) applications to support the new import process and comply with NCAP-mandated functions. Customs is in the early stages of system development. Specifically, the agency has recently issued user requirements and is in the process of determining functional requirements. Customs estimates that when completed, the system will cost $125 million over its 10-year planned life. As of March 1996, Customs had spent $25 million on ACE. Customs plans to begin deploying ACE in October 1998. Finally, until ACE is deployed, Customs plans to enhance its existing import processing system—the Automated Commercial System—which operates in the existing centralized computing environment, to provide selected NCAP-mandated functions critical to meeting agency and trade community needs. For example, Customs is modifying this system to allow importers to file documentation at a port of entry other than where the goods are to arrive or be examined. 
Rather than wait for this function to be deployed with ACE, Customs plans to add this function to (1) facilitate inspections and import processing and (2) reduce importers’ administrative burden by eliminating the need to have importer staff at the port of entry. Customs is currently testing this capability with seven importers at selected locations. Customs is also enhancing its current Automated Commercial System to provide electronic filing capabilities for drawback claims. To date, Customs has modified the system to enable electronic (1) filing of such claims by the trade community and (2) comparison of key information on drawback claims to the original import entries. Customs also plans to improve its controls over duplicate and excessive drawback payments, which we previously noted were a problem, by enhancing this system to maintain a cumulative record of drawback amounts paid against individual line items on import entries. This enhancement is scheduled to be completed by October 1997. In implementing its NCAP strategy, Customs has not adhered to strategic information management best practices that help organizations (1) mitigate the risks associated with modernizing automated systems and (2) better position themselves to achieve success. Specifically, Customs did not (1) conduct the requisite analyses (e.g., cost-benefit, feasibility, alternatives) before committing to the CDC-2000 project, (2) redesign its import and other business processes before the agency selected the hardware for ACE and other systems, (3) manage ACE as an investment, and (4) designate strict accountability for ensuring that it successfully incorporates all NCAP-mandated functions into the agency’s modernization effort. Organizations that have successfully modernized operations and systems use a structured approach to identify the architecture that most efficiently and effectively meets their information needs. First, they redesign their old business processes. 
Then they analyze the new processes to identify (1) the information needs of the entire organization and (2) alternative ways of meeting them, including consideration of costs and benefits. Finally, the organizations use this analysis to select an optimal businesswide configuration, which specifies where and how processing will occur and identifies the hardware, software, telecommunications, and other elements needed to support new automated systems. This configuration is commonly referred to as an architecture and serves as a guide for modernizing automated systems. Organizations that do not follow this disciplined approach risk (1) automating the wrong processes and (2) developing systems that do not function well or that cannot be readily integrated with other systems. Consequently, an organization may develop systems that do not enhance its mission performance or that reach only a fraction of their potential to do so. However, Customs selected its CDC-2000 approach for ACE and other systems without using this disciplined approach. Specifically, the agency began buying minicomputers, software, and other equipment to support decentralized processing in 1993, but did not start to redesign its first critical business process (imports) until late 1994 and the other two processes (passengers, exports) until January and August 1995. In addition, Customs does not plan to complete these redesign efforts until September 1997, October 1996, and December 1996, respectively. In formulating the CDC-2000 project, Customs did not identify the information needs of the entire organization and consider alternative ways of meeting them as well as the respective costs and benefits. These shortcomings were also reported by Gartner. In this regard, the contractor stated that Customs’ selected products were primarily a “buy list” and were largely identified without taking into consideration the information needs of agency processes and systems. 
While Gartner stated that “the CDC-2000 architecture is, in general, valid and reasonable,” Gartner recommended that Customs use a disciplined approach to fully identify its needs and only then select products to meet those needs. Customs officials said they had selected the products included in the CDC-2000 initiative before the import process was redesigned because they needed to move from their current centralized system to decentralized processing and believed that the products selected would meet any future system needs. They also said that, at the time of selection, they did not believe a rigorous supporting analysis was needed because the products chosen were widely used by industry. Further, although CDC-2000 was adopted over 4 years ago, Customs does not believe it has wasted its time and resources because, according to the agency, only $4 million of the $63 million CDC-2000 funds spent to date have been used to buy minicomputers, software, and other equipment to support decentralized processing. Customs officials noted that, to date, $59 million has been used to upgrade and install personal computers and local area networks, which needed to be acquired regardless of the architecture that was ultimately formulated. We recognize Customs’ need to improve office automation using personal computers and local area networks. However, Customs’ rationale for purchasing minicomputers, software, and other equipment is based on several faulty assertions. First, Customs risks wasting hundreds of millions of dollars it plans to spend in the future on the CDC-2000 project should it continue purchasing hardware and software to support decentralized processing without conducting a thorough analysis. Second, while decentralized processing and the products Customs selected may be widely used, this has no bearing on whether they are a cost-effective approach to meeting Customs’ needs. 
Further, since the agency does not yet know how it plans to conduct its business in the future or what automated systems would best support these new business processes, it is in no position to commit to CDC-2000. Third, the Federal Information Resources Management Regulation and Office of Management and Budget Circular A-130 require thorough analyses to justify major systems efforts such as CDC-2000. Finally, best practice organizations have learned that using a structured approach can help them effectively use resources and lead to order-of-magnitude gains in productivity. Successful organizations manage information system projects as investments rather than expenses. This includes (1) creating an investment review board of senior program and automated systems managers to select, monitor, and evaluate system projects, (2) establishing explicit criteria to assess the merits of each project relative to others, including the use of cost, benefit, and risk analyses, and (3) following structured systems development methodologies throughout the system’s life. Such disciplined control processes are required by the Office of Management and Budget to help federal agencies decide which planned systems are worthwhile investments and ensure that the risks associated with building those systems are adequately controlled. Although its annual automated systems expenditures total about $150 million, Customs does not manage ACE and its other systems as investments. First, while Customs has a systems steering committee, composed of senior officials who meet periodically to monitor automation projects such as ACE, the committee functions primarily as a sounding board that addresses concerns raised by project managers as well as committee members rather than as an investment review board. For example, the committee has not developed explicit decision criteria to assess mission cost, benefits, and risk of both ongoing and planned projects. 
Instead, the committee makes decisions on ACE and other systems, including Automated Commercial System enhancements, without considering such critical information as the merits of each project relative to others, how well these systems will contribute to improving mission performance, if their value will exceed their cost, and how likely they are to succeed. Customs officials acknowledged the steering committee’s shortcomings and told us that, while they had initiated an effort in January 1995 to redefine the steering committee’s role, including managing systems as investments, not much progress has been made since then. Customs’ Deputy Commissioner said he intends to restart efforts to establish an investment subcommittee under the steering committee but has not established a target date to do so. Second, although Customs’ system development policies require cost-benefit analyses to be performed prior to developing critical and costly systems, we found that Customs had not performed such analyses for ACE and the CDC-2000 project. Gartner and IBM also reported that such analyses were lacking. In this regard, Gartner stated that Customs needed to assess the cost and benefits for CDC-2000 because (1) the agency had only a limited understanding of what it will ultimately cost and (2) if Customs waited much longer, the cost of purchases of selected products could mushroom beyond the agency’s ability to control it. Similarly, IBM stated that to be successful with ACE, Customs needed to identify and continuously monitor the cost and benefits of this system. Customs officials told us they recognize that until the agency conducts these analyses, it will not know whether these major system investments are worthwhile. In response to these findings, Customs hired contractors to help perform these analyses, but it continues to develop ACE on CDC-2000 hardware and plans to continue making CDC-2000 purchases. These analyses are scheduled to be completed by July 1996. 
Third, in developing ACE, Customs also skipped or has not completed other required system development steps necessary to control development risks. Specifically, Customs has not resolved how to incorporate into ACE critical functions mandated over 2 years ago in NCAP. These functions include reconciling adjustments to importers’ duties and processing drawback claims. It also did not prepare a security plan, although Customs has had problems in the past implementing effective internal controls to protect systems and data. Customs officials acknowledged that, given where they are in the ACE development process, they should have determined how to deliver NCAP-mandated functions and completed their security plan. In addition, they told us that it is their intention to complete the security plan in July 1996 and update the user requirements in June 1996. Assigning clear accountability and responsibility for information management decisions and results is another important practice identified by successful organizations. As we pointed out in our January 1995 testimony on Customs’ plan to modernize the agency, Customs is in the midst of a major reorganization and during this time of change, it needs to clarify roles and responsibilities to reinforce accountability and facilitate mission success. We found, however, that clear accountability for meeting NCAP requirements is lacking. Customs has established a board called the Trade Compliance Board of Directors to redesign its import process. This board consists of senior officials who represent the import process and related systems. However, while the board’s charter makes it accountable for the redesigned import process, it does not establish accountability for successfully implementing NCAP. Customs’ Deputy Commissioner agreed that the agency needs to assign accountability and requisite authority to ensure that the functions mandated in NCAP are successfully implemented. 
Customs recognizes that it (1) cannot afford to fail in its effort to redesign and automate critical NCAP processes and (2) needs to make a more concentrated effort to implement best practices. However, Customs has not assigned responsibility for ensuring that NCAP is successfully implemented. Further, Customs has no assurance that continued buying of CDC-2000 equipment is the best way to accomplish its mission or that the hardware selected for ACE and other systems is appropriate. Customs is in the early stages of its modernization and has time to implement these best practices. While Customs is starting to take corrective action, the agency is at serious risk and vulnerable to failure until such action is completed. We recommend that, prior to additional CDC-2000 equipment purchases (except those for office automation needs) and before beginning to develop any applications software that will run on this equipment, the Commissioner of Customs assign accountability and responsibility for implementing NCAP. The Commissioner should also ensure that the export and passenger business process redesigns are completed and that the requirements generated from these two tasks, along with the import process requirements, are used to determine how Customs should accomplish its mission in the future, including (1) who will perform operations and where they will be performed, (2) what functions must be performed as part of these operations, (3) what information is needed to perform these functions and where data should be created and processed to produce such information, (4) what alternative processing approaches could be used to satisfy Customs’ requirements and what the costs, benefits, and risks of each approach are, and (5) what processing approach is optimal. Customs should not resume CDC-2000 purchases unless CDC-2000 is determined to be the optimal approach. 
Complete the agency’s effort to redefine the role of the systems steering committee to include managing systems as investments as required by the Office of Management and Budget’s Circular A-130 and information technology investment guide. This effort should include developing and using explicit criteria to guide system development decisions and using the criteria to revisit whether Customs’ planned investments, including ACE and Automated Commercial System enhancements, are appropriate. Direct the steering committee to ensure that all systems being developed strictly adhere to Customs’ system development steps. As part of this oversight, we recommend that before applications are developed for ACE, the steering committee ensure that Customs resolves how to incorporate NCAP-mandated functions into ACE and prepares a security plan. In commenting on a draft of this report, Customs agreed with all of our recommendations and said it plans to or has acted to implement them. First, Customs agreed to clarify and document accountability and responsibility for implementing NCAP. Second, Customs agreed to perform the requisite analyses to determine the optimal architecture and to cease CDC-2000 purchases, except those for office automation needs and prototyping, until this determination is made, which is fully responsive to our recommendation. Third, according to Customs, the agency has formally established its investment subcommittee and is studying best investment practices of federal and private sector organizations, which the investment subcommittee plans to use to develop operating procedures and investment criteria for reviewing system decisions. Finally, Customs agreed to have the systems steering committee address compliance with agency system development procedures at the committee’s next meeting. 
We are sending copies of this letter to the Chairmen and the Ranking Minority Members of the Senate Committee on Finance; the Subcommittees on Treasury, Postal Service and General Government of the Senate and House Appropriations Committees; the Senate Committee on Governmental Affairs; and the House Committee on Government Reform and Oversight. We are also sending copies to the Secretary of the Treasury, Commissioner of Customs, and Director of the Office of Management and Budget. Copies will also be available to others upon request. If you have questions about this letter, please contact me at (202) 512-6240. Major contributors are listed in appendix III. To determine the status of Customs’ strategy for implementing the National Customs Automation Program (NCAP), we reviewed the law—and its legislative history—establishing NCAP. We interviewed key Customs program and information system officials regarding process improvement and systems modernization efforts for the import process. We examined Customs’ People, Processes, and Partnerships report of September 1994, which outlines the agency’s vision for organizational and process change, and examined the 5-year information systems plan of April 1995 for fiscal years 1997-2001. We also reviewed background information on Customs’ existing automated import processing system and documents supporting current enhancements to that system as well as the (1) annual business plan, (2) project plan, and (3) user requirements documents for Customs’ planned ACE system. To assess the adequacy of Customs’ strategy for implementing NCAP, we assessed Customs’ strategic information management processes for developing ACE. 
In analyzing Customs’ processes, we applied fundamental best practices used by successful private and public sector organizations as discussed in our report, Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994), and our related guide Strategic Information Management (SIM) Self-Assessment Toolkit (GAO/Version 1.0, October 28, 1994, exposure draft). We also made our assessment using the (1) Office of Management and Budget’s Circular A-130 Revised, Transmittal 2 (July 1994) and investment guide Evaluating Information Technology Investments, A Practical Guide (Version 1.0, November 1995) and (2) General Services Administration’s guide Critical Success Factors for Systems Modernization (October 1988). Specifically, to determine if information resources management plans supported the agency mission and customer needs for imports, we interviewed planning officials and examined 5-year and annual business and information management plans. To assess whether the business process is being considered in developing ACE, we conducted interviews and examined documentation for the redesigned import process, including the structured methodology used to conduct this initiative. At user conferences held by Customs, we also interviewed internal and external users of the current import system to determine whether customer information requirements are being identified in developing ACE. To determine whether ACE was guided by an architecture, we reviewed internal studies evaluating Customs’ distributed computing environment. We also analyzed commissioned studies, interviewed the contractors performing the studies, and obtained Customs’ response to the technical studies. In assessing whether CDC-2000 meets agencywide information needs, we examined agency documents and interviewed all three core business process owners as well as information systems officials. 
To determine if ACE is managed as an investment, we interviewed members of Customs’ systems steering committee and examined its minutes and an agenda book with background information for a committee meeting. Also, we reviewed Customs’ systems development life cycle procedures and compared ACE to applicable procedures to determine if required steps were completed at this initial stage of ACE development. Finally, to determine whether a single official was designated to ensure that NCAP requirements are met, we interviewed members of the Trade Compliance Board of Directors, which provides oversight of the redesign of the import process. We also examined the board’s charter, identified which Customs organizations were represented on the board, and reviewed minutes of meetings. Our work was performed at Customs headquarters in Washington, D.C., and its Data Center in Newington, Virginia. Mark E. Heatwole, Senior Assistant Director; Antionette Cattledge, Assistant Director; Brian C. Spencer, Technical Assistant Director; Agnes I. Spruill, Senior Information Systems Analyst; Gary N. Mountjoy, Senior Information Systems Analyst; Cristina T. Chaplain, Communications Analyst. 
Pursuant to a congressional request, GAO reviewed the Customs Service's efforts to modernize its automated systems, focusing on: (1) the status and adequacy of Customs' implementation of the National Customs Automation Program (NCAP); and (2) whether Customs is using a best-practices approach to improve mission performance through strategic information management and technology in implementing NCAP. GAO found that: (1) Customs is redesigning its import process and plans to develop a new automated import system while it enhances its present system to meet NCAP mandates in the interim; (2) the new import process will serve customer needs better and improve operational efficiency and effectiveness; (3) Customs plans to deploy its new import system in October 1998; (4) Customs' modernization efforts are vulnerable to failure because it has not effectively applied best practices to the implementation of its NCAP strategy; (5) Customs selected new systems before it redesigned its key business processes and is not applying specific criteria in assessing projects, alternatives, costs and benefits, and systems architecture; (6) Customs has not managed its new automated system acquisition as an investment nor planned how to incorporate NCAP requirements into it; (7) two contractors' studies have highlighted the weaknesses in Customs' modernization plans and recommended ways to improve its efforts; (8) Customs plans to hire additional contractors to perform the needed modernization analyses, but it also intends to continue with system development and equipment purchases before these analyses are completed; and (9) Customs has not established clear accountability for ensuring that NCAP requirements are successfully implemented.
Category I special nuclear materials are present at the three design laboratories—the Los Alamos National Laboratory in Los Alamos, New Mexico; the Lawrence Livermore National Laboratory in Livermore, California; and the Sandia National Laboratory in Albuquerque, New Mexico—and two production sites—the Pantex Plant in Amarillo, Texas, and the Y-12 Plant in Oak Ridge, Tennessee, operated by NNSA. Special nuclear material is also present at former production sites, including the Savannah River Site in Savannah River, South Carolina, and the Hanford Site in Richland, Washington. These former sites are now being cleaned up by DOE’s Office of Environmental Management (EM). Furthermore, NNSA’s Office of Secure Transportation transports these materials among the sites and between the sites and DOD bases. Contractors operate each site for DOE. NNSA and EM have field offices collocated with each site. In fiscal year 2004, NNSA and EM expect to spend nearly $900 million on physical security at their sites. Physical security combines security equipment, personnel, and procedures to protect facilities, information, documents, or material against theft, sabotage, diversion, or other criminal acts. In addition to NNSA and EM, DOE has other important security organizations. DOE’s Office of Security develops and promulgates orders and policies, such as the DBT, to guide the department’s safeguards and security programs. DOE’s Office of Independent Oversight and Performance Assurance supports the department by, among other things, independently evaluating the effectiveness of contractors’ performance in safeguards and security. It also performs follow-up reviews to ensure that contractors have taken effective corrective actions and appropriately addressed weaknesses in safeguards and security. Under a recent reorganization, these two offices were incorporated into the new Office of Security and Safety Performance Assurance. 
Each office, however, retains its individual missions, functions, structure, and relationship to the other. The risks associated with Category I special nuclear materials vary but include the nuclear detonation of a weapon or test device at or near design yield, the creation of improvised nuclear devices capable of producing a nuclear yield, theft for use in an illegal nuclear weapon, and the potential for sabotage in the form of radioactive dispersal. Because of these risks, DOE has long employed risk-based security practices. The key component of DOE’s well-established, risk-based security practices is the DBT, a classified document that identifies the characteristics of the potential threats to DOE assets. The DBT has been traditionally based on a classified, multiagency intelligence community assessment of potential terrorist threats, known as the Postulated Threat. The DBT considers a variety of threats in addition to the terrorist threat. Other adversaries considered in the DBT include criminals, psychotics, disgruntled employees, violent activists, and spies. The DBT also considers the threat posed by insiders, those individuals who have authorized, unescorted access to any part of DOE facilities and programs. Insiders may operate alone or may assist an adversary group. Insiders are routinely considered to provide assistance to the terrorist groups found in the DBT. The threat from terrorist groups is generally the most demanding threat contained in the DBT. DOE counters the terrorist threat specified in the DBT with a multifaceted protective system. While specific measures vary from site to site, all protective systems at DOE’s most sensitive sites employ a defense-in-depth concept that includes sensors, physical barriers, hardened facilities and vaults, and heavily armed paramilitary protective forces equipped with such items as automatic weapons, night vision equipment, body armor, and chemical protective gear.
Depending on the material, protective systems at DOE Category I special nuclear material sites are designed to accomplish the following objectives in response to the terrorist threat:

- Denial of access. For some potential terrorist objectives, such as the creation of an improvised nuclear device, DOE may employ a protection strategy that requires the engagement and neutralization of adversaries before they can acquire hands-on access to the assets.
- Denial of task. For nuclear weapons or nuclear test devices that terrorists might seek to steal, DOE requires the prevention and/or neutralization of the adversaries before they can complete a specific task, such as stealing such devices.
- Containment with recapture. Where the theft of nuclear material (instead of a nuclear weapon) is the likely terrorist objective, DOE requires that adversaries not be allowed to escape the facility and that DOE protective forces recapture the material as soon as possible. This objective requires the use of specially trained and well-equipped special response teams.

The effectiveness of the protective system is formally and regularly examined through vulnerability assessments. A vulnerability assessment is a systematic evaluation process in which qualitative and quantitative techniques are applied to detect vulnerabilities and arrive at effective protection of specific assets, such as special nuclear material. To conduct such assessments, DOE uses, among other things, subject matter experts, such as U.S. Special Forces; computer modeling to simulate attacks; and force-on-force performance testing, in which the site’s protective forces undergo simulated attacks by a group of mock terrorists. The results of these assessments are documented at each site in a classified document known as the Site Safeguards and Security Plan.
In addition to identifying known vulnerabilities, risks, and protection strategies for the site, the Site Safeguards and Security Plan formally acknowledges how much risk the contractor and DOE are willing to accept. Specifically, for more than a decade, DOE has employed a risk management approach that seeks to direct resources to its most critical assets—in this case Category I special nuclear material—and mitigate the risks to these assets to an acceptable level. Levels of risk—high, medium, and low—are assigned classified numerical values and are derived from a mathematical equation that compares a terrorist group’s capabilities with the overall effectiveness of the crucial elements of the site’s protective forces and systems. Historically, DOE has striven to keep its most critical assets at a low risk level and may insist on immediate compensatory measures should a significant vulnerability develop that increases risk above the low risk level. Compensatory measures could include such things as deploying additional protective forces or curtailing operations until the asset can be better protected. In response to a September 2000 DOE Inspector General’s report recommending that DOE establish a policy on what actions are required once high or moderate risk is identified, in September 2003, DOE’s Office of Security issued a policy clarification stating that identified high risks at facilities must be formally reported to the Secretary of Energy or Deputy Secretary within 24 hours. In addition, under this policy clarification, identified high and moderate risks require corrective actions and regular reporting. Through a variety of complementary measures, DOE ensures that its safeguards and security policies are being complied with and are performing as intended. Contractors perform regular self-assessments and are encouraged to uncover any problems themselves. 
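The risk equation described above compares a terrorist group’s capabilities with the effectiveness of a site’s protective forces and systems, and maps the result to low, medium, or high risk bands. DOE’s actual equation, inputs, and numerical thresholds are classified; the sketch below is purely illustrative, with an invented formula and invented cutoff values, to show the general structure of such a risk-banding calculation:

```python
# Illustrative only: DOE's real risk equation and band thresholds are
# classified. This generic model captures the structure described in the
# text: residual risk grows with adversary capability and shrinks with
# protective-system effectiveness, and the numeric result maps to bands.
def risk_level(adversary_capability: float, system_effectiveness: float) -> str:
    """Both inputs are fractions in [0, 1]; thresholds are hypothetical."""
    residual_risk = adversary_capability * (1.0 - system_effectiveness)
    if residual_risk < 0.1:
        return "low"
    elif residual_risk < 0.3:
        return "medium"
    return "high"

# Even a highly capable adversary can yield acceptably low residual risk
# when the protective system is very effective.
print(risk_level(0.9, 0.95))  # -> low
```

Under a model like this, a drop in protective-system effectiveness (say, a newly discovered vulnerability) pushes residual risk above the low band, which is the situation in which DOE may insist on immediate compensatory measures.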
DOE Orders also require field offices to comprehensively survey contractors’ operations for safeguards and security every year. DOE’s Office of Independent Oversight and Performance Assurance provides yet another check through its comprehensive inspection program. All deficiencies identified during surveys and inspections require the contractors to take corrective action. In the immediate aftermath of September 11, 2001, DOE officials realized that the then-current DBT, issued in April 1999 and based on a 1998 intelligence community assessment, was obsolete. The September 11, 2001, terrorist attacks suggested larger groups of terrorists, larger vehicle bombs, and broader terrorist aspirations to cause mass casualties and panic than were envisioned in the 1999 DOE DBT. However, formally recognizing these new threats by updating the DBT was difficult and took 21 months because of delays in issuing the Postulated Threat, debates over the size of the future threat and the cost to meet it, and the DOE policy process. As mentioned previously, DOE’s new DBT is based on a study known as the Postulated Threat, which was developed by the U.S. intelligence community. The intelligence community originally planned to complete the Postulated Threat by April 2002; however, the document was not completed and officially released until January 2003, about 9 months behind the original schedule. According to DOE and DOD officials, this delay resulted from other demands placed on the intelligence community after September 11, 2001, as well as from sharp debates among the organizations developing the Postulated Threat over the size and capabilities of future terrorist threats and the resources needed to meet these threats. While waiting for the new Postulated Threat, DOE developed several drafts of its new DBT. During this process, debates, similar to those that occurred during the development of the Postulated Threat, emerged in DOE.
Like the participants responsible for developing the Postulated Threat, during the development of the DBT, DOE officials debated the size of the future terrorist threat and the costs to meet it. DOE officials at all levels told us that concern over resources played a large role in developing the 2003 DBT, with some officials calling the DBT the “funding basis threat,” or the maximum threat the department could afford. This tension between threat size and resources is not a new development. According to a DOE analysis of the development of prior DBTs, political and budgetary pressures and the apparent desire to reduce the requirements for the size of protective forces appear to have played a significant role in determining the terrorist group numbers contained in prior DBTs. Finally, DOE developed the DBT using DOE’s policy process, which emphasizes developing consensus through a review and comment process by program offices, such as EM and NNSA. However, many DOE and contractor officials found that the policy process for developing the new DBT was laborious and not timely, especially given the more dangerous threat environment that has existed since September 11, 2001. As a result, during the time it took DOE to develop the new DBT, its sites were only required to defend against the terrorist group defined in the 1999 DBT, which, in the aftermath of September 11, 2001, DOE officials realized was obsolete. While the May 2003 DBT identifies a larger terrorist group than did the previous DBT, the threat identified in the new DBT, in most cases, is less than the terrorist threat identified in the intelligence community’s Postulated Threat. The Postulated Threat estimated that the force attacking a nuclear weapons site would probably be a relatively small group of terrorists, although it was possible that an adversary might use a greater number of terrorists if that was the only way to attain an important strategic goal. 
In contrast to the Postulated Threat, DOE is preparing to defend against a significantly smaller group of terrorists attacking many of its facilities. Specifically, only for its sites and operations that handle nuclear weapons is DOE currently preparing to defend against an attacking force that approximates the lower range of the threat identified in the Postulated Threat. For its other Category I special nuclear material sites, all of which fall under the Postulated Threat’s definition of a nuclear weapons site, DOE is requiring preparations to defend against a terrorist force significantly smaller than was identified in the Postulated Threat. DOE calls this a graded threat approach. Some of these other sites, however, may have improvised nuclear device concerns that, if successfully exploited by terrorists, could result in a nuclear detonation. Nevertheless, under the graded threat approach, DOE requires these sites only to be prepared to defend against a smaller force of terrorists than was identified by the Postulated Threat. Officials in DOE’s Office of Independent Oversight and Performance Assurance disagreed with this approach and noted that sites with improvised nuclear device concerns should be held to the same requirements as facilities that possess nuclear weapons and test devices since the potential worst-case consequence at both types of facilities would be the same—a nuclear detonation. Other DOE officials and an official in DOD’s Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence disagreed with the overall graded threat approach, believing that the threat should not be embedded in the DBT by adjusting the number of terrorists that might attack a particular target. DOE Office of Security officials cited three reasons why the department departed from the Postulated Threat’s assessment of the potential size of terrorist forces.
First, these officials stated that they believed that the Postulated Threat only applied to sites that handled completed nuclear weapons and test devices. However, both the 2003 Postulated Threat, as well as the preceding 1998 Postulated Threat, state that the threat applies to nuclear weapons and special nuclear material without making any distinction between them. Second, DOE Office of Security officials believed that the higher threat levels contained in the 2003 Postulated Threat represented the worst potential worldwide terrorist case over a 10-year period. These officials noted that while some U.S. assets, such as military bases, are located in parts of the world where terrorist groups receive some support from local governments and societies thereby allowing for an expanded range of capabilities, DOE facilities are located within the United States, where terrorists would have a more difficult time operating. Furthermore, DOE Office of Security officials stated that the DBT focuses on a nearer-term threat of 5 years. As such, DOE Office of Security officials said that they chose to focus on what their subject matter experts believed was the maximum, credible, near-term threat to their facilities. However, while the 1998 Postulated Threat made a distinction between the size of terrorist threats abroad and those within the United States, the 2003 Postulated Threat, reflecting the potential implications of the September 2001 terrorist attacks, did not make this distinction. Finally, DOE Office of Security officials stated that the Postulated Threat document represented a reference guide instead of a policy document that had to be rigidly followed. The Postulated Threat does acknowledge that it should not be used as the sole consideration to dictate specific security requirements and that decisions regarding security risks should be made and managed by decision makers in policy offices. However, DOE has traditionally based its DBT on the Postulated Threat. 
For example, the prior DBT, issued in 1999, adopted exactly the same terrorist threat size as was identified by the 1998 Postulated Threat. Finally, the department’s criteria for determining the severity of radiological, chemical, and biological sabotage may be insufficient. For example, the criterion used for protection against radiological sabotage is based on acute radiation dosages received by individuals. However, this criterion may not fully capture or characterize the damage that a major radiological dispersal at a DOE site might cause. For example, according to a March 2002 DOE response to a January 23, 2002, letter from Representative Edward J. Markey, a worst-case analysis at one DOE site showed that while a radiological dispersal would not pose immediate, acute health problems for the general public, the public could experience measurable increases in cancer mortality over a period of decades after such an event. Moreover, releases at the site could also have environmental consequences requiring hundreds of millions to billions of dollars to clean up. Contamination could also affect habitability for tens of miles from the site, possibly affecting hundreds of thousands of residents for many years. Likewise, the same response showed that a similar event at an NNSA site could result in a dispersal of plutonium that could contaminate several hundred square miles and ultimately cause thousands of cancer deaths. For chemical sabotage standards, the 2003 DBT requires sites to protect to industry standards. However, we reported in March 2003 that such standards currently do not exist. Specifically, we found that no federal laws explicitly require chemical facilities to assess vulnerabilities or take security actions to safeguard their facilities against a terrorist attack. Finally, the protection criteria for biological sabotage are based on laboratory safety standards developed by the U.S. Centers for Disease Control and not physical security standards.
While DOE issued the final DBT in May 2003, it has only recently resolved a number of significant issues that may affect the ability of its sites to fully meet the threat contained in the new DBT in a timely fashion and is still addressing other issues. Fully resolving all of these issues may take several years, and the total cost of meeting the new threats is currently unknown. Because some sites will be unable to effectively counter the higher threat contained in the new DBT for up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. In order to undertake the necessary range of vulnerability assessments to accurately evaluate their level of risk under the new DBT and implement necessary protective measures, DOE recognized that it had to complete a number of key activities. DOE only recently completed three of these key activities. First, in February 2004, DOE issued its revised Adversary Capabilities List, which is a classified companion document to the DBT, that lists the potential weaponry, tactics, and capabilities of the terrorist group described in the DBT. This document has been amended to include, among other things, heavier weaponry and other capabilities that are potentially available to terrorists who might attack DOE facilities. DOE is continuing to review relevant intelligence information for possible incorporation into future revisions of the Adversary Capabilities List. Second, DOE also only recently provided additional DBT implementation guidance. In a July 2003 report, DOE’s Office of Independent Oversight and Performance Assurance noted that DOE sites had found initial DBT implementation guidance confusing. For example, when the Deputy Secretary of Energy issued the new DBT in May 2003, the cover memo said the new DBT was effective immediately but that much of the DBT would be implemented in fiscal years 2005 and 2006. 
According to a 2003 report by the Office of Independent Oversight and Performance Assurance, many DOE sites interpreted this implementation period to mean that they should, through fiscal year 2006, only be measured against the previous, less demanding 1999 DBT. In response to this confusion, the Deputy Secretary issued further guidance in September 2003 that called for the following, among other things:

- DOE’s Office of Security to issue more specific guidance by October 22, 2003, regarding DBT implementation expectations, schedules, and requirements. DOE issued this guidance on January 30, 2004.
- Quarterly reports showing sites’ incremental progress in meeting the new DBT for ongoing activities. The first series of quarterly progress reports may be issued in July 2004.
- Immediate compliance with the new DBT for new and reactivated operations.

A third important DBT-related issue was resolved in early April 2004. A special team created in the 2003 DBT, composed of weapons designers and security specialists, finalized its report on each site’s improvised nuclear device vulnerabilities. The results of this report were briefed to senior DOE officials in March 2004 and the Deputy Secretary of Energy issued guidance, based on this report, to DOE sites in early April 2004. As a result, some sites may be required under the 2003 DBT to shift to enhanced protection strategies, which could be very costly. This special team’s report may most affect EM sites because their improvised nuclear device potential had not previously been explored. Finally, DOE’s Office of Security has not completed all of the activities associated with the new vulnerability assessment methodology it has been developing for over a year.
DOE’s Office of Security believes this methodology, which uses a new mathematical equation for determining levels of risk, will result in a more sensitive and accurate portrayal of each site’s defenses-in-depth and the effectiveness of sites’ protective systems (i.e., physical security systems and protective forces) when compared with the new DBT. DOE’s Office of Security decided to develop this new equation because its old mathematical equation had been challenged on technical grounds and did not give sites credit for the full range of their defenses-in-depth. While DOE’s Office of Security completed this equation in December 2002, officials from this office believe it will probably not be completely implemented at the sites for at least another year for two reasons. First, site personnel who implement this methodology will require additional training to ensure they are employing it properly. DOE’s Office of Security conducted initial training in December 2003, as well as a prototype course in February 2004, and has developed a nine-course vulnerability assessment certification program. Second, sites will have to collect additional data to support the broader evaluation of their protective systems against the new DBT. Collecting these data will require additional computer modeling and force-on-force performance testing. Because of the slow resolution of some of these issues, DOE has not developed any official long-range cost estimates or developed any integrated, long-range implementation plans for the May 2003 DBT. Specifically, neither the fiscal year 2003 nor 2004 budgets contained any provisions for DBT implementation costs. However, during this period, DOE did receive additional safeguards and security funding through budget reprogramming and supplemental appropriations. DOE is using most of these additional funds to cover the higher operational costs associated with the increased security condition (SECON) measures. 
DOE has gathered initial DBT implementation budget data and has requested additional DBT implementation funding in the fiscal year 2005 budget: $90 million for NNSA, $18 million for the Secure Transportation Asset within the Office of Secure Transportation, and $26 million for EM. However, DOE officials believe the budget data collected so far has been of generally poor quality because most sites have not yet completed the necessary vulnerability assessments to determine their resource requirements. Consequently, the fiscal year 2006 budget may be the first budget to begin to accurately reflect the safeguards and security costs of meeting the requirements of the new DBT. Reflecting these various delays and uncertainties, in September 2003, the Deputy Secretary changed the deadline for DOE program offices, such as EM and NNSA, to submit DBT implementation plans from the original target of October 2003 to the end of January 2004. NNSA and EM approved these plans in February 2004. DOE’s Office of Security has reviewed these plans and is planning to provide implementation assistance to sites that request it. DOE officials have described these plans as being ambitious in terms of the amount of work that has to be done within a relatively short time frame and dependent on continued increases in safeguards and security funding, primarily for additional protective force personnel. However, some plans may be based on assumptions that are no longer valid. Revising these plans could require additional resources, as well as add time to the DBT implementation process. A DOE Office of Budget official told us that current DBT implementation cost estimates do not include items such as closing unneeded facilities, transporting and consolidating materials, completing line item construction projects, and other important activities that are outside of the responsibility of the safeguards and security program. 
For example, EM’s Security Director told us that for EM to fully comply with the DBT requirements in fiscal year 2006 at one of its sites, it will have to close and de-inventory two facilities, consolidate excess materials into remaining special nuclear materials facilities, and move consolidated Category I special nuclear material, which NNSA’s Office of Secure Transportation will transport, to another site. Likewise, the EM Security Director told us that to meet the DBT requirements at another site, EM will have to accelerate the closure of one facility and transfer special nuclear material to another facility on the site. The costs to close these facilities and to move materials within a site are borne by the EM program budget and not by the EM safeguards and security budget. Similarly, the costs to transport the material between sites are borne by NNSA’s Office of Secure Transportation budget and not by EM’s safeguards and security budget. A DOE Office of Budget official told us that a comprehensive, department-wide approach to budgeting for DBT implementation that includes such important program activities as described above is needed; however, such an approach does not currently exist. The department plans to complete DBT implementation by the end of fiscal year 2006. However, most sites estimate that it will take 2 to 5 years, if they receive adequate funding, to fully meet the requirements of the new DBT. During this time, sites will have to conduct vulnerability assessments, undertake performance testing, and develop Site Safeguards and Security Plans. Consequently, full DBT implementation could occur anywhere from fiscal year 2005 to fiscal year 2008. Some sites may be able to move more quickly and meet the department’s deadline of the end of fiscal year 2006. 
Because some sites will be unable to effectively counter the threat contained in the new DBT for a period of up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. For example, the Office of Independent Oversight and Performance Assurance has concluded in recent inspections that at least two DOE sites face fundamental and not easily resolved security problems that will make meeting the requirements of the new DBT difficult. For other DOE sites, their level of risk under the new DBT remains largely unknown until they can conduct the necessary vulnerability assessments. In closing, while DOE struggled to develop its new DBT, the DBT that DOE ultimately developed is substantially more demanding than the previous one. Because the new DBT is more demanding and because DOE wants to implement it by the end of fiscal year 2006—a period of about 29 months—DOE must press forward with a series of additional actions to ensure that it is fully prepared to provide a timely and cost-effective defense of its most sensitive facilities. First, because the September 11, 2001, terrorist attacks suggested larger groups of terrorists with broader aspirations for causing mass casualties and panic, we believe that the DBT development process that was used requires reexamination. While DOE may point to delays in the development of the Postulated Threat as the primary reason for the almost 2 years it took to develop a new DBT, DOE was also working on the DBT itself for most of that time. We believe the difficulty associated with developing a consensus using DOE’s traditional policy-making process was a key factor in the time it took to develop a new DBT. During this extended period, DOE’s sites were only being defended against what was widely recognized as an obsolete terrorist threat level. Second, we are concerned about two aspects of the resulting DBT.
We are not persuaded that, in its ability to cause mass casualties or create public panic, the detonation of an improvised nuclear device differs enough from the detonation of a nuclear weapon or test device at or near design yield to warrant setting the threat level at a lower number of terrorists. Furthermore, while we applaud DOE for adding additional requirements to the DBT, such as protection strategies to guard against radiological, chemical, and biological sabotage, we believe that DOE needs to reevaluate its criteria for terrorist acts of sabotage, especially in the chemical area, to make them more defensible from a physical security perspective. Finally, because some sites will be unable to effectively counter the threat contained in the new DBT for a period of up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. As a result, DOE needs to take a series of actions to mitigate these risks to an acceptable level as quickly as possible. To accomplish this, it is important for DOE to undertake the hard work of developing a comprehensive, department-wide approach to implementing needed changes in its protective strategy. Because the consequences of a successful terrorist attack on a DOE site could be so devastating, we believe it is important for DOE to better inform Congress about what sites are at high risk and what progress is being made to reduce these risks to acceptable levels. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information on this testimony, please contact Robin M. Nazzaro at (202) 512-3841. James Noel and Jonathan Gill also made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A successful terrorist attack on Department of Energy (DOE) sites containing nuclear weapons or the material used in nuclear weapons could have devastating consequences for the site and its surrounding communities. Because of these risks, DOE needs an effective safeguards and security program. A key component of an effective program is the design basis threat (DBT), a classified document that identifies, among other things, the potential size and capabilities of terrorist forces. The terrorist attacks of September 11, 2001, rendered the then-current DBT obsolete, resulting in DOE issuing a new version in May 2003. GAO (1) identified why DOE took almost 2 years to develop a new DBT, (2) analyzed the higher threat in the new DBT, and (3) identified remaining issues that need to be resolved in order for DOE to meet the threat contained in the new DBT. DOE took a series of actions in response to the terrorist attacks of September 11, 2001. While each of these actions has been important, they are not, in and of themselves, sufficient to ensure that all of DOE's sites are adequately prepared to defend themselves against the higher terrorist threat present in the post-September 11, 2001, world. Specifically, GAO found that DOE took almost 2 years to develop a new DBT because of (1) delays in developing an intelligence community assessment--known as the Postulated Threat--of the terrorist threat to nuclear weapon facilities, (2) DOE's lengthy comment and review process for developing policy, and (3) sharp debates within DOE and other government organizations over the size and capabilities of future terrorist threats and the availability of resources to meet these threats. While the May 2003 DBT identifies a larger terrorist threat than did the previous DBT, the threat identified in the new DBT, in most cases, is less than the threat identified in the intelligence community's Postulated Threat, on which the DBT has been traditionally based.
The new DBT identifies new possible terrorist acts such as radiological, chemical, or biological sabotage. However, the criteria that DOE has selected for determining when facilities may need to be protected against these forms of sabotage may not be sufficient. For example, for chemical sabotage, the 2003 DBT requires sites to protect to "industry standards;" however, such standards currently do not exist. DOE has been slow to resolve a number of significant issues, such as issuing additional DBT implementation guidance, developing DBT implementation plans, and developing budgets to support these plans, that may affect the ability of its sites to fully meet the threat contained in the new DBT in a timely fashion. Consequently, DOE's deadline to meet the requirements of the new DBT by the end of fiscal year 2006 is probably not realistic for some sites.
cases, the files contained no evidence of OIRA changes, and we could not tell if that meant that there had been no such changes to the rules or whether the changes were just not documented. Also, the information in the dockets for some of the rules was quite voluminous, and many did not have indexes to help the public find the required documents. Therefore, we recommended that the OIRA Administrator issue guidance to the agencies on how to implement the executive order’s transparency requirements and how to organize their rulemaking dockets to best facilitate public access and disclosure. The OIRA Administrator’s comments in reaction to our recommendations appeared at odds with the requirements and intent of the executive order. Her comments may also signal a need for ongoing congressional oversight and, in some cases, greater specificity as Congress codifies agencies’ public disclosure responsibilities and OIRA’s role in the regulatory review process. For example, in response to our recommendation that OIRA issue guidance to agencies on how to improve the accessibility of rulemaking dockets, the Administrator said that “it is not the role of OMB to advise other agencies on general matters of administrative practice.” However, section 2(b) of the executive order states that “[t]o the extent permitted by law, OMB shall provide guidance to agencies...,” and that OIRA “is the repository of expertise concerning regulatory issues, including methodologies and procedures that affect more than one agency....” We believe that OIRA has a clear responsibility under the executive order to exercise leadership and provide the agencies with guidance on such crosscutting regulatory issues, so we retained our recommendation. The OIRA Administrator also indicated in her comments that she believed the executive order did not require agencies to document changes made at OIRA’s suggestion before a rule is formally submitted to OIRA.
However, the Administrator also said that OIRA can become deeply involved in important agency rules well before they are submitted to OIRA for formal review. Therefore, adherence to her interpretation of the order would result in agencies’ failing to document OIRA’s early involvement in the rulemaking process. These transparency requirements were put in place because of earlier congressional concerns regarding how rules were changed during the regulatory review process. Congress was clearly interested in making OIRA’s role in that process as transparent as possible. In response to the Administrator’s comments, we retained our original recommendation but specified that OIRA’s guidance should require agencies to document changes made at OIRA’s suggestion whenever they occur. Finally, the OIRA Administrator said that “an interested individual” could identify changes made to a draft rule by comparing drafts of the rule. This position seems to change the focus of responsibility in Executive Order 12866. The order requires agencies to identify for the public changes made to draft rules. It does not place the responsibility on the public to identify changes made to agency rules. Also, comparison of a draft rule submitted for review with the draft on which OIRA concluded review would not indicate which of the changes were made at OIRA’s suggestion, which is a specific requirement of the order. We believe that enactment of the public disclosure requirements in S. 981 would provide a statutory foundation for the public’s right to regulatory review information. In particular, the bill’s requirement that these rule changes be described in a single document would make it easier for the public to understand how rules change during the review process. We are also pleased to see that the new version of S. 981 requires agencies to document when no changes are suggested or recommended by OIRA. 
As I said earlier, the absence of documentation could indicate that either no changes were made to the rule or that the changes were not documented. Additional refinements to the bill may be needed in light of the OIRA Administrator’s comments responding to our report. For example, S. 981 may need to state more specifically that agencies must document the changes made to rules at the suggestion or recommendation of OIRA whenever they occur, not just the changes made during the period of OIRA’s formal review. Similarly, if Congress wants OIRA to issue guidance on how agencies can structure rulemaking dockets to facilitate public access, S. 981 may need to specifically instruct the agency to do so. During last September’s hearing on S. 981, one of the witnesses indicated that Congress should determine the effectiveness of previously enacted regulatory reforms before enacting additional reforms. We recently completed a broad review of one of the most recent such reform efforts—title II of the Unfunded Mandates Reform Act of 1995 (UMRA). Title II of UMRA is similar to S. 981 in that it requires agencies to take a number of analytical and procedural steps during the rulemaking process. Therefore, analysis of UMRA’s implementation may prove valuable in determining both the need for further reform and how agency requirements should be crafted. We concluded that UMRA’s title II requirements had little effect on agencies’ rulemaking actions because those requirements (1) did not apply to many large rulemaking actions, (2) permitted agencies not to take certain actions if the agencies determined they were duplicative or unfeasible, and (3) required agencies to take actions that they were already required to take.
For example, title II of UMRA requires agencies to prepare “written statements” containing information on regulatory costs, benefits, and other matters for any rule (1) for which a proposed rule was published, (2) that includes a federal mandate, and (3) that may result in the expenditure of $100 million or more in any 1 year by state, local, or tribal governments, in the aggregate, or the private sector. We examined the 110 economically significant rules that were promulgated during the first 2 years of UMRA (March 22, 1995, until March 22, 1997) by agencies covered by the Act and concluded that UMRA’s written statement requirements did not apply to 78 of these 110 rules. Some of the rules had no associated proposed rule. Others were not technically “mandates”—i.e., “enforceable duties” unrelated to a voluntary program or federal financial assistance. Some rules were “economically significant” in that they would have a $100 million effect on the economy, but did not require “expenditures” by state, local, or tribal governments or the private sector of $100 million in any 1 year. Certain sections of UMRA permitted agencies to decide what actions to take. For example, subsection 202(a)(3) says agencies’ written statements must contain estimates of future compliance costs and any disproportionate budgetary effects “if and to the extent that the agency determines that accurate estimates are reasonably feasible.” UMRA also permitted agencies to prepare the written statement as part of any other statement or analysis. Because the agencies’ rules commonly contain the information required in the written statements (e.g., the provision of federal law under which the rule is being promulgated), the agencies only rarely prepared a separate UMRA written statement. UMRA also required agencies to develop a process to permit elected officials of state, local, and tribal governments to provide input into the development of regulatory proposals containing significant federal intergovernmental mandates. However, Executive Order 12875 required almost exactly the same sort of process when it was issued in 1993.
Like UMRA, S. 981 contains some of the same requirements as Executive Orders 12866 and 12875 and previous legislation. However, the requirements in the bill are also different from existing requirements in many respects. For example, S. 981 appears to cover all of the economically significant rules that UMRA did not cover, as well as rules by many independent regulatory agencies that were not covered by the executive orders. S. 981 would also address a number of topics that are not addressed by either UMRA or the executive orders, including risk assessments and peer review. These requirements could have the effect of improving the quality of the cost-benefit analyses that agencies are currently required to perform under Executive Order 12866. The new version of S. 981 contains one set of requirements that was not in the bill introduced last year—that agencies develop a plan for the periodic review of rules issued by the agency that have or will have a significant economic impact on a substantial number of small entities. Each agency is also required to publish in the Federal Register a list of rules that will be reviewed under the plan in the succeeding fiscal year. In one sense, these requirements are not really “new.” They are a refinement and underscoring of requirements originally put in place by section 610 of the Regulatory Flexibility Act (RFA) of 1980. Our recent work related to the RFA suggests that at least some of the RFA’s requirements are not being properly implemented. In 1997, we reported that only three agencies identified regulations that they planned to review within the next year in the November 1996 edition of the Unified Agenda of Federal Regulatory and Deregulatory Action. Of the 21 entries that these 3 agencies listed in that edition of the Unified Agenda, none met the requirements of the RFA.
For example, although section 610 requires agencies to notify the public about an upcoming review of an existing rule to determine whether changes to it should be made and, if so, what those changes should be, many of the “section 610” entries in the Agenda announced regulatory actions that the agencies had taken or planned to take. Earlier this month we updated our 1997 report by reviewing agencies’ use of the October 1997 Unified Agenda. We reported that seven agencies had used the Agenda to identify regulations that they said they planned to review. However, of the 34 such entries in that edition of the Agenda, only 3 met the requirements of the statute. Although the Unified Agenda is a convenient and efficient mechanism by which agencies can satisfy the notice requirements in section 610 of the RFA, agencies can print those notices in any part of the Federal Register. We did an electronic search of the 1997 Federal Register to determine whether it contained any other references to a “section 610 review.” We found no such references. There is no way to know with certainty how many regulations in the Code of Federal Regulations have a “significant economic impact on a substantial number of small entities,” or how many of those regulations the issuing agencies have reviewed pursuant to section 610. Agencies differ in their interpretation of this phrase, and we have recommended that a governmentwide definition be developed. Nevertheless, the relatively small number of section 610 notices in the Unified Agendas, combined with the fact that nearly all of those notices did not meet the requirements of the statute, suggests that agencies may not be conducting the required section 610 rule reviews.
Although many federal agencies reviewed all of their regulations as part of the administration’s “page-by-page review” effort to eliminate and revise regulations, those reviews would not meet the requirements of section 610 unless the agencies utilized the steps delineated in that section of the RFA that were designed to allow the public to be part of the review process. Therefore, we believe that the reaffirmation and refinement of the section 610 rule review process in S. 981 can serve to underscore Congress’ commitment to periodic review of agencies’ rules and the public’s involvement in that process. Another critical element of S. 981 is its emphasis on cost-benefit analysis for major rules in the rulemaking process. Mr. Chairman, at your and Senator Glenn’s request, we have been examining 20 economic analyses at 5 agencies to determine the extent to which those analyses contain the “best practices” elements recommended in OMB’s January 1996 guidance for conducting cost-benefit analyses. We are also attempting to determine the extent to which the analyses are used in the agencies’ decisionmaking processes. Although our review is continuing, we have some tentative results that are relevant to this Committee’s consideration of S. 981. The 20 economic analyses varied significantly in the extent to which they contained the elements that OMB recommended. For example, although the guidance encourages agencies to monetize the costs and benefits of a broad range of regulatory alternatives, about half of the analyses did not monetize the costs of all alternatives and about two-thirds did not monetize the benefits. Several of the analyses did not discuss any alternatives other than the proposed regulatory action. The OMB guidance also stresses the importance of explicitly presenting the assumptions, limitations, and uncertainties in economic analyses. 
However, the 20 analyses that we reviewed frequently did not explain why certain assumptions or values were used, such as the discount rates used to determine the present-value of costs and benefits and the values assigned to a human life. Also, about a third of the analyses did not address the uncertainties associated with the analyses. For the most part, the analyses played a somewhat limited role in the agencies’ decisionmaking process—examining the cost-effectiveness of various approaches an agency could use within a relatively narrow range of alternatives, or helping the agency define the regulations’ coverage or implementation date. The analyses did not fundamentally affect agencies’ decisions on whether or not to regulate, nor did they cause the agencies to select significantly different regulatory alternatives than the ones that had been originally considered. One factor that agency officials cited as limiting the analyses’ role in decisionmaking was the need to issue the regulations quickly due to emergencies, statutory deadlines, and court orders. Enactment of the analytical transparency and executive summary requirements in S. 981 would extend and underscore Congress’ previous statutory requirements that agencies identify how regulatory decisions are made. We believe that Congress and the public have a right to know what alternatives the agencies considered and what assumptions they made in deciding how to regulate. Although those assumptions may legitimately vary from one analysis to another, the agencies should explain those variations. Mr. Chairman, S. 981 contains a number of provisions designed to improve regulatory management. These provisions strive to make the regulatory process more intelligible and accessible to the public, more effective, and better managed. Passage of S. 981 would provide a statutory foundation for such principles as openness, accountability, and sound science in rulemaking. This Committee has been diligent in its oversight of the federal regulatory process.
However, our reviews of current regulatory requirements suggest that, even if S. 981 is enacted into law, Congress will need to carefully oversee its implementation to ensure that the principles embodied in the bill are faithfully implemented.
GAO discussed its work on the Regulatory Improvement Act of 1998, focusing on federal agencies' implementation of: (1) the transparency requirements in Executive Order 12866; (2) title II of the Unfunded Mandates Reform Act (UMRA) of 1995; (3) the public notification in section 610 of the Regulatory Flexibility Act (RFA) of 1980; and (4) Office of Management and Budget's (OMB) best practices guide for economic analyses used in rulemaking. GAO noted that: (1) GAO reviewed four major rulemaking agencies' public dockets and concluded that it was usually very difficult to locate the documentation that the executive order required; (2) in many cases, the dockets contained some evidence of changes made during or because of the Office of Information and Regulatory Affairs (OIRA) review, but GAO could not be sure that all such changes had been documented; (3) in other cases, the files contained no evidence of OIRA changes, and GAO could not tell if there had been no such changes to the rule or whether the changes were just not documented; (4) UMRA's title II requirements had little effect on agencies' rulemaking actions because those requirements: (a) did not apply to many large rulemaking actions; (b) permitted agencies not to take certain actions if the agencies determined they were duplicative or unfeasible; and (c) required agencies to take actions that they were already required to take; (5) the new version of S. 
981 contains one set of requirements that was not in the bill introduced last year--that agencies develop a plan for the periodic review of rules issued by the agency that have or will have a significant economic impact on a substantial number of small entities; (6) each agency is also required to publish in the Federal Register a list of rules that will be reviewed under the plan in the succeeding fiscal year; (7) although the Unified Agenda is a convenient and efficient mechanism by which agencies can satisfy the notice requirements in section 610 of the RFA, agencies can print those notices in any part of the Federal Register; (8) GAO believes that the reaffirmation and refinement of the section 610 rule review process in S. 981 can serve to underscore Congress' commitment to periodic review of agencies' rules and the public's involvement in that process; (9) another critical element of S. 981 is its emphasis on cost-benefit analysis for major rules in the rulemaking process; (10) GAO has been examining 20 economic analyses at 5 agencies to determine the extent to which those analyses contain the best practices elements recommended in OMB's January 1996 guidance for conducting cost-benefit analyses; (11) the 20 economic analyses varied significantly in the extent to which they contained the elements that OMB recommended; and (12) agency officials stated that the variations in the degree to which the economic analyses followed OMB guidance and the limited use of the economic analyses were primarily caused by the limited degree of discretion that the underlying statutes permitted.
In July 2002, President Bush issued the National Strategy for Homeland Security. The strategy set forth overall objectives to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and assist in the recovery from attacks that occur. The strategy further identified a plan to strengthen homeland security through the cooperation and partnering of federal, state, local, and private sector organizations on an array of functions. It also specified a number of federal departments, as well as nonfederal organizations, that have important roles in securing the homeland, with DHS having key responsibilities in implementing established homeland security mission areas. This strategy was updated and reissued in October 2007. In November 2002, the Homeland Security Act of 2002 was enacted into law, creating DHS. The act defined the department’s missions to include preventing terrorist attacks within the United States; reducing U.S. vulnerability to terrorism; and minimizing the damages from, and assisting in the recovery from, attacks that occur within the United States. The act further specified major responsibilities for the department, including the analysis of information and protection of infrastructure; development of countermeasures against chemical, biological, radiological, nuclear, and other emerging terrorist threats; securing U.S. borders and transportation systems; and organizing emergency preparedness and response efforts. DHS began operations in March 2003. Its establishment represented a fusion of 22 federal agencies to coordinate and centralize the leadership of many homeland security activities under a single department. We have evaluated many of DHS’s management functions and programs since the department’s establishment, and have issued over 400 related products.
In particular, in August 2007, we reported on the progress DHS had made since its inception in implementing its management and mission functions. We also reported on broad themes that have underpinned DHS’s implementation efforts, such as agency transformation, strategic planning, and risk management. Over the past five years, we have made over 900 recommendations to DHS on ways to improve operations and address key themes, such as to develop performance measures and set milestones for key programs and implement internal controls to help ensure program effectiveness. DHS has implemented some of these recommendations, taken actions to address others, and taken other steps to strengthen its mission activities and facilitate management integration. DHS has made progress in implementing its management functions in the areas of acquisition, financial, human capital, information technology, and real property management. Overall, DHS has made more progress in implementing its mission functions—border security; immigration enforcement; immigration services; and aviation, surface transportation, and maritime security; for example—than its management functions, reflecting an initial focus on implementing efforts to secure the homeland. DHS has had to undertake these critical missions while also working to transform itself into a fully functioning cabinet department—a difficult undertaking for any organization and one that can take, at a minimum, 5 to 7 years to complete even under less daunting circumstances. As DHS continues to mature as an organization, we have reported that it will be important that it works to strengthen its management areas since the effectiveness of these functions will ultimately impact its ability to fulfill its mission to protect the homeland. Acquisition Management. 
DHS’s acquisition function includes managing and overseeing nearly $16 billion in acquisitions to support its broad and complex missions, such as information systems, new technologies, aircraft, ships, and professional services. DHS has recognized the need to improve acquisition outcomes and taken some positive steps to organize and assess the acquisition function, but continues to lack clear accountability for the outcomes of acquisition dollars spent. A common theme in our work on acquisition management is DHS’s struggle to provide adequate support for its mission components and resources for departmentwide oversight. DHS has not yet accomplished its goal of integrating the acquisition function across the department. For example, the structure of DHS’s acquisition function creates ambiguity about who is accountable for acquisition decisions because it depends on a system of dual accountability and cooperation and collaboration between the Chief Procurement Officer (CPO) and the component heads. In June 2007, DHS officials stated that they were in the process of modifying the lines of business management directive, which exempts the Coast Guard and the Secret Service from complying, to ensure that no contracting organization is exempt. This directive has not yet been revised. In September 2007, we reported on continued acquisition oversight issues at DHS, identifying that the department has not fully ensured proper oversight of its contractors providing services closely supporting inherently governmental functions. The CPO has established a departmentwide program to improve oversight; however, DHS has been challenged to provide the appropriate level of oversight and management attention to its service contracting and major investments, and we continue to be concerned that the CPO may not have sufficient authority to effectively oversee the department’s acquisitions. DHS still has not developed clear and transparent policies and processes for all acquisitions.
Concerns have been raised about how the investment review process has been used to oversee its largest acquisitions, and the investment review process is still under revision. We have ongoing work reviewing oversight of DHS’s major investments, which follows up on our prior recommendations. Regarding the acquisition workforce, our work and the work of the DHS IG have found acquisition workforce challenges across the department; we have ongoing work in this area as well. Financial Management. DHS’s financial management efforts include consolidating or integrating component agencies’ financial management systems. DHS has made progress in addressing financial management and internal control weaknesses and has designated a Chief Financial Officer, but the department continues to face challenges in these areas. However, since its establishment, DHS has been unable to obtain an unqualified or “clean” audit opinion on its financial statements. For fiscal year 2007, the independent auditor issued a disclaimer on DHS’s financial statements and identified eight significant deficiencies in DHS’s internal control over financial reporting, seven of which were so serious that they qualified as material weaknesses. DHS has taken steps to prepare corrective action plans for its internal control weaknesses by, for example, developing and issuing a departmentwide strategic plan for the corrective action plan process and holding workshops on corrective action plans. While these are positive steps, DHS and its components have not yet fully implemented corrective action plans to address all significant deficiencies—including the material weaknesses—identified by previous financial statement audits. According to DHS officials, the department has developed goals and milestones for addressing these weaknesses in its internal control over financial reporting.
Until these weaknesses are resolved, DHS will not be in a position to provide reliable, timely, and useful financial data to support day-to-day decision making. Human Capital Management. DHS’s key human capital management areas include pay, performance management, classification, labor relations, adverse actions, employee appeals, and diversity management. DHS has significant flexibility to design a modern human capital management system, and in October 2004 DHS issued its human capital strategic plan. DHS and the Office of Personnel Management jointly released the final regulations on DHS’s new human capital system in February 2005. Although DHS intended to implement the new personnel system in the summer of 2005, court decisions enjoined the department from implementing certain labor management portions of the system. DHS has since taken actions to implement its human capital system. In July 2005 DHS issued its first departmental training plan, and in April 2007, it issued its Fiscal Year 2007 and 2008 Human Capital Operational Plan. This plan identifies five department priorities—hiring and retaining a talented and diverse workforce; creating a DHS-wide culture of performance; creating high-quality learning and development programs for DHS employees; implementing a DHS-wide integrated leadership system; and being a model of human capital service excellence. DHS has met some of the goals identified in the plan, such as developing a hiring model and a communication plan. However, more work remains for DHS to fully implement its human capital system. For example, DHS has not yet taken steps to fully link its human capital planning to overall agency strategic planning nor has it established a market-based and more performance- oriented pay system. DHS has also faced difficulties in developing and implementing effective processes to recruit and hire employees.
Although DHS has developed its hiring model and provided it to all components, we reported in August 2007 that DHS had not yet assessed components’ practices against the model. Furthermore, employee morale at DHS has been low, as measured by the results of the 2006 U.S. Office of Personnel Management Federal Human Capital Survey. DHS has taken steps to seek employee feedback and involve them in decision making by, for example, expanding its communication strategy and developing an overall strategy for addressing employee concerns reflected in the survey results. In addition, although DHS has developed a department-level training strategy, it has faced challenges in fully implementing this strategy. Information Technology Management. DHS’s information technology management efforts should include: developing and using an enterprise architecture, or corporate blueprint, as an authoritative frame of reference to guide and constrain system investments; defining and following a corporate process for informed decision making by senior leadership about competing information technology investment options; applying system and software development and acquisition discipline and rigor when defining, designing, developing, testing, deploying, and maintaining systems; establishing a comprehensive, departmentwide information security program to protect information and systems; having sufficient people with the right knowledge, skills, and abilities to execute each of these areas now and in the future; and centralizing leadership for extending these disciplines throughout the organization with an empowered Chief Information Officer. DHS has undertaken efforts to establish and institutionalize the range of information technology management controls and capabilities noted above that our research and past work have shown are fundamental to any organization’s ability to use technology effectively to transform itself and accomplish mission goals.
For example, DHS has organized roles and responsibilities for information technology management under the Chief Information Officer. DHS has also developed an information technology human capital plan that is largely consistent with federal guidance and associated best practices. In particular, we reported that the plan fully addressed 15 and partially addressed 12 of 27 practices set forth in the Office of Personnel Management’s human capital framework. However, we reported that DHS’s overall progress in implementing the plan had been limited. With regard to information technology investment management, DHS has established a management structure to help manage its investments. However, DHS has not always fully implemented the key practices our information technology investment management framework specifies as being needed to actually control investments. Furthermore, DHS has developed an enterprise architecture, but we have reported that major DHS information technology investments have not been fully aligned with DHS’s enterprise architecture. In addition, DHS has not fully implemented a comprehensive information security program. While it has taken actions to ensure that its certification and accreditation activities are completed, the department has not shown the extent to which it has strengthened incident detection, analysis, and reporting and testing activities. Real Property Management. DHS’s responsibilities for real property management are specified in Executive Order 13327, “Federal Real Property Asset Management,” and include the establishment of a Senior Real Property Officer, development of an asset inventory, and development and implementation of an asset management plan and performance measures.
In June 2006, the Office of Management and Budget upgraded DHS’s Real Property Asset Management Score from red to yellow after DHS developed an Asset Management Plan, developed a generally complete real property data inventory, submitted this inventory for inclusion in the governmentwide real property inventory database, and established performance measures consistent with Federal Real Property Council standards. DHS also designated a Senior Real Property Officer. However, in August 2007 we reported that DHS had yet to demonstrate full implementation of its asset management plan and full use of asset inventory information and performance measures in management decision making. Our work has identified various cross-cutting issues that have hindered DHS’s progress in its management areas. We have reported that while it is important that DHS continue to work to strengthen each of its core management functions, it is equally important that these key issues be addressed from a comprehensive, departmentwide perspective to help ensure that the department has the structure and processes in place to effectively address the threats and vulnerabilities that face the nation. These issues include agency transformation, strategic planning and results management, and accountability and transparency. Agency Transformation. In 2007 we reported that DHS’s implementation and transformation remained high risk because DHS had not yet developed a comprehensive management integration strategy and its management systems and functions⎯especially related to acquisition, financial, human capital, and information technology management⎯were not yet fully integrated and wholly operational. We have recommended, among other things, that agencies on the high-risk list produce a corrective action plan that defines the root causes of identified problems, identifies effective solutions to those problems, and provides for substantially completing corrective measures in the near term. 
Such a plan should include performance metrics and milestones, as well as mechanisms to monitor progress. In March 2008 we received a draft of DHS’s corrective action plan and have provided the department with some initial feedback. We will continue to review the plan and expect to be able to provide additional comments on the plan in the near future. Strategic Planning and Results Management. DHS has not always implemented effective strategic planning efforts, has not yet issued an updated strategic plan, and has not yet fully developed adequate performance measures or put into place structures to help ensure that the agency is managing for results. DHS has developed performance goals and measures for some of its programs and reports on these goals and measures in its Annual Performance Report. However, some of DHS’s components have not developed adequate outcome-based performance measures or comprehensive plans to monitor, assess, and independently evaluate the effectiveness of their plans and performance. Since issuance of our August 2007 report, DHS has begun to develop performance goals and measures for some areas in an effort to strengthen its ability to measure its progress in key management and mission areas. We commend DHS’s efforts to measure its progress in these areas and have agreed to work with the department to provide input to help strengthen established measures. Accountability and Transparency. Accountability and transparency are critical to the department effectively integrating its management functions and implementing its mission responsibilities. We have reported that it is important that DHS make its management and operational decisions transparent enough so that Congress can be sure that it is effectively, efficiently, and economically using the billions of dollars in funding it receives annually. 
We have encountered delays at DHS in obtaining access to needed information, which have impacted our ability to conduct our work in a timely manner. Since we highlighted this issue last year to this subcommittee, our access to information at DHS has improved. For example, TSA has worked with us to improve its process for providing us with access to documentation. DHS also provided us with access to its national level preparedness exercise. Moreover, in response to the provision in the DHS Appropriations Act, 2008, that restricts a portion of DHS’s funding until DHS certifies and reports that it has revised its guidance for working with GAO, DHS has provided us with a draft version of its revised guidance. We have provided DHS with comments on this draft and look forward to continuing to collaborate with the department. DHS is now 5 years old, a key milestone for the department. Since its establishment, DHS has had to undertake actions to secure the border and the transportation sector and defend against, prepare for, and respond to threats and disasters while simultaneously working to transform itself into a fully functioning cabinet department. Such a transformation is a difficult undertaking for any organization and can take, at a minimum, 5 to 7 years to complete even under less daunting circumstances. Nevertheless, DHS’s 5-year anniversary provides an opportunity for the department to review how it has matured as an organization. As part of our broad range of work reviewing DHS management and mission programs, we will continue to assess in the coming months DHS’s progress in addressing high-risk issues. In particular, we will continue to assess the progress made by the department in its transformation and information sharing efforts, and assess whether any progress made is sustainable over the long term. 
Further, as DHS continues to evolve and transform, we will review its progress and performance and provide information to Congress and the public on its efforts. This concludes my prepared statement. I would be pleased to answer any questions you and the Subcommittee Members may have. For further information about this testimony, please contact Norman J. Rabkin, Managing Director, Homeland Security and Justice, at 202-512-8777 or rabkinn@gao.gov. Other key contributors to this statement were Cathleen A. Berrick, Anthony DeFrank, Rebecca Gambler, and Thomas Lombardi. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) began operations in March 2003 with missions that include preventing terrorist attacks from occurring within the United States, reducing U.S. vulnerability to terrorism, minimizing damages from attacks that occur, and helping the nation recover from any attacks. GAO has reported that the implementation and transformation of DHS is an enormous management challenge. GAO's prior work on mergers and acquisitions found that successful transformations of large organizations, even those faced with less strenuous reorganizations than DHS, can take at least 5 to 7 years to achieve. This testimony addresses (1) the progress made by DHS in implementing its management functions; and (2) key issues that have affected the department's implementation efforts. This testimony is based on GAO's August 2007 report evaluating DHS's progress between March 2003 and July 2007; selected reports issued since July 2007; and GAO's institutional knowledge of homeland security and management issues. Within each of its management areas--acquisition, financial, human capital, information technology, and real property management--DHS has made some progress, but has also faced challenges. DHS has recognized the need to improve acquisition outcomes and taken some positive steps to organize and assess the acquisition function, but continues to lack clear accountability for the outcomes of acquisition dollars spent. The department also has not fully ensured proper oversight of its contractors providing services closely supporting inherently governmental functions. DHS has designated a Chief Financial Officer and taken actions to prepare corrective action plans for its internal control weaknesses. However, DHS has been unable to obtain an unqualified audit opinion of its financial statements, and for fiscal year 2007 the independent auditor identified significant deficiencies in DHS's internal control over financial reporting. 
DHS has taken actions to implement its human capital system by, for example, issuing a departmental training plan and human capital operational plan. Among other things, DHS still needs to implement a human capital system linked to its strategic plan, establish a market-based and more performance-oriented pay system, and seek more routine feedback from employees. DHS has taken actions to develop information technology management controls, such as developing an information technology human capital plan and developing policies to ensure the protection of sensitive information. However, DHS has not yet fully implemented a comprehensive information security program or a process to effectively manage information technology investments. DHS has developed an Asset Management Plan and established performance measures consistent with Federal Real Property standards. However, DHS has yet to demonstrate full implementation of its Asset Management Plan or full use of asset management inventory information. Various cross-cutting issues have affected DHS's implementation efforts. For example, DHS has not yet updated its strategic plan and put in place structures to help it manage for results. Accountability and transparency are critical to effectively implementing DHS's management functions. GAO has experienced delays in obtaining access to needed information from DHS, though over the past year, GAO's access has improved. GAO is hopeful that planned revisions to DHS's guidance for working with GAO will streamline GAO's access to documents and officials. DHS's 5-year anniversary provides an opportunity for the department to review how it has matured as an organization. As part of its broad range of work, GAO will continue to assess DHS's progress in addressing high-risk issues. In particular, GAO will continue to assess the progress made by the department in its transformation efforts and whether any progress made is sustainable over the long term.
The United States is committed to building foreign partner capacity to, among other things, (1) encourage effective and mutually beneficial relations to support international peace and security efforts and (2) enable partner countries to use their own resources with maximum effectiveness. Rule of law assistance is an important component of U.S. efforts to build foreign partner capacity, and according to officials, includes defense institution-building and foreign partner training focused on military justice, human rights, anticorruption, and professionalization of the military. Officials noted that countries with an established rule of law system have more stability and fewer people trying to flee to other countries or turning to terrorism. DOD integrates rule of law concepts into various types of training and assistance carried out under Title 10 and Title 22 of the United States Code. Title 10 covers authorities for the armed forces, focusing primarily on DOD, whereas Title 22 covers authorities regarding foreign relations, which primarily fall under State. According to officials, most of this assistance is carried out through military-to-military events. DOD schools, such as the U.S. Army War College, may also cover some rule of law concepts as part of other training. (See app. II for examples of rule of law assistance that various DOD entities implement.) DIILS is located in Newport, Rhode Island; its predecessor, the International Training Detachment of the Naval Justice School, was created in 1992 to provide foreign militaries with training on the rule of law, including human rights and the law of armed conflict, such as rules for the use of force. In response to DOD and State initiatives in the past decade, DIILS has expanded its curriculum to include rule of law issues related to peacekeeping and combating terrorism. Since 1992, DIILS’s primary focus has been to provide rule of law assistance. 
According to officials, DIILS has reached over 30,000 participants in over 140 countries since it was created. DIILS is funded entirely through reimbursements from DOD and State. Demand for DIILS assistance may change from year to year. For fiscal years 2013 through 2016, DOD and State used five primary funding accounts to fund DIILS’s rule of law assistance each year, with various other accounts used less consistently. During this period, DIILS disbursed about $6.2 million a year, on average, for rule of law events, $24.6 million in total. Figure 1 shows the funding sources and the proportion of DIILS disbursements attributed to each source for fiscal years 2013 through 2016. The funding sources for DIILS assistance are described below, in order of the amount of total disbursements for fiscal years 2013 through 2016, from largest to smallest. 1. International Military Education and Training (IMET) funds account for 48 percent, $11.9 million, of DIILS disbursements for fiscal years 2013 through 2016 and include DIILS operating expenses. IMET funds are appropriated to the President. State prioritizes assistance and develops policy guidance, while DOD has the primary responsibility for implementing IMET activities. Primarily through IMET, DIILS currently provides 17 mobile and 8 resident courses on topics ranging from military justice and peacekeeping operations to legal aspects of combating corruption and terrorism. Most mobile courses are taught within a week, while resident courses are generally 3 weeks. 2. Operation and Maintenance Defense-Wide funds account for 24 percent, $5.9 million, of DIILS disbursements for fiscal years 2013 through 2016 and cover overhead and planning requirements for DIILS to develop and implement long-term global strategies for rule of law security cooperation, including defense institution-building assistance. DOD’s Office of the Under Secretary of Defense for Policy is responsible for setting priorities and distributing these funds. 
For example, DIILS has completed legal assessments in Armenia, Indonesia, and Ukraine; supported military legislative reform in Botswana; and conducted defense institution-building scoping visits in Cambodia, Guatemala, and Thailand, among other efforts. 3. Global Train and Equip Program funds account for about 11 percent, $2.6 million, of DIILS disbursements for fiscal years 2013 through 2016. DOD’s Office of the Under Secretary of Defense for Special Operations and Low Intensity Conflict is responsible for policy guidance and oversight of the Global Train and Equip Program, with State concurrence. These funds provide military equipment, supplies, and training to build the capacity of foreign military forces to conduct counterterrorism operations, among other things. The legislation authorizing these activities includes a provision requiring each project to include elements that promote human rights and respect for civilian authority within that country, a requirement that DOD has designated DIILS to fulfill. According to DIILS, it meets this requirement in a 2-day, in-country seminar. 4. Peacekeeping Operations funds account for about 9 percent, $2.2 million, of DIILS disbursements for fiscal years 2013 through 2016 and are appropriated to State, and managed by its Bureau of Political-Military Affairs, for use in regional security peacekeeping operations and other programs carried out to further U.S. national security interests. According to DOD, these funds can be transferred to DOD to support specific requirements identified by a U.S. embassy or geographic combatant command. Peacekeeping Operations funding provided to DIILS since fiscal year 2013 has primarily focused on building the legal capacity of magistrates in the Armed Forces of the Democratic Republic of the Congo. 5. Foreign Military Financing funds account for 4 percent, $1.0 million, of DIILS disbursements for fiscal years 2013 through 2016. 
Foreign Military Financing funds are appropriated to the President. State determines policy priorities for the use of the account, while DOD’s DSCA has overall responsibility for implementing Foreign Military Financing funded programs. Foreign Military Financing funds allow the United States to sell defense articles and services to foreign countries and international organizations if doing so will strengthen the security of the United States and promote world peace. DIILS is one DOD school, among others, that uses these funds to provide training, such as resident and mobile courses, as well as defense institution-building efforts. Since fiscal year 2013, these funds have been used in Belize, Colombia, Ghana, Mexico, and Ukraine, among others. 6. Other sources account for 4 percent, $0.9 million, of DIILS disbursements for fiscal years 2013 through 2016 and include the Combating Terrorism Fellowship Program and the Global Security Contingency Fund, among others. The purpose of the Combating Terrorism Fellowship Program is to provide foreign partners’ mid- and senior-level defense and security officials with operational and strategic-level education on combating terrorism while reinforcing partner-nation capabilities. The Global Security Contingency Fund aims to address key security challenges in partner countries in the near- to midterm. DIILS provides three types of rule of law assistance, each with a different DOD or State entity requesting the assistance. The three assistance types are core rule of law training, defense institution-building, and statutorily required human rights training. For fiscal years 2013 through 2016, DIILS conducted over 500 rule of law events in almost 100 countries, with U.S. Africa Command followed by U.S. European Command as the U.S. geographic combatant commands holding the most DIILS events. 
DIILS assesses the quality of its assistance in various ways, including implementing student surveys, conducting pre- and post-course tests, and instituting real-time classroom assessments of participants’ comprehension. In some cases, DIILS course facilitators also produce after action reports that may include observations of the assistance provided and challenges, if any, in providing the assistance. Our interviews with foreign partner recipients of DIILS assistance and U.S. officials working to provide it generally reflected positive views about the quality of DIILS’s services. DIILS provides three types of rule of law assistance, each with different DOD or State entities generating requests. A summary of these assistance types, along with the primary funding source of each, is shown in table 1. Figure 2 illustrates the various DOD and State entities involved in requesting and funding the different types of DIILS assistance. For fiscal years 2013 through 2016, DIILS provided over 500 training and institution-building events to about 100 countries; that includes over 200 core rule of law events, about 200 defense institution-building events, and about 100 statutorily required human rights training events. Of DOD’s six geographic combatant commands, U.S. Africa Command had the greatest number of DIILS events, with 164, followed by U.S. European Command, with 101. Figure 3 further illustrates the comparison in number of DIILS events by geographic location. Figure 4 breaks out the DIILS assistance within each geographic combatant command and DIILS headquarters by assistance type for fiscal years 2013 through 2016. This figure shows that U.S. Africa Command and U.S. Pacific Command have held the most defense institution-building assistance events, while U.S. Africa Command and U.S. European Command have held the most statutorily required human rights training events. 
In addition, the figure shows that DIILS’s school in Newport, Rhode Island, holds the most core rule of law training events. DIILS officials provided examples of several steps that they take to assess the quality of DIILS’s assistance. These actions generally aligned with leading practices for assessing the quality of training, based on the information officials provided. These include using multiple assessment methods, such as the following types of activities: Student surveys, pre- and post-course tests, and instructor feedback. To improve course content, DIILS facilitators solicit feedback on their effectiveness and on the relevance of the course materials through student surveys and pre- and post-course tests. In addition, DIILS encourages facilitators to provide feedback on how courses can be improved. For example, during fieldwork in Uganda, we reviewed student feedback surveys from a 2-day course providing statutorily required human rights training. The survey responses expressed student opinions about the usefulness of the course content and the topics that were most valuable to them. The responses were generally positive, and participants most frequently noted that human rights, international humanitarian law, and rules of engagement were the most valuable topics covered in the course. Real-time assessments of participant comprehension. DIILS facilitators generally encourage participation and gather feedback from participants in real time to assess understanding, generally making adjustments to their teaching as needed. For example, in Uganda, we observed facilitators using classroom voting technology as a basis for discussion and to signal when students were struggling with the course content, allowing instructors to identify difficult concepts and spend more time clarifying those concepts. 
During fieldwork in Botswana, we observed DIILS officials updating planned topics daily and corresponding preparatory materials nightly based on how the meetings progressed and on daily feedback from foreign partners as to what topics would be most helpful to discuss. After action reports. DIILS facilitators prepare after action reports for certain events to record a summary of the assistance they provided, outcomes of the training, any challenges they encountered, and further actions to be taken. DIILS officials stated that after action reports provide continuity and capture lessons learned for DIILS facilitators who will be traveling to those countries in the future. According to DIILS officials’ estimates, due to limited staff resources, facilitators had completed only a little more than half of the after action reports required during fiscal years 2013 through 2016, an issue we discuss later in this report. Photographs in figure 5 show classroom voting technology that DIILS facilitators use to assess training participants’ understanding in real time, a typical post-course student feedback survey, and DIILS facilitators soliciting student participation and feedback during a training session in Uganda. Officials from the five geographic combatant commands we spoke with as well as other DOD and State officials and foreign partners who participated in DIILS’s events lauded DIILS for its high-quality expertise and assistance. U.S. Southern Command officials noted that its service components— Army, Navy, Air Force, and Marines—provide rule of law assistance when DIILS cannot provide it. However, according to these officials, the training its service components provide generally lacks the same level of expertise as the training provided by DIILS. 
Senior foreign military officials in Botswana referred to DIILS’s assistance as “invaluable” and told us that although they examined the best practices of several countries for modernizing defense legislation, they chose to work most closely with DIILS. Further, the foreign officials noted that the progress they have made could not have been achieved without DIILS’s assistance. Foreign military officials in Uganda and Newport told us that they would rate the DIILS facilitators a 10 out of 10 and said they wanted more DIILS training. A participant specifically reported learning new material that he could apply during his upcoming deployment. According to DIILS officials, senior foreign military officials reacted positively to DIILS’s resident courses. For example, a participant praised the course content and told DIILS officials that a lecture on gender-based violence had been particularly thought-provoking and had changed the way he viewed the topic. Another foreign official told DIILS staff that prior to attending the course he did not have a high opinion of U.S. military personnel but that he left the course with positive opinions of the United States and U.S. military officials. Underscoring the value that U.S. officials place on DIILS institution-building assistance, DIILS officials cited the following example. DIILS was planning to provide defense institution-building assistance to military officials in Belize, but when the country team there learned that funding from the Office of the Under Secretary of Defense for Policy was no longer available for use in Belize, the U.S. officials in Belize found another funding source to bring DIILS facilitators in-country to provide the assistance. 
In addition, several DIILS officials noted that they view defense institution-building—more than any other type of assistance DIILS provides—as contributing to concrete, measurable outcomes, such as developing an instruction manual for magistrates in the Democratic Republic of the Congo, and to long-term relationships with partner nations. DOD considers foreign partner needs in planning for DIILS’s rule of law assistance in accordance with Presidential Policy Directive 23. This directive provides policy guidance for U.S. security sector assistance, including rule of law, and highlights that U.S. agencies should consider key factors such as foreign partner needs in planning and implementing assistance. In-country U.S. officials are primarily responsible for working with foreign partners to identify and propose efforts to address their needs, which get incorporated into DIILS’s rule of law assistance, among other military training. For its core rule of law training, DIILS participates in regional and annual interagency Security Cooperation Education and Training Working Groups to identify and address foreign partner needs. For its defense institution-building assistance, DIILS conducts in-depth assessments in-country to identify and address foreign partner needs. In-country DOD security cooperation officials are primarily responsible for identifying and addressing foreign partner needs that get incorporated into DIILS’s rule of law assistance, among other military training; for planning this training and assistance; and for ensuring that it is aligned with overall U.S. strategy as well as with regional and country-specific military strategies. In particular, they are required to know the partner nation’s requirements and capabilities for rule of law assistance, and they regularly advise and assist host-nation counterparts in identifying and programming training requirements. 
In addition to regular communication with the partner nation, in-country DOD security cooperation officials coordinate regularly with their respective geographic combatant commands and country teams and with DOD and State headquarters officials to ensure that the assistance is aligned with U.S. strategy. Additionally, in-country DOD security cooperation officials develop integrated country strategies with country teams to further address partner needs. State requires all embassies to complete an integrated country strategy—a 3-year strategy to be developed collaboratively under chief-of-mission leadership and authority with the input of DOD and any other in-country U.S. agencies under chief-of-mission authority. These strategies aim to articulate U.S. priorities in a given country by setting mission goals and objectives, including those pertaining to rule of law assistance, among other security sector assistance programs. State’s Bureau of Political-Military Affairs officials use these integrated country strategies, as well as DOD’s theater campaign plans and country plans, as the basis for resource allocation for core rule of law assistance. The country team consults with the host-nation government and military officials, when appropriate, regarding the nation’s needs, priorities, and constraints. In reviewing integrated country strategies of the top 10 partner countries receiving DIILS assistance, we determined that eight of the 10 strategies specifically noted the importance of foreign military rule of law assistance. These eight strategies mentioned several aspects of rule of law, including professionalization of the military, human rights, and military justice. Additionally, seven of these eight strategies noted one or more specific actions to be taken to address rule of law assistance to foreign military partners, such as creating anticorruption task forces and holding human rights-themed discussions with foreign military officials. 
For its core rule of law training, DIILS participates in regional, annual interagency Security Cooperation Education and Training Working Groups to identify and address foreign partner needs. For defense institution-building assistance, DIILS conducts in-depth assessments to identify and address foreign partner needs. DIILS works closely with various entities to plan and deliver statutorily required human rights training. However, since the law requires that DOD provide this training as part of its Global Train and Equip Program assistance, we do not include a discussion about planning this type of assistance in this review. DIILS staff coordinate with geographic combatant commands, DOD security cooperation officials, and State officials, among others, at annual interagency working group conferences to identify and plan for training opportunities in line with foreign partner needs. DOD requires each geographic combatant command to host such a conference each year—known as a Security Cooperation Education and Training Working Group. In addition to addressing foreign partner needs, these conferences also allow interagency collaboration on reviewing, coordinating, and finalizing each country’s rule of law education and training plan for the upcoming budget year and ensuring that activities are within policy guidelines, among other things. DIILS—along with other DOD schools—attends these conferences to educate in-country DOD security cooperation officials, and sometimes locally employed staff from host nations, about the core rule of law training and defense institution-building assistance it can offer and to identify opportunities to match U.S. courses with partner-country needs (see fig. 6). In advance of a Security Cooperation Education and Training Working Group, in-country DOD security cooperation officials develop country-specific education and training plans to address foreign partner and U.S. strategic needs, often with input from foreign partners. 
These plans include the partner country’s rule of law assistance and other education goals and identify the individuals who will participate, with input from the host-nation’s military officials. DOD in-country security cooperation officials then select training courses to address the mix of partner-country and U.S. goals outlined in the combatant command and country plans. During the 2016 U.S. Africa Command Security Cooperation Education and Training Working Group in Germany, we observed a DIILS official discussing assistance options with security cooperation officials to determine the best way to respond to identified foreign partner needs. According to DIILS, State headquarters, and geographic combatant command officials, holding these conferences in person has various benefits in addition to planning the rule of law assistance that DIILS provides. For example, the conferences facilitate discussions between DIILS officials and DOD security cooperation officials, which can clarify for DIILS officials the needs of foreign partners and enable the DIILS officials to make better recommendations on what courses are best suited to meet those needs. The conferences also help to ensure that country teams stay within their allotted IMET budgets and assist in-country DOD security cooperation officials in adhering to State policy and any legal requirements. For example, we observed State officials asking several in-country DOD security cooperation officials about the availability of female foreign military officials to participate in training, a State policy priority, and State officials informing the security cooperation officials about the requirement to allocate a certain percentage of their IMET funds for Expanded-IMET. In addition to assisting with U.S. efforts to meet the needs of foreign partners, the Security Cooperation Education and Training Working Group conferences also provide an opportunity for U.S. officials to ensure that training aligns with U.S. 
strategic goals, among other benefits. State officials noted that DOD security cooperation officials benefit from taking time to think strategically and deepen their understanding of how their efforts fit into the broader context of U.S. strategy. We observed State officials discussing U.S. strategic goals and making adjustments to several countries’ training plans with these goals in mind. According to officials from State’s Bureau of Political-Military Affairs, holding these conferences in person also helps in-country DOD security cooperation officials build relationships among agencies and country teams and offers opportunities for security cooperation officials to learn from each other about what is working well and what is not. Moreover, they noted that it is easier to hold officials accountable to identify and address problems when the working group is held in person than when it is held virtually. For example, a State official said that he has removed courses from a country list that were not consistent with the priorities expressed in the integrated country strategy; he noted that identifying and addressing such concerns is more challenging in the absence of the real-time, person-to-person communication that the conferences facilitate. For defense institution-building assistance, DIILS conducts in-depth assessments to ensure that foreign partner needs are addressed and consonant with U.S. goals. To identify candidates for these assessments, DIILS leverages its familiarity with countries’ needs and capacity for rule of law assistance—knowledge gained through regular communication with in-country DOD security cooperation officials, geographic combatant commands, and foreign military officials who communicate the assistance needs of their countries, according to DIILS officials. 
DIILS and other defense institution-building providers then share recommendations for countries they would like to assist, contingent on the availability of resources, with the Office of the Under Secretary of Defense for Policy. That office considers the recommendations in setting priorities for defense institution-building assistance, including the assistance that the office will request of DIILS. According to State officials, for State-funded defense institution-building assistance, in-country State officials identify needs for defense institution-building assistance and work through State to provide funding to DOD for DIILS to implement the assistance. Once the Office of the Under Secretary of Defense for Policy and State select a country to receive defense institution-building assistance, DIILS will travel to the individual countries to meet with in-country DOD security cooperation officials and foreign military and government officials to assess that country’s needs. DIILS officials take these needs into account to formulate an individualized assistance plan for each country. For example, according to a DIILS official, to conduct an assessment of a country’s legal system, it could take about 2 months of study for DIILS officials to learn everything they can about the country before traveling there, including whom they will need to meet with while in-country. In addition, for such an assessment, DIILS would research the country’s legal authorities beforehand to generate questions. DIILS would send a team of up to four people to conduct the assessment, each with one or more areas to focus on, such as the penal system. Typically, the DIILS team would be in-country for 3 to 5 days and would produce a 40- to 70-page assessment, including a history of the foreign government, information on its key military officials and military-related legal authorities, and a roadmap for helping the partner nation move forward. 
In-country DOD security cooperation officials could use this assessment to decide whether and to what extent to pursue the course of action outlined in the assessment. According to DIILS officials, DIILS does not provide assistance to a country if that country is not receptive to it. The demand for DIILS’s assistance has increased over time, as summarized in table 2. However, DOD has not assessed whether the size of DIILS’s workforce aligns with the scope of its mission, which has been expanded in response to statutory requirements and U.S. strategic goals. Federal internal control standards highlight the need for management to conduct reviews at the functional or activity level, which may include conducting a workforce review or developing a corrective action plan, if needed. Amid increasing demand and interest in DIILS’s provision of defense institution-building assistance and statutorily required human rights training, the number of events that DIILS has provided has increased since fiscal year 2013, while its workforce has increased by a single full-time equivalent (FTE) staff, going from 28 FTE staff in 2013 to 29 FTE staff in 2016, as illustrated by figure 7. Because the demand for DIILS’s assistance is generated externally, DIILS does not have full control of its workload. As figure 7 further shows, the number of events that DIILS has provided for all rule of law assistance has increased since fiscal year 2013—with various DOD and State sources requesting that DIILS complete 52 more events in fiscal year 2016 than it completed in fiscal year 2013. Currently, to respond to requests for assistance, DIILS employs 29 FTE staff, which consist of 20 civilian staff and 9 military Staff Judge Advocates who represent the Army, the Navy, the Air Force, the Marines, and the Coast Guard. 
To meet increasing demands for assistance and requests for specialized subject matter expertise, DIILS augments its FTE staff with military reserve Staff Judge Advocates and civilian experts. DIILS officials said that its adjunct database lists about 700 individuals and that multiple adjunct staff generally participate in each DIILS event. Although DIILS is funded entirely through reimbursements from DOD and State, its FTE staff must be approved by DSCA. In fiscal year 2015, DSCA authorized one additional military FTE staff to address increasing partner countries’ maritime security needs. DIILS also requested an additional civilian FTE staff in fiscal year 2015 in part to help meet the increasing demands of defense institution-building assistance; however, according to DOD officials, DSCA did not approve this request. DOD officials told us that the decision was based on budget considerations rather than on an assessment of whether the size of DIILS’s workforce aligns with the scope of its mission to provide rule of law assistance. Although DIILS officials expressed the need for more than one additional FTE staff, they noted that they have not requested additional FTE staff because DSCA has communicated that additional requests would not be approved. According to DIILS officials, DIILS faces staffing constraints in part because of increased demands and in part because defense institution-building assistance is more resource intensive than DIILS core rule of law and statutorily required human rights training. As a result of the staffing constraints, DIILS has expended only 55 percent—$5.9 of $10.7 million—of the defense institution-building funds that the Office of the Under Secretary of Defense for Policy has made available since fiscal year 2013. In addition, DIILS has not completed all required after action reports for the training and assistance it conducted because DIILS has not had the staff capacity to do so, according to DIILS officials. 
These reports are necessary, according to DIILS officials, for conveying lessons learned and providing consistency for military officials who serve on rotational assignments and others who serve as adjunct faculty. According to DIILS officials, only about 55 percent of required after action reports since fiscal year 2013 had been completed as of August 2016 because the staff assigned to complete them spend up to three-quarters of their time on assignments requiring travel abroad. These staff often travel several weeks in a row to multiple events on different continents where they are presenting course materials and executing logistics arrangements for the teams of facilitators and the participants. Moreover, as a result of staffing constraints, DIILS has potentially missed opportunities to provide additional training and assistance to further address foreign partner needs. For example, an official from U.S. Southern Command told us that they would hire DIILS to provide additional assistance if they thought DIILS had the staff capacity to provide it. However, because this perceived lack of resources may limit DIILS’s ability to provide the assistance, the official at U.S. Southern Command said that resources within the combatant command are used to provide assistance instead, often by officials who lack the same level of expertise provided through DIILS. During fieldwork in Germany, we also observed a DIILS official explaining to a DOD in-country official that he could only guarantee providing the country with core rule of law training if the country official used IMET funds. He noted that this is because DIILS’s priority is to deliver IMET-funded core rule of law training and DIILS may lack staff resources to provide assistance through other funds. 
In addition, according to DIILS officials, although DIILS has the physical space to increase the capacity for the courses it holds in Newport, Rhode Island, from 50 to 75 participants, DIILS is facing logistical challenges in increasing capacity as a result of staffing constraints. For example, DIILS does not have sufficient administrative staff to process travel documents and provide the necessary transportation for that number of resident course participants, according to DIILS officials. Federal internal control standards highlight the need for management to conduct reviews at the functional or activity level, which may include conducting a workforce review or developing a corrective action plan, if needed. Applying this standard would help DOD address its Fiscal Years 2013–2018 Strategic Workforce Plan Report, which states that the success of the DOD mission requires a well-maintained, properly sized, and highly capable civilian workforce that aligns to mission and workload requirements. Upholding the rule of law is critical to U.S. peace- and security-building efforts abroad. DIILS’s efforts to train foreign partners in respect for human rights, the laws of armed conflict, and rules on the use of force, among other rule of law concepts, are an essential element of U.S. efforts to build stronger coalitions to combat international threats. Since fiscal year 2013, the demand for DIILS’s assistance has increased by nearly 50 percent. According to DIILS officials, because of its expanded mission and the static size of its workforce, DIILS’s ability to meet demands for training and assistance has been constrained. These officials further note that DIILS’s ability to complete its after action reports—internal records necessary for conveying lessons learned—and to increase the capacity of its resident courses has also been constrained. DOD, however, has not assessed the extent to which the size of DIILS’s workforce aligns with the scope of its mission. 
As such, DOD may not be assured that DIILS’s existing workforce is properly aligned with its expanded mission. Ensuring a properly aligned workforce is essential for enabling DIILS to achieve its mission of training foreign partners on rule of law concepts, including greater respect for human rights, while also maximizing the efficiency of providing this training. To help ensure that DOD successfully achieves the goal of supporting foreign nations in upholding the rule of law, we recommend that the Secretary of Defense assess the extent to which the size of DIILS’s workforce is aligned with the scope of its mission, including whether DIILS has sufficient staff to complete required after action reports and to increase its resident course capacity. We provided a draft of this report to DOD and State for comment. In its written comments, which are reproduced in appendix III, DOD concurred with our recommendation. DOD also provided technical comments, which we incorporated as appropriate. State had no comments on the draft. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331, or johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. House Report 114-102 includes a provision for us to review Department of Defense (DOD) efforts to work with DOD’s foreign military partners to build rule of law capacity. 
This report examines the extent to which, for fiscal years 2013 through 2016, (1) the Defense Institute of International Legal Studies (DIILS) has provided rule of law assistance to foreign partners and assessed its quality, (2) planning processes for DIILS’s rule of law efforts have considered foreign partner needs, and (3) DOD has considered whether the size of DIILS’s workforce aligns with the scope of its mission. To address our objectives, we analyzed documents and data and interviewed officials from DOD and the Department of State (State), including DIILS, the Defense Security Cooperation Agency (DSCA), and the Office of the Under Secretary of Defense for Policy, as well as in-country DOD security cooperation and State officials, DIILS training facilitators, and foreign partner officials. We focused on rule of law assistance that DIILS provided and excluded rule of law assistance provided by other DOD entities, the U.S. Agency for International Development, and the Departments of Homeland Security and Justice. We also excluded State assistance not covered under Title 22, such as assistance provided by State’s Bureau of International Narcotics and Law Enforcement Affairs. To examine the extent to which DIILS has provided rule of law assistance to foreign partners and assessed its quality for fiscal years 2013 through 2016, we reviewed relevant authorities and identified the processes that DOD and State have in place to plan rule of law assistance. 
We focused on DIILS and the primary processes used to implement (1) core rule of law training, determined through a series of interagency working groups; (2) defense institution-building assistance, determined by the Office of the Under Secretary of Defense for Policy and State’s Bureau of Political-Military Affairs in coordination with DOD; and (3) statutorily required human rights training, delivered pursuant to the authority to build the capacity of foreign security forces as part of DOD’s Global Train and Equip Program. We also analyzed event data from DIILS for fiscal years 2013 through 2016. We used these data to determine the number of rule of law events in various countries and geographic combatant commands. To determine the reliability of DIILS’s funding data, we compared and corroborated the information, requested information from DIILS officials regarding the processes they use to collect and verify the data, and checked the data sets for reasonableness and completeness. When we found discrepancies or missing data fields, we brought them to the attention of relevant agency officials and worked with the officials to correct the discrepancies and missing fields. The funding and activity data we received from DIILS are current as of October 2016. We did not conduct a financial audit of the funding data and are not expressing a financial opinion on them. Based on the checks we performed, we determined that the funding data we collected from DIILS were sufficiently reliable for the purposes of our reporting objectives. Additionally, we obtained perspectives on the steps DIILS takes to make its assistance effective by observing DIILS training in the United States, Botswana, and Uganda and by reviewing select DIILS after action reports to gather information about the contents of these reports. We also interviewed DIILS officials and recipients of DIILS assistance. 
For the purposes of this review, we did not examine the extent to which DIILS takes these steps consistently. To examine the extent to which DIILS assessed the quality of its assistance to foreign partners, we interviewed DIILS officials about the steps they take to assess the quality of their assistance and foreign recipients about their perspectives on the assistance they received. We compared DIILS officials’ statements with leading practices that we have identified for assessing the quality of strategic training. To examine the extent to which the planning processes for DIILS’s rule of law efforts have considered the needs of foreign partners since fiscal year 2013, we observed a U.S. Africa Command interagency working group conference for developing this assistance in Garmisch, Germany. We selected this conference to observe because it was scheduled during the period from March through May 2016 when we were available to conduct fieldwork and did not conflict with other planned fieldwork. Additionally, we reviewed integrated country strategies for the 10 countries for which DIILS was reimbursed the most money for assistance in fiscal years 2013 through 2015. Of the country-specific disbursements made in over 70 countries between fiscal years 2013 and 2015, we selected integrated country strategies for the 10 countries with the highest disbursements for DIILS assistance during that period, accounting for over 60 percent of country-specific funds disbursed during that timeframe, or $3.6 million of $5.8 million. DIILS does not track DOD Operation and Maintenance Defense-Wide defense institution-building funds on a country-by-country basis, so funding for this type of assistance was not considered in selecting the top 10 countries. 
Ranked from most to least amount of DIILS assistance, these countries are (1) the Democratic Republic of the Congo, (2) Colombia, (3) Botswana, (4) the Czech Republic, (5) Lebanon, (6) Mexico, (7) India, (8) Bosnia and Herzegovina, (9) Burma, and (10) Niger. Additionally, the countries represent all six Geographic Combatant Commands: (1) U.S. Africa Command, (2) U.S. Central Command, (3) U.S. European Command, (4) U.S. Northern Command, (5) U.S. Pacific Command, and (6) U.S. Southern Command. We reviewed those country strategies that covered 2015 and 2016 because the strategies were created on a 3-year rolling basis, and strategies for 2015 and 2016 were the ones available for all 10 countries in our scope. The results of our analysis are not generalizable to all integrated country strategies. We selected two countries—Uganda and Botswana—for observation based on the nature and timing of the assistance that DIILS provided from March through May 2016, when we were available to conduct overseas fieldwork. We also conducted fieldwork at DIILS’s headquarters in Newport, Rhode Island. In Uganda, we observed DIILS officials providing statutorily required human rights training to military personnel in the Uganda People’s Defense Force, which received $12.7 million in vehicles, weapons, and communications equipment, among other supplies, in support of the African Union Mission in Somalia and its counterterrorism operations against al-Shabaab. In Botswana, we observed DIILS officials providing a week of technical assistance to representatives of the Botswana Defence Force in support of its effort to revise and update the foundational military legislative authority of Botswana. These discussions included best practices and U.S. experiences related to gender integration; compensation for various employee groups, including recruits and retirees; and nonpunitive measures in the military. 
This was the seventh of eight planned exchanges since 2013, with the final exchange planned to occur prior to when Botswana’s legislature would consider the revised legislation. In Newport, Rhode Island, we observed part of a 3-week military justice resident course held at DIILS that included an exercise on nonjudicial punishment attended by foreign military lawyers from various military services and one civilian official. Additionally, we reviewed theater campaign plans for U.S. Africa Command, U.S. Central Command, U.S. Northern Command, U.S. Pacific Command, and U.S. Southern Command but did not include an analysis of these documents because DOD deems them sensitive. We focused on core rule of law training and defense institution-building assistance for this discussion. We excluded statutorily required human rights training because although DIILS works closely with various entities to plan and deliver this training, the law requires that DOD provide this training pursuant to the authority to build the capacity of foreign security forces as part of the Global Train and Equip Program. We compared these processes with the terms of Presidential Policy Directive 23, which provides policy guidelines for U.S. security sector assistance. To examine the extent to which DOD has considered whether DIILS’s workforce aligns with its mission since fiscal year 2013, we examined DIILS’s organizational structure and work requirements and actions DOD has taken to address DIILS’s organizational needs. Specifically, we reviewed DIILS’s full-time equivalent allocations from fiscal years 2013 through 2016, and we compared these to the number of events DIILS provided in the three assistance types during this timeframe. We worked closely with DIILS officials to identify the events for which after action reports were required during this time period. 
In addition, we reviewed a DIILS official’s analysis of the number of after action reports that were completed, as well as his methodology for completing this analysis. We found that the data were generally reliable for the purposes of our review. We compared DOD’s actions with federal internal control standards and guiding principles articulated in DOD’s Fiscal Years 2013–2018 Strategic Workforce Plan Report. We conducted this performance audit from September 2015 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 3 provides some examples of rule of law assistance that was implemented by Department of Defense (DOD) entities other than the Defense Institute of International Legal Studies (DIILS), but this is not an exhaustive list. Charles Michael Johnson, Jr., (202) 512-7331, or johnsoncm@gao.gov. In addition to the contact named above, Hynek Kalkus (Assistant Director), Jaime Allentuck (Analyst-in-Charge), Kendal Robinson, David Dayton, Neil Doherty, Mark Dowling, and Jill Lacey made key contributions to this report. Gustavo Crosetto and Peter del Toro provided technical assistance.
Rule of law assistance is an important component of U.S. efforts to build the capacity of foreign partners to support international peace and security. DIILS is DOD's lead global resource for providing professional legal education and assistance to foreign military personnel and civilian defense officials on core rule of law issues. Such issues include military justice, anticorruption, and professionalization of the military. In addition, DIILS provides defense institution-building assistance and statutorily required human rights training to foreign partners. In fiscal years 2013 through 2016, DIILS disbursed over $24 million. House Report 114-102 includes a provision for GAO to review DOD's efforts to build partner capacity in the rule of law. This report examines, among other things, the extent to which (1) DIILS has provided rule of law assistance to foreign partners and assessed its quality and (2) DOD has considered whether the size of DIILS's workforce aligns with the scope of its mission. GAO reviewed and analyzed agency funding, planning, and organizational structure documents for fiscal years 2013 through 2016. GAO interviewed U.S. officials in Washington, D.C., and conducted fieldwork in Newport, Rhode Island; Botswana; Germany; and Uganda. Locations were selected on the basis of the nature and timing of assistance. For fiscal years 2013 through 2016, the Defense Institute of International Legal Studies (DIILS) conducted over 500 rule of law events in almost 100 countries, and it assessed the quality of its assistance in a variety of ways. DIILS provides three types of assistance: (1) core rule of law training in the United States and abroad, (2) defense institution-building, and (3) statutorily required human rights training. DIILS takes steps to assess the quality of its assistance, such as by conducting student feedback surveys and real-time classroom assessments of students' learning. Foreign partner recipients of DIILS assistance and U.S. 
officials working to provide it told us they generally held positive views about the quality of DIILS's services. Although the demand for DIILS's assistance has increased since 2013, the Department of Defense (DOD) has not assessed whether the size of DIILS's workforce aligns with the scope of its mission. Federal internal control standards highlight the need for management to conduct reviews at the functional or activity level, including conducting a workforce review, if needed. Since fiscal year 2013, the demand for DIILS's assistance has grown by nearly 50 percent, while its workforce has increased by one full-time equivalent (FTE) staff position. According to DIILS officials, as a result of staffing constraints, DIILS staff have completed 55 percent of required after action reports, which capture lessons learned for future events; expended 55 percent of all defense institution-building funds that DOD has made available since fiscal year 2013; and faced challenges in increasing the capacity of resident courses to help meet demand. According to DOD officials, DOD has declined DIILS's requests for additional FTE staff based on budget considerations and not on an assessment of whether the size of DIILS's workforce aligns with the scope of its mission. Without a clear understanding provided by such an assessment, DOD cannot adequately ensure that DIILS is effectively meeting demands for its training and assistance to foreign military officials, which may contribute to more robust rule of law systems, more accountable governments, and greater respect for human rights. The Secretary of Defense should assess the extent to which the size of DIILS's workforce is aligned with the scope of its mission. DOD concurred with the recommendation.
Within FEMA, the Federal Insurance and Mitigation Administration (FIMA) manages NFIP and receives support from other divisions or offices (see fig. 1). In fiscal year 2013, about 390 FEMA employees, assisted by contractor employees, managed and oversaw NFIP from FEMA’s headquarters in Washington, D.C., and in field locations. FEMA management responsibilities included establishing and updating NFIP regulations, analyzing data to determine flood insurance rates, and training insurance agents and adjusters. As shown in figure 1 above, FIMA comprises three divisions: Risk Analysis, Risk Insurance, and Risk Reduction. Risk Analysis is responsible for flood mapping activities and develops flood mapping policy and guidance. FIMA established its 5-year Risk Mapping Assessment and Planning (Risk MAP) program in 2009 to, among other things, improve the quality of flood data used for mapping and enhance public acceptance of flood maps. FIMA maintains and updates data through Flood Insurance Rate Maps (FIRM) and risk assessments. FIRMs include statistical information such as data for river flow, storm tides, hydrologic/hydraulic analyses, and rainfall and topographic surveys. Risk Analysis staff in the 10 regional offices manage development of flood maps for their geographic areas. Headquarters and regional staff monitor and report flood hazard mapping progress based on program management data provided by FIMA’s three national Production and Technical Services (PTS) contractors and other flood mapping partners. The PTS contractors are private engineering firms working under contract to FIMA and are each responsible for a regional portfolio of flood study projects. The Risk MAP contract is a cost-plus-award-fee (CPAF) contract. Under the contract, FIMA has obligated approximately $410 million from fiscal year 2008 to the end of fiscal year 2013 to the two PTS contractors that we selected for review. 
Risk Insurance works with private insurers to provide flood insurance for property owners and also encourages communities to adopt and enforce floodplain management regulations. Private insurance companies largely are responsible for insurance sales and claims adjustment under FIMA’s Write Your Own (WYO) program. NFIP’s BSA contractor serves as the focal point of support operations for the WYO insurers. The BSA contractor liaises between the government and independent property and casualty insurance companies that issue federally guaranteed NFIP policies. Among the services the BSA contractor provides are financial and statistical reporting based upon data submissions from the WYO companies, development of forms and information related to NFIP, and various data analyses. The current BSA contract went into effect in January 2008 and expired in December 2013. The BSA contract is cost-plus-fixed-fee (CPFF). FIMA had obligated approximately $80 million to the BSA contractor as of the end of fiscal year 2013. Contract management responsibilities for NFIP are shared between OCPO and FIMA divisions. OCPO officials stated that they are primarily involved in the acquisition planning and contract formation phases of FIMA’s typical contracting process that is illustrated in figure 2. FIMA divisions (which, for the NFIP contracts we reviewed, would be either FIMA’s Risk Analysis or Risk Insurance division) are responsible for defining the need for contracting support. Once the need is identified, OCPO works with FIMA divisions on developing statements of work and reporting requirements for contracts. OCPO is represented in the contracting process by a contracting officer, who has the delegated authority to negotiate, enter into, administer, modify, and terminate contracts on behalf of the government. Once the contract is awarded, the contracting officer receives support from the COR during the contract administration phase. 
The division acquiring the contracted goods and services selects a COR from its staff. The responsibilities of the COR include administering and directing daily operations within the scope of the contract, monitoring contractor performance, ensuring that requirements meet the terms of the contract, and partnering with the contracting officer. The COR is authorized to monitor the contract on behalf of the contracting officer. However, the COR is not authorized to make any contractual commitments or changes that may affect the contract price, terms, or conditions without the approval of the contracting officer. Generally, the COR role can be either part-time or full-time, and is often a collateral duty. FEMA has improved its contract management process since our prior reports. In particular, FEMA has developed policies and procedures that address our previous recommendation on implementing a DHS directive for contract management. Additionally, FEMA divisions have developed contract management guidance that follows best practices for contract administration (see app. I for the key best practices used in our review). Lastly, FEMA has developed guidance for performance reporting. FEMA has made progress in establishing or revising its guidance on contract management and oversight since we reported on these issues in 2008 and 2011. To address weaknesses in its oversight and management of acquisitions, we recommended in 2011 that FEMA complete the development and implementation of its revised acquisition process to be consistent with a DHS directive that sets overall policy and structure for acquisition management for DHS agencies. In May 2011, FEMA published its own directive on the review of contracts that was consistent with the departmental directive. FEMA also updated guidance in October 2011 that explains its acquisition process. The updated guidance was intended as a tool for program offices to use when initiating an acquisition. 
In particular, the guidance states that the program office is responsible for developing the acquisition plan that includes a description of requirements. If the program office decides not to define how the contractor is to accomplish the desired results of the contract, it can develop a performance work statement that defines the required results in measurable, mission-related terms absent details about how to do the work. The guidance requires that the performance work statement be accompanied by a performance requirements summary that documents the services and performance standards expected and a quality assurance surveillance plan that defines the agency’s plan for ensuring that the contractor has performed in accordance with the performance standards. The guidance includes a template for the performance work statement and an intranet link to a quality assurance surveillance plan template. In addition, FEMA published a revised version of its COR Handbook in March 2012 in response to a DHS directive. According to FEMA, the handbook is intended to provide practical guidance for CORs as they perform their daily contract administration duties. The handbook incorporates best practices in contract management as described in the OFPP guide. For example, the handbook incorporates language that encourages CORs to develop a contract administration plan along with a quality assurance surveillance plan—both OFPP best practices. Contract administration plans are intended to provide a broad overview of the contract, including areas of risk and concern. Quality assurance plans are meant to be used to evaluate the quality and timeliness of the products and services produced by the contractor, and include performance standards, the methods of assessment, and an assessment schedule. Along with policy development, FEMA has adopted an intranet-based system to centralize and electronically store NFIP contracting-related documents, including monitoring reports. 
In 2008, we recommended that FEMA implement a process to better ensure that CORs submit monitoring reports on time, systematically review these reports, and retain the reports in a quality assurance file. Several FEMA divisions use this intranet-based system. In part to address our 2008 report, FEMA staff in Risk Insurance involved in contract administration began using the system in November 2010. The Risk Insurance division intranet-based system is used by CORs to store and share COR-generated reports. The Risk Analysis division uses the intranet-based system for storing contract documents, particularly contract modifications and task orders submitted by the regional offices. The Risk Insurance and Risk Analysis divisions developed contract management guidance that follows best practices and implements the COR Handbook. More specifically, in December 2012 Risk Insurance finalized the Contracts Management Reference Guide. The guide closely follows the COR handbook and provides Risk Insurance CORs with a high-level overview of FEMA, DHS, and federal acquisition guidance and directs CORs to relevant reference materials. It also follows many of the practices recommended in the OFPP guide for contract administration. For instance, the Contracts Management Reference Guide lists implementation of a contract administration plan and a quality assurance surveillance plan as critical contract administration activities. Furthermore, FEMA’s Risk Insurance division developed additional guidance intended to help ensure consistency in handling the failure of a contractor to meet performance standards. Specifically, Discrepancy Report Procedures guide CORs in such instances. In 2008, we recommended that FEMA develop such guidance after finding that the agency did not coordinate information and actions related to deficiencies for certain contractors. 
The Discrepancy Report Procedures include guidance for the COR to coordinate with the contracting officer to ensure that the contractor is notified and corrective action is taken. FEMA’s Risk Analysis division has guidance for contract management, but took a different approach from the Risk Insurance division for its design and implementation because of the nature of the Risk MAP program. Risk Analysis established a Risk MAP Contracts Council to help ensure coordination, integration, and alignment of contracts across the Risk MAP program. The council comprises senior contracting officials and all CORs in the Risk MAP program. The Risk Analysis division follows FEMA’s COR Handbook, but has not developed further guidance for CORs like the Risk Insurance Contracts Management Reference Guide. Instead, Risk Analysis developed a number of plans that provide general guidance for contract management staff across the Risk MAP program. FEMA’s COR Handbook recommends such planning documents, which also are consistent with best practices as described in the OFPP guide. For instance, the Program Management Plan, which is a narrative summary of the program’s processes and plans, provides overall contract management guidance for Risk MAP. Several other planning documents provide guidance on specific issues including risk management, quality assurance management, product delivery, and changing task orders. Unlike the Risk Insurance division, Risk Analysis has not developed a distinct deficiency report policy. According to officials, the Risk Analysis division follows an award fee plan for all of the related task orders. The fee plan sets performance metrics, such as timeliness and cost efficiency of community data collection, against which the contractor is evaluated, either quarterly or annually. If the performance metrics are not met, a portion of the award fee is to be deducted. 
If a performance problem at the project level becomes persistent and is not addressed by the contractor, Risk Analysis is to file a deficiency report similar to that used by Risk Insurance. We discuss these award fee plans and deficiency reports in greater detail in the next section. DHS and FEMA have developed guidance to implement federal requirements to report their assessment of contractor performance in a government-wide database—the Contractor Performance Assessment Reporting System (CPARS). In July 2009, the Office of Management and Budget (OMB) required executive branch agencies to report assessment information about contractor performance into a central repository of contractor performance information, called the Past Performance Information Retrieval System. We previously found that DHS was transitioning to using CPARS to comply with this requirement. The primary purpose of CPARS is to better ensure that current, complete, and accurate information on contractor performance is available for use in procurement source selections. The CPARS guidance states that each assessment must include detailed and complete statements about the contractor’s performance and be based on objective data (or measurable, subjective data when objective data are not available) supported by program and contract/task order management data. Such performance assessment can significantly reduce the risk to the government on future awards. As a result, OMB urges agencies to ensure that all critical performance assessments are made available in a timely manner. Both DHS and FEMA have incorporated this reporting requirement into acquisition regulations and program office guidance. For example, the regulations state that assessments should clearly, objectively, and completely describe the contractor’s performance in the narrative statement, in sufficient detail to justify the rating. 
Furthermore, the narratives should include a level of detail and documentation that provides evidence and establishes a basis for the assigned rating, an explanation of how problems were resolved, and the extent to which solutions were effective. FEMA developed standard operating procedures for CPARS that establish appointments and responsibilities for FEMA staff responsible for entering information into the system. In particular, the procedures state that the branch chiefs within each program office are ultimately responsible for oversight of their office’s CPARS program. Also, the COR is responsible for oversight of the assigned contract and for assisting the contracting officer with drafting the CPARS assessment. The standard operating procedures mandate basic CPARS training for CORs. The FEMA COR Handbook and Risk Insurance Contracts Management Reference Guide include recording contractor performance in CPARS as a critical contract administration activity. The Handbook states that CPARS evaluations must be completed annually and at the conclusion of each contract. In particular, contracts lasting 2 or more years are usually to be evaluated annually, and evaluations should be performed at least 120 days before the exercise of an option to extend the contract. The Contracts Management Reference Guide states that CPARS evaluations are due before the end of a task order, and includes a reference to FEMA’s standard operating procedures for CPARS. For the three largest contracts we selected, FEMA generally followed its procedures for contract management in the three key areas that we identified for review—COR training and ethics requirements, performance measures, and ongoing monitoring and reporting (see table 1). But we found some opportunity for improvement related to quality assurance for one of the contracts and to consistent reporting of contractor performance in CPARS for the other two contracts. 
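The COR Handbook’s 120-day timing rule amounts to simple date arithmetic. The sketch below illustrates how a COR could compute the latest completion date for an annual assessment; the option exercise date used in the example is hypothetical and not drawn from the contracts we reviewed.

```python
# Illustrative sketch of the CPARS timing rule in FEMA's COR Handbook:
# annual evaluations should be completed at least 120 days before an
# option to extend the contract is exercised. The date is hypothetical.
from datetime import date, timedelta

def cpars_due_date(option_exercise_date: date) -> date:
    """Latest date an annual CPARS assessment should be completed,
    per the 120-day rule."""
    return option_exercise_date - timedelta(days=120)

# If a contract option were to be exercised on January 1, 2013:
print(cpars_due_date(date(2013, 1, 1)))  # 2012-09-03
```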
For each of the three contractors we reviewed, FEMA’s COR training and certification program requirements were met at the time of appointment. In 2010, the DHS Inspector General recommended that FEMA ensure NFIP staff received annual training on the roles and responsibilities of the contracting officer and COR. Because of the size and complexity of the three contracts we reviewed, the CORs had to maintain the highest level of certification. Initial and ongoing training requirements for FEMA CORs are clearly detailed in agency guidance. Initially, staff need 40 hours of training to assume basic COR responsibilities, and thereafter 40 hours of refresher training every 2 years to maintain certification. Based upon data FEMA provided, each of the CORs completed training requirements at the time of appointment, and also satisfactorily documented their completed refresher training requirements for the contracts we reviewed. FEMA also has implemented procedures that provide contracting officers with detailed guidance during each phase of the procurement process. FEMA has continued to update the guidance, and has made it available on its intranet. For the contracts that we reviewed, we found documentation that the CORs had fulfilled the ethics requirement for 2013. The DHS Inspector General recommended in March 2010 that all FEMA employees involved in the procurement process receive annual procurement-specific ethics training and file financial disclosure forms. Initial ethics and procurement ethics training courses are included in the COR curriculum, and completion was validated by the award of COR certification. Annual ethics training is mandatory for FEMA employees. Federal regulations require CORs and contracting officers to file a financial disclosure form. FEMA’s Office of the Chief Counsel requires FEMA CORs and contracting officers to file a financial disclosure form within 30 days of being appointed to a new position and annually thereafter. 
Supervisors review financial disclosures for potential conflicts of interest that could arise from stock holdings, investments, or personal relationships with a contractor. Any potential conflicts discovered by the supervisor during their review are referred to the chief counsel’s office for further determination. The filer and supervisor are notified of the chief counsel’s determination, which may include restrictions or precautionary measures that should be taken. Based upon our review of data provided by FEMA, no disqualifying conflicts of interest were reported for 2013 by the chief counsel’s office for any CORs or contracting officers working on the contracts we reviewed. For each of the three contracts we reviewed, FEMA had established performance measures that were relevant to the contracts’ objectives and goals and that were measurable. According to federal government internal control standards, appropriate and relevant performance measures and indicators must be established and continually compared against program goals and objectives. The standards also recommend incorporating performance data into operating reports that inform management of inaccuracies or internal control problems. Our review of contract documentation for each of the three contracts that we selected indicated FEMA has established performance measures that met these standards. FEMA periodically compared and analyzed actual performance data against goals for each of the three contracts through operating reports, which would allow management to review the status of deliverables and milestones and be aware of inaccuracies or exceptions that could indicate internal control problems. The BSA contract, which FEMA awarded in 2008, contains clearly defined and appropriate performance areas that link to contract objectives. Many of the performance areas measure whether the contractor rendered the service within the required time frames. 
Specifically, the contract objectives cover services such as coordinating disaster response; maintaining the WYO Financial Control Plan; analyzing actuarial, claims, and underwriting data; coordinating communication; and maintaining NFIP information technology systems. The performance areas used to measure contractor performance address the timeliness of disaster response, the management of materials, such as manuals and rate tables, and the operation of the NFIP accounting system. The performance areas also included the timely submission of program management plans and communication services. A contractor that did not meet a performance requirement could have its annual target fee reduced. For example, the disaster response performance area required that the contractor deploy disaster teams within 24 hours of notice by the COR. If the contractor did not deploy teams within that time, the COR could reduce the annual target fee by one-fifth. FEMA staff stated that they had never reduced the fee for the BSA contractor over the life of the contract we reviewed. Risk Insurance has tracked the performance data for the BSA contract through monthly monitoring reports submitted by the COR and through operating reports, referred to as “dashboard reports” (see fig. 3). The monthly monitoring reports, which the COR compiles using records of activities and interactions from the reporting month, as well as technical progress reports submitted by the contractor, tracked the accomplishment of each performance area established in the contract. Using information from the contractor’s financial reports, the COR also submits monthly dashboard reports, which summarize financial and budgetary data, the status of the contract’s schedule, and any risk issues, such as the need for additional field and office support during Superstorm Sandy. By identifying risk areas, the dashboard reports are intended to make management aware of any issues. 
As shown in figure 3, the Program Management Office reviewed the monthly reports submitted by the COR for completeness and conducted oversight to identify any issues requiring management action. The contracting officer for the BSA contract said the COR copied her on correspondence with the contractor. We reviewed the COR monthly reports for April 2012 to August 2013 and found that the BSA contractor had met all the identified performance measures for each month. Additionally, we reviewed operating reports for the BSA contract from January 2012 until December 2012 and found that the BSA program was consistently within budget and schedule constraints during the period reviewed. Similarly, the two Risk MAP contracts we reviewed contained clearly defined and appropriate performance metrics that were linked to the contract objectives and program goals. FEMA established overall program objectives for the Risk MAP program, and then established contractor-specific performance measures through the award fee plan. The overall objectives of the Risk MAP program include (1) producing Risk MAP products that are aligned with user needs; (2) strengthening partnerships to develop improved understanding of flood risk and hazard mitigation; and (3) promoting the use of Risk MAP products at the local level for loss reduction. For the contractor-specific performance measures established in the award fee plans, FEMA set similar performance metrics for each mapping contractor because they conduct similar work in the various FEMA regions. The award fee plan for both contracts contained metrics related to Risk MAP deployment and mitigation action, timeliness of map packages and percentage of communities adopting maps, timeliness and cost efficiency of community data collection, customer satisfaction, and the quality of the administrative process. The plan also outlined performance metrics for regional task orders. 
Additionally, the award fee plan described the performance assessment process and established the award amount to be earned based on the contractors’ performance. More specifically, for each of the performance metrics, the plan included standards for performance, the award-fee amounts available, and descriptions of each standard. As illustrated in figure 4, the COR reviewed quarterly self-assessments submitted by the contractors to ensure continuous comparison of performance to goals. The COR also reviewed monthly progress reports, which contained performance data tied to contract objectives. As part of the performance assessment process, the COR and the contractor then reached agreement on the COR’s assessment. The COR then submitted the assessment to the Award Fee Review Board, which consisted of voting members familiar with the mapping program. The Award Fee Review Board recommended an amount to be awarded, and the fee-determining official approved the amount. FEMA officials stated that the contractors usually received between 90 and 95 percent of the available award fee. According to the contracting officer for the two mapping contracts, the contracting officers assigned to mapping contracts meet weekly with the CORs to discuss upcoming milestones. Through our review of award fee determinations by FEMA officials, we found that the two Risk MAP contractors generally collected most of the award fees available during fiscal year 2012. In addition, FEMA’s Risk Analysis division management regularly reviewed operating reports, referred to as Joint Program Reviews (JPR), to monitor the performance of the Risk MAP program (see fig. 4). The contractors submitted the monthly technical reports to the project management contractor, which then compiled the reports for the JPRs and submitted them to Risk MAP management for review. 
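The award fee determination described above can be thought of as a weighted scoring computation over the plan’s performance metrics. The sketch below is purely illustrative: the metric names echo those listed earlier, but the weights, scores, and fee pool are assumptions, not values taken from the award fee plans.

```python
# Hypothetical sketch of an award fee determination: each performance
# metric carries a share (weight) of the available fee pool, and the
# review process assigns a score between 0 and 1 per metric. All weights,
# scores, and dollar figures below are illustrative assumptions.

def earned_award_fee(available_fee: float,
                     weighted_scores: dict[str, tuple[float, float]]) -> float:
    """weighted_scores maps metric name -> (weight, score in [0, 1]).
    Weights must sum to 1.0."""
    total_weight = sum(w for w, _ in weighted_scores.values())
    assert abs(total_weight - 1.0) < 1e-9, "metric weights must sum to 1"
    return available_fee * sum(w * s for w, s in weighted_scores.values())

scores = {
    "timeliness of map packages": (0.30, 0.95),
    "community data collection":  (0.25, 0.90),
    "customer satisfaction":      (0.25, 1.00),
    "administrative quality":     (0.20, 0.90),
}
# With an assumed $100,000 fee pool, these scores earn 94 percent of the
# available fee, consistent with the 90-95 percent range FEMA cited.
print(earned_award_fee(100_000, scores))
```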
We reviewed fiscal year 2012 JPRs for the mapping contractors and found that the JPRs were organized by program objectives, and contractor performance metrics provided data on progress made in meeting the objectives. The reviews were produced monthly for headquarters contracts and quarterly for regional contracts, based on data contained in an earned value management system, into which the contractors enter deliverables, and on monthly technical reports. The reviews also summarized performance results for all contractors and showed results for individual contractors. Since we reported on FEMA’s monitoring of NFIP contracts in 2008, the agency has improved its ongoing monitoring of contractors, based on our review of the three largest contracts. In 2008 we found that FEMA staff did not consistently follow key monitoring procedures, including maintaining a quality assurance file and consistently implementing performance deficiency procedures. We recommended that FEMA implement a process to ensure that monitoring reports are submitted, reviewed, and maintained in a quality assurance file, and implement written guidance on how to consistently handle a contractor’s failure to meet performance standards. Contract monitoring best practices that we identified recommend that agencies conduct ongoing monitoring by maintaining contract files, fully documenting modifications, developing contract administration plans, developing quality assurance surveillance plans, consistently implementing deficiency guidance, and conducting annual performance assessments (see app. I). For each of the three contracts we reviewed, we examined (1) the completeness of the quality assurance files maintained by the CORs, (2) documentation of modifications made to the contracts, and (3) deficiency reports issued related to the contracts since 2011. We found that overall FEMA maintained the appropriate files for the three contracts. 
Specifically, we found that the files contained proper documentation on the modifications made to the contracts. The BSA contract had few modifications and included changes such as the assignment of new contracting officers, the revision of the performance work statement, and extension of the terms of the contract. The two mapping contracts have regional subcontracts, which are set up through task orders. The mapping contracts had multiple modifications and task orders, with varying degrees of complexity. For example, a contract was modified to allow a contractor to adopt new mapping technology, and another contract was modified to adjust reporting deadlines. FEMA actively used deficiency reports to identify and resolve performance problems for the three contracts we reviewed. For example, in 2011, FEMA issued deficiency reports for each of the three contracts. The purpose of deficiency reports is to record when a contractor does not meet a performance objective. FEMA filed one deficiency report for the BSA contract in 2011 and documented the contractor’s corrective action to resolve the issue. FEMA also issued three deficiency reports in 2011 for the mapping contracts we reviewed. One contractor received two deficiency reports and provided corrective action plans that FEMA accepted. The other contractor received one deficiency report and provided a corrective action plan. We found that FEMA followed the respective divisions’ deficiency report guidance and resolved the performance problems. FEMA officials stated that they have not issued any deficiency reports related to these contracts since 2011. Although FEMA has generally improved its ongoing monitoring of contractors, the agency did not develop a quality assurance surveillance plan for one of the contracts. FEMA developed surveillance plans for the two mapping contracts we reviewed, but agency officials did not develop one for the BSA contract. 
As discussed earlier, quality assurance surveillance plans are required for performance-based contracts according to regulation, agency guidance, and best practices. According to these sources, the plan should be prepared in conjunction with the statement of work and should include, among other things, the performance standards, the methods of assessment, an assessment schedule, a description of corrective actions to be taken if required performance standards are not met, and incentives to be applied. FEMA officials stated that they relied on a brief reference to quality assurance in the BSA contract and other guides rather than develop a detailed surveillance plan. Specifically, the current BSA contract mentions a “Government Quality Assurance Plan” and includes a performance requirements summary with general performance metrics. The contract, however, does not specify how often deliverables would be monitored and evaluated. Risk Insurance division officials also stated that they used the general Contracts Management Reference Guide as the quality assurance plan for the BSA contract. However, the Contracts Management Reference Guide was intended to provide an overview of the COR’s responsibilities and activities, and does not discuss the method of assessment the COR will undertake, how often it will be done, or actions the COR will take if performance standards are not met. The guide itself calls for the development of a quality assurance surveillance plan. In contrast, our 2008 report found that FEMA had a quality assurance surveillance plan for a past BSA contract. Further, FEMA developed a detailed quality assurance surveillance plan for the two mapping contracts. For example, the quality assurance surveillance plan for the mapping contracts outlined management roles and responsibilities, quality standards, and the process for ensuring quality control. 
Quality assurance surveillance plans that focus on the quality, quantity, and timeliness of performance outputs would likely help FEMA avoid challenges in monitoring the contractor’s performance. Additionally, without detailed quality assurance surveillance plans, the expectations of the COR and the contractor can become misaligned during performance assessments. For example, in 2010 and 2011, FEMA identified persistent issues with the quality and timeliness of the BSA contractor’s deliverables and faced challenges in resolving those issues, which may have been avoidable if a quality assurance surveillance plan had been developed and used. FEMA officials acknowledged the need to develop quality assurance surveillance plans. They also stated that various actions are under consideration that may help ensure that quality assurance surveillance plans are in place for future contracts. However, FEMA officials did not provide specifics about these actions or a date for when they would determine which actions, if any, to implement. Until FEMA develops and implements specific actions, it is unclear whether FEMA has effective internal controls in place to ensure that quality assurance surveillance plans are developed for future FEMA contracts. FEMA did not report performance assessments in CPARS for two of the three contracts we reviewed. While FEMA filed CPARS reports for the BSA contract for 2011 and 2012, the agency did not file any for the two mapping contracts. As discussed earlier in the report, DHS uses CPARS reports to assess and report performance on previous contracts, and applicable regulations and contract management guidance require entry of contract performance assessments in CPARS. The Risk MAP contracts we reviewed were multiyear contracts active in fiscal years 2011, 2012, and 2013. 
Per the guidance in FEMA’s COR Handbook, the CORs should have reported performance assessments at the end of the contract period in 2011 and 2012, at least 120 days prior to exercising the next option. Agency officials acknowledged that the CORs were supposed to report such information, but had not done so. FEMA staff responsible for the mapping contracts stated that CPARS assessments were a low priority at that time, but they have since acknowledged the importance of CPARS and stated that completing past assessments is now a higher priority. As discussed earlier, internal control standards require that management emphasize to program managers their responsibility for internal controls and ongoing monitoring. Not submitting CPARS assessments is disadvantageous to both the contractor and the government. Receiving a positive CPARS assessment can enhance a contractor’s reputation when bidding on future contracts, and as such, the assessments provide an incentive for the contractor to perform as expected. For example, FEMA officials stated that for the BSA contract, the contractor’s performance improved after receiving a low rating in CPARS in 2011. Furthermore, not submitting assessments reduces the information available to other acquisition professionals in government when determining contractor selection. When selecting contractors, the FAR requires agencies to consider past performance as one assessment factor in most competitive procurements. FEMA relies heavily on contractors to accomplish the objectives of NFIP. FEMA has made progress in developing contract management guidance that addresses weaknesses identified in our past reports and that is consistent with best practices in contract management. However, the inconsistent application of the guidance that we identified in our review of the three largest contracts indicated that continued attention to internal controls is warranted. 
Specifically, FEMA did not develop a quality assurance surveillance plan for the current BSA contract, which is critical to assessing the quality and timeliness of the products and services produced by the contractor. Regulations and agency guidance direct that such plans be used. FEMA officials stated that they are considering various options to ensure that the plans are in place for future contracts. However, without detailed surveillance plans, the expectations of the COR and the contractor can be misaligned during performance evaluations, and evaluations may not focus on the quality, quantity, and timeliness of performance outputs. Moreover, by taking actions to determine the extent to which quality assurance surveillance plans have not been prepared and the reasons why, FEMA would be better positioned to take action, as needed, to address these reasons and help improve contract management, which we have identified as a high-risk area. In addition, the CORs for the Risk MAP contracts we reviewed have not entered ratings for the contractors into CPARS, the contractor assessment reporting database used by DHS that feeds into a government-wide contractor past performance database. CPARS data are key to informing federal contracting decisions, and applicable regulations and agency guidance require the use of CPARS to assess contractor performance. Not submitting CPARS assessments disadvantages contractors and the federal government because it reduces the information available to federal acquisition staff when determining contractor selection. As with the surveillance plan, by determining the extent to which this situation exists, identifying the root cause, and implementing steps to address the root cause, as appropriate, FEMA can help ensure that its—and other agencies’—contracting decisions and management draw on complete, relevant, and timely performance information. 
While FEMA has begun to consider how to address the lack of a quality assurance surveillance plan for the BSA contract and to enter Risk MAP contractor performance assessments into CPARS, we are making two recommendations aimed at improving monitoring and reporting of contractor performance. To help ensure FEMA’s contract monitoring provides a consistent, structured, and transparent method to assess contractor services, the FEMA Administrator should determine the extent to which quality assurance surveillance plans have not been developed for FEMA contracts; identify the reasons why quality assurance surveillance plans were not developed; and develop additional actions as needed to address the reasons to help ensure that quality assurance surveillance plans are developed for its future awards. To help ensure that federal contracting officials have complete and timely information about the performance of contractors, the FEMA Administrator should determine the extent to which CPARS assessments have not been completed for FEMA contracts; identify the reasons why CPARS assessments were not completed; and develop additional actions as needed to address the reasons to help ensure that assessments (ratings) for FEMA contractors are reported in CPARS on a timely and consistent basis. We provided a draft of this report to FEMA for review and comment. In written comments, FEMA concurred with our recommendations. FEMA’s comments are reprinted in appendix II. FEMA also provided technical comments, which we incorporated in the report as appropriate. In response to our first recommendation, FEMA informed us that they have reviewed all active NFIP contracts and have directed all COR staff that do not have quality assurance plans in place to develop such plans. 
FEMA also noted that quality assurance surveillance plans are required by DHS acquisition guidelines and stated that it is working rapidly to improve compliance in this area, with an estimated completion date of January 31, 2014. In response to our second recommendation, FEMA stated that reductions in COR staff for the Risk MAP program left it unable to complete CPARS assessments in a timely manner. FEMA stated that it is in the process of identifying additional COR staff. In the meantime, CORs in the Risk Analysis division have been working to complete outstanding assessments and planned to complete them by December 31, 2013. FEMA also stated that, as part of its governance plan, it plans to document the CPARS assessment process and commit to completing future reports within 30 days of receiving notification that assessments are due. FEMA’s actions in response to our recommendations are steps in the right direction toward improving compliance with contract oversight requirements. Effective controls are necessary to help ensure that FEMA fully implements required monitoring and reporting of contractor performance. For instance, while FEMA is in the process of developing quality assurance surveillance plans for all contracts, as we also recommended, FEMA will need to make certain that it has determined the reason(s) that a quality assurance surveillance plan had not been developed for the BSA contract to provide greater assurance that such plans are prepared as required going forward. We are sending copies of this report to the Secretary of Homeland Security. The report is also available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix III. Section 100231 of the Biggert-Waters Flood Insurance Reform Act of 2012 mandates GAO to review the three largest contractors used in administering the National Flood Insurance Program (NFIP) of the Federal Emergency Management Agency (FEMA). We examined (1) FEMA’s progress in updating its process for monitoring NFIP contractors since our prior reports, and (2) the extent to which FEMA followed its monitoring process for the largest NFIP contractors. To determine FEMA’s progress in updating its process for monitoring NFIP contractors, we analyzed Department of Homeland Security (DHS) and FEMA policies, directives, guidance, and other materials related to contract management, including the use of a government-wide contractor assessment database and FEMA’s electronic document management system. Additionally, we compared FEMA’s process for monitoring NFIP contractors with relevant legislation, Standards for Internal Control in the Federal Government, contract management best practices, and related reports on NFIP from GAO and the DHS Office of Inspector General. We assessed the extent to which FEMA followed its monitoring procedures for three large NFIP contracts—the Bureau and Statistical Agent (BSA) contract and two flood mapping contracts. Using data provided by the FEMA Office of the Chief Procurement Officer, we identified the 10 largest contractors for NFIP based on the amount of funds obligated to NFIP contractors over the previous 5 fiscal years (from 2008 through 2012). We verified the accuracy of the data by comparing them with data in the DHS Service Contract Inventories database and the Federal Procurement Data System. We determined that the contracting cost data provided by the Office of the Chief Procurement Officer were sufficiently reliable for our purposes. 
To narrow the number of contractors selected for review, we determined whether they had an active NFIP contract during the course of our review. Additionally, we selected contractors responsible for administering different NFIP functions, such as flood mapping and insurance management. As shown in table 2, the dollars obligated to the three contractors we selected represented approximately 45 percent ($405 million out of a total of $909 million) of the overall active contracting dollars obligated during the 5-year period from fiscal years 2008 through 2012. To assess the extent to which FEMA followed best practices and procedures for contract management, we compared FEMA’s contract management practices to criteria outlined in the following sources: Standards for Internal Control in the Federal Government; A Guide to Best Practices for Contract Administration; FEMA, Contracting Officer’s Representative Handbook; FEMA Risk Insurance Division, Contracts Management Reference Guide; FEMA Risk Insurance Division, Discrepancy Report Procedures; and FEMA Risk Analysis Division, Risk MAP Award Fee Plan. We grouped key contract management processes into three key areas using the sources listed above: training and ethics requirements for contracting officer’s representatives (COR), performance measures, and ongoing monitoring and reporting. As part of our assessment, we reviewed relevant plans and documentation and interviewed FEMA staff. We determined whether the information was fully present, partially present, or not present, and recorded the results by marking each item “yes,” “partially,” “no,” or “not applicable.” The following table identifies the specific best practices and procedures that we used and our evaluation of the three contracts we reviewed: the BSA contract and the two Risk Mapping, Assessment, and Planning (Risk MAP) contracts. 
For each of the selected contracts, we reviewed the statements of work, monthly contractor monitoring reports, and discrepancy reports. We collected available data from FEMA and conducted interviews with representatives from FEMA on their contract management roles and responsibilities. We also consulted with representatives of DHS’s Office of the Inspector General and interviewed the contractors we selected for further insight on the extent to which FEMA followed monitoring policies and procedures. We conducted this performance audit from January 2013 through January 2014, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Harry Medina, Assistant Director; Philip Curtin; Juliann Gorse; Josie Benavidez; William R. Chatlos; Julia Kennon; Marc Molino; Barbara Roesmann; Jessica Sandler; and William T. Woods, made key contributions to this report.
In operating NFIP, FEMA spends hundreds of millions of dollars annually on contractors that perform critical functions. The Biggert-Waters Flood Insurance Reform Act of 2012 mandates GAO to review the three largest contractors used in administering NFIP. In prior reports, GAO found problems with FEMA's oversight of contractors responsible for performing key NFIP functions. This report examines (1) FEMA's progress in updating its process for monitoring NFIP contractors since GAO's prior reports, and (2) the extent to which FEMA followed its monitoring process for the largest NFIP contractors. To address these objectives, GAO analyzed FEMA data on funds obligated to contractors from fiscal years 2008 through 2012, reviewed information from FEMA on contract management policies and procedures, and assessed data covering fiscal years 2011 to 2013 on the implementation of these policies and procedures as they pertained to the three largest contractors. GAO also interviewed FEMA contracting staff and contractors. The Federal Emergency Management Agency (FEMA) has made progress in improving its processes for monitoring NFIP contracts since GAO last reported on these issues in 2008 and 2011. For example, GAO recommended in 2011 that FEMA complete the development and implementation of its revised acquisition process to be consistent with a Department of Homeland Security (DHS) directive. FEMA updated its contract management guidance and revised its handbook for contracting officer's representatives to be consistent with DHS directives. The updated handbook also contained many of the elements identified in a federal guide to best practices for contract administration. Furthermore, the FEMA division that manages the National Flood Insurance Program (NFIP) developed a contract management reference guide that followed FEMA's handbook and federal best practices guidance. 
With some exceptions, FEMA largely followed its contract monitoring procedures for the three largest NFIP contractors GAO reviewed. For example, FEMA ensured that relevant staff overseeing selected contracts received appropriate training. Also, FEMA periodically compared and analyzed actual performance data against goals for each of the three contracts through operating reports, which would allow management to review the status of deliverables and milestones and be aware of inaccuracies or exceptions that could indicate internal control problems. However, FEMA did not develop a quality assurance surveillance plan for one of the contractors--a best practice and a key requirement identified in regulations and guidance. In 2010 and 2011, FEMA identified persistent issues with the contractor's deliverables, including quality and timeliness, and faced challenges in resolving those issues, which might have been avoidable if a quality assurance surveillance plan had been developed and used. FEMA officials stated that they are considering options to ensure that the plans are in place for future contracts, but did not provide specifics on those options or when they plan to implement them. Without detailed quality assurance surveillance plans, the expectations of the agency and the contractor can be misaligned during performance evaluations. Separately, for two of the contracts FEMA staff did not enter performance evaluations in the Contractor Performance Assessment Reporting System (CPARS), a database DHS uses to record assessments of performance of government contractors. Federal and DHS regulations and FEMA contract management guidance require entry of contract performance information in CPARS within certain time frames. By not reporting such information, FEMA disadvantaged the contractors and the government by not providing data that could be used in evaluating the contractors for future contract awards. 
For instance, receiving a positive CPARS assessment can enhance a contractor's reputation when bidding on future contracts, and as such, the assessments provide an incentive for the contractor to perform as expected. FEMA officials have acknowledged the issue. By determining the extent to which performance evaluations have not been entered into CPARS for its contracts, identifying the reasons why, and addressing those reasons, as needed, FEMA can help ensure that its--and other agencies'--contracting decisions and management draw on complete, relevant, and timely performance information. To improve monitoring and reporting of contractor performance, we are recommending that FEMA (1) determine the extent to which quality assurance surveillance plans and CPARS assessments have not been prepared, (2) identify the reasons why, and (3) take steps, as needed, to address those reasons. FEMA concurred with GAO's recommendations.
DOD faces a number of longstanding and systemic challenges that have hindered its ability to achieve more successful acquisition outcomes—obtaining the right goods and services, at the right time, at the right cost. These challenges include addressing the issues posed by DOD’s reliance on contractors, ensuring that DOD personnel use sound contracting approaches, and maintaining a workforce with the skills and capabilities needed to properly manage the acquisitions and oversee contractors. The issues encountered in Iraq and Afghanistan are emblematic of these systemic challenges, though their significance and effect are heightened in a contingency environment. Our concerns about DOD’s acquisition of services, including the department’s reliance on contractors and the support they provide to deployed forces, predate the operations in Iraq and Afghanistan. We identified DOD contract management as a high-risk area in 1992, and since then we have continued to identify a need for DOD to better manage services acquisitions at both the strategic and individual contract levels. Similarly, in 1997 we raised concerns about DOD’s management and use of contractors to support deployed forces in Bosnia. We have issued a number of reports on operational contract support since that time, and our recent high-risk update specifically highlighted the need for increased management attention to address operational contract support. Contractors can provide many benefits, such as unique skills, expertise, and flexibility to meet unforeseen needs, but relying on contractors to support core missions can place the government at risk of transferring government responsibilities to contractors. In 2008, we concluded that the increased reliance on contractors required DOD to engage in a fundamental reexamination of when and under what circumstances it should use contractors versus civil servants or military personnel. 
Earlier this year, we reported that the department lacked good information on the roles and functions fulfilled by contractors. Our work has concluded that DOD’s reliance on contractors is still not fully guided by either an assessment of the risks that using contractors may pose or a systematic determination of which functions and activities should be contracted out and which should be performed by civilian employees or military personnel. The absence of systematic assessments of the roles and functions that contractors should perform is also evident in contingency environments. For example, in June 2010 we reported that DOD had not fully planned for the use of contractors in support of operations in Iraq and Afghanistan and needed to improve planning for operational contract support in future operations. In addition, we reported that while U.S. Forces-Iraq had taken steps to identify all the Army’s Logistics Civil Augmentation Program (LOGCAP) contract support needed for the drawdown in Iraq, it had not identified the other contractor support it may need. We found that the May 2009 drawdown plan had delegated responsibility for determining contract support requirements to contracting agencies rather than to operational personnel. However, DOD contracting officials told us that they could not determine the levels of contractor services required or plan for reductions based on those needs because they lacked sufficient, relevant information on requirements for contractor services during the drawdown. Similarly, for Afghanistan, we found that despite the additional contractors that would be needed to support the troop increase, U.S. Forces-Afghanistan was engaged in very little planning for contractors, with the exception of planning for the increased use of LOGCAP. Further, we have reported on limitations in DOD’s ability to track contractor personnel deployed with U.S. forces. 
In January 2007, DOD designated the Synchronized Predeployment and Operational Tracker (SPOT) as its primary system for tracking data on contractor personnel deployed with U.S. forces. SPOT was designed to account for all U.S., local, and third-country national contractor personnel by name and to contain a summary of services being provided and information on government-provided support. Our reviews of SPOT, however, have highlighted shortcomings in the system’s implementation in Iraq and Afghanistan. For example, we found that varying interpretations by DOD officials as to which contractor personnel should be entered into the system resulted in SPOT not presenting an accurate picture of the total number of contractor personnel in Iraq or Afghanistan. In addition, we reported in 2009 that DOD’s lack of a departmentwide policy for screening local or third-country nationals—who constitute the majority of DOD contractor personnel in Iraq and Afghanistan—poses potential security risks. We are currently assessing DOD’s process for vetting firms that are supporting U.S. efforts in Afghanistan. Regarding planning for the use of contractors in future operations, since February 2006 DOD guidance has called for the integration of an operational contract support annex—Annex W—into certain combatant command operation plans, if applicable to the plan. However, 4 years later we reported that of the potential 89 plans that may require an Annex W, only 4 operation plans with Annex Ws had been approved by the department. As a result, DOD risks not fully understanding the extent to which it will be relying on contractors to support combat operations and being unprepared to provide the necessary management and oversight of deployed contractor personnel. Moreover, the combatant commanders are missing an opportunity to fully evaluate and react to the potential risks of reliance on contractors. 
While the strategic level defines the direction and manner in which an organization pursues improvements in services acquisition, it is through the development, execution, and oversight of individual contracts that the strategy is implemented. Keys to doing so are having clearly defined and valid requirements, a sound contract, and effective contractor management and oversight. In short, DOD, like all organizations, needs to assure itself that it is buying the right thing in the right way and that doing so results in the desired outcome. Our work over the past decade identified weaknesses in each of these key areas, whether for services provided in the United States or abroad, as illustrated by the following examples: In June 2007, we reported that DOD understated the extent to which it used time-and-materials contracts, which can be awarded quickly and adjusted when requirements or funding are uncertain. We found few attempts to convert follow-on work to less risky contract types and found wide discrepancies in DOD’s oversight. That same month we also reported that DOD personnel failed to definitize—or reach final agreement on—contract terms within required time frames in 60 percent of the 77 contracts we reviewed. Until contracts are definitized, DOD bears increased risk because contractors have little incentive to control costs. We then reported in July 2007 that DOD had not completed negotiations on certain task orders in Iraq until more than 6 months after the work began and after most of the costs had been incurred, contributing to its decision to pay the contractor nearly all of the $221 million questioned by auditors. We subsequently reported in 2010 that DOD had taken several actions to enhance departmental insight into and oversight of undefinitized contract actions; however, data limitations hindered DOD’s full understanding of the extent to which they are used. 
As early as 2004, we raised concerns about DOD’s ability to effectively administer and oversee contracts in Iraq. We noted that effective contract administration and oversight remained challenging in part because of the continued expansion of reconstruction efforts, staffing constraints, and the need to operate in an insecure and threatening environment. In 2008, we reported that the lack of qualified personnel hindered oversight of contracts to maintain military equipment in Kuwait and provide linguistic services in Iraq and questioned whether DOD could sustain increased oversight of its private security contractors. During our 2010 visits with deployed and recently returned units, we found that units continue to deploy to Afghanistan without designating contracting officer’s representatives beforehand and that those representatives often lacked the technical knowledge and training needed to effectively oversee certain contracts. Several units that had returned from Afghanistan told us that contracting officer’s representatives with no engineering background were often asked to oversee construction projects and were unable to ensure that the buildings and projects they oversaw met the technical specifications required in the drawing plans. We are currently assessing the training on the use of contract support that is provided to military commanders, contracting officer’s representatives, and other nonacquisition personnel before they deploy. Underlying the ability to properly manage the acquisition of goods and services is having a workforce with the right skills and capabilities. DOD recognizes that the defense acquisition workforce, which was downsized considerably through the 1990s, faces increases in the volume and complexity of work because of increases in services contracting, ongoing contingency operations, and other critical missions. 
For example, while contract spending dramatically increased from fiscal years 2001 through 2008, DOD reported that its acquisition workforce decreased by 2.6 percent over the same period. In April 2010, DOD issued an acquisition workforce plan that identified planned workforce growth, specified recruitment and retention goals, and forecasted workforce-wide attrition and retirement trends. As part of that plan, DOD announced that it would increase the size of two oversight organizations—the Defense Contract Audit Agency and the Defense Contract Management Agency—over the next several years to help reduce the risk of fraud, waste, and abuse in DOD contracts. However, we reported in September 2010 that DOD had not completed its assessment of the critical skills and competencies of its overall acquisition workforce and that it had not identified the funding needed for its initiatives until the conclusion of our review. The current budget situation raises questions as to whether DOD will be able to sustain its projected workforce growth and related initiatives. We are currently reviewing the Defense Contract Management Agency’s capacity for oversight and surveillance of contracting activity domestically in light of its role in contingency operations. DOD has recognized the need to take action to address the challenges it faces regarding contract management and its reliance on contractors, including those related to operational contract support. Over the past several years, the department has announced new policies, guidance and training initiatives, but not all of these actions have been implemented and their expected benefits have not yet been fully realized. While these actions are steps in the right direction, we noted in our February 2011 high-risk update that to improve outcomes on the billions of dollars spent annually on goods and services, sustained DOD leadership and commitment are needed to ensure that policies are consistently put into practice. 
Specifically, we concluded that DOD needs to take steps to strategically manage services acquisition, including defining and measuring against desired outcomes, and developing the data needed to do so; determine the appropriate mix, roles, and responsibilities of contractor, federal civilian, and military personnel; assess the effectiveness of efforts to address prior weaknesses with specific contracting arrangements and incentives; ensure that its acquisition workforce is adequately sized, trained, and equipped to meet the department’s needs; and fully integrate operational contract support throughout the department through education and predeployment training. DOD has generally agreed with the recommendations we have previously made and has actions under way to implement them. I would like to touch on a few of the actions already taken by DOD. On a broad level, for example, improved DOD guidance, DOD’s initiation and use of independent management reviews for high-dollar services acquisitions, and other steps to promote the use of sound business arrangements have begun to address several weaknesses, such as the department’s management and use of time-and-materials contracts and undefinitized contract actions. Further, DOD has identified steps to promote more effective competition in its acquisitions, such as requiring contracting officers to take additional actions when DOD receives only one bid in response to a solicitation and revising its training curriculum to help program and acquisition personnel develop and better articulate the department’s requirements. Similarly, efforts are under way to reduce the department’s reliance on contractors. In April 2009, the Secretary of Defense announced his intent to reduce the department’s reliance on contractors by hiring new personnel and by converting, or in-sourcing, functions currently performed by contractors to DOD civilian personnel. 
To help provide better insights into, among other things, the number of contractors providing services to the department and the functions they perform and to help make informed workforce decisions, Congress enacted legislation in 2008 requiring DOD to annually compile and review an inventory of activities performed pursuant to contracts for services. In January 2011, we reported that while DOD had taken actions to reduce prior inconsistencies resulting from DOD components using different approaches to compile the inventory, it still faced data and estimating limitations that raised questions about the accuracy and usefulness of the data. Given this early state of implementation, the inventory and associated review processes are being used to various degrees by the military departments to help inform workforce decisions, with the Army generally using the inventories to a greater degree than the other military departments. Later this year we will review DOD’s strategic human capital plans for both its civilian and acquisition workforces, the status of efforts to in-source functions previously performed by contractor personnel, and DOD’s upcoming inventory of services. Furthermore, DOD has taken several steps intended to improve planning for the use of contractors in contingencies and to improve contract administration and oversight. For example, in the area of planning for the use of contractors, in October 2008 the department issued Joint Publication 4-10, Operational Contract Support, which establishes doctrine and provides standardized guidance for and information on planning, conducting, and assessing operational contract support integration, contractor management functions, and contracting command and control organizational options in support of joint operations. 
DOD also provided additional resources for deployed contracting officers and their representatives through the issuance of the Joint Contingency Contracting Handbook in 2007 and the Deployed Contracting Officer’s Representative Handbook in 2008. In 2009, the Army issued direction to identify the need for contracting officer’s representatives, their roles and responsibilities, and their training when coordinating operational unit replacements. Our work found that beyond issuing new policies and procedures, DOD needs to fundamentally change the way it approaches operational contract support. In June 2010, we called for a cultural change in DOD that emphasizes an awareness of operational contract support throughout all aspects of the department to help it address the challenges it faces in ongoing and future operations. This view is now apparently shared by the department. In a January 2011 memorandum, the Secretary of Defense expressed concern about the risks introduced by DOD’s current level of dependency on contractors, future total force mix, and the need to better plan for operational contract support in the future. Toward that end, he directed the department to undertake a series of actions related to force mix, contract support integration, planning, and resourcing. According to the Secretary, his intent was twofold: to initiate action now and to subsequently codify the memorandum’s initiatives in policy and through doctrine, organization, training, materiel, leadership, education, personnel, and facilities changes and improvements. He concluded that the time was at hand, while the lessons learned from recent operations were fresh, to institutionalize the changes necessary to influence a cultural shift in how DOD views, accounts for, and plans for contractors and personnel support in contingency environments. 
The Secretary’s recognition and directions are significant steps, yet cultural change will require sustained commitment from senior leadership for several years to come. While my statement has focused on the challenges confronting DOD, our work involving State and USAID has found similar issues, particularly related to not planning for and not having insight into the roles performed by contractors and workforce challenges. The need for visibility into contracts and contractor personnel to inform decisions and oversee contractors is critical, regardless of the agency, as each relies extensively on contractors to support and carry out its missions in Iraq and Afghanistan. Our work has identified gaps in USAID and State’s workforce planning efforts related to the role and extent of reliance on contractors. We noted, for example, in our 2004 and 2005 reviews of Afghanistan reconstruction efforts that USAID did not incorporate information on the contractor resources required to implement the strategy, hindering its efforts to make informed resource decisions. More generally, in June 2010, we reported that USAID’s 5-year workforce plan for fiscal years 2009 through 2013 had a number of deficiencies, such as lacking supporting workforce analyses that covered the agency’s entire workforce, including contractors, and not containing a full assessment of the agency’s workforce needs, including identifying existing workforce gaps and staffing levels required to meet program needs and goals. Similarly, in April 2010, we noted that State’s departmentwide workforce plan generally does not address the extent to which contractors should be used to perform specific functions, such as contract and grant administration. As part of State’s fiscal year 2011 budget process, State asked its bureaus to focus on transitioning some activities from contractors to government employees. 
State officials told us, however, that departmentwide workforce planning efforts generally have not addressed the extent to which the department should use contractors because those decisions are left up to individual bureaus. State noted that in response to Office of Management and Budget guidance, a pilot study was underway regarding the appropriate balance of contractor and government positions, to include a determination as to whether or not the contracted functions are inherently governmental, closely associated with inherently governmental, or mission critical. In the absence of strategic planning, we found that it was often individual contracting or program offices within State and USAID that made case-by-case decisions on the use of contractors to support contract or grant administration functions. For example, USAID relied on a contractor to award and administer grants in Iraq to support community-based conflict mitigation and reconciliation projects, while State relied on a contractor to identify and report on contractor performance problems and assess contractor compliance with standard operating procedures for its aviation program in Iraq. State and USAID officials generally cited an insufficient number of government staff, the lack of in-house expertise, or frequent rotations among government personnel as key factors contributing to their decisions to use contractors. Our work over the past three years to provide visibility into the number of contractor personnel and contracts associated with the U.S. efforts in Iraq and Afghanistan found that State and USAID continue to lack good information on the number of contractor personnel working under their contracts. State and USAID had agreed to use the SPOT database to track statutorily required information. However, the system still does not reliably track the agencies’ information on contracts, assistance instruments, and associated personnel in Iraq or Afghanistan. 
As a result, the agencies relied on other data sources, which had their own limitations, to respond to our requests for information. We plan to report on the agencies’ efforts to track and use data on contracts, assistance instruments, and associated personnel in Iraq or Afghanistan later this year. The agencies have generally agreed with the recommendations we have made to address these challenges. To their credit, senior agency leaders acknowledged that they came to rely on contractors and other nongovernmental organizations to carry out significant portions of State and USAID’s missions. For example, the Quadrennial Diplomacy and Development Review (QDDR), released in December 2010, reported that much of what used to be the exclusive work of government has been turned over to private actors, both for profit and not for profit. As responsibilities mounted and staffing levels stagnated, State and USAID increasingly came to rely on outsourcing, with contracts and grants to private entities often representing the default option to meet the agencies’ growing needs. Further, the QDDR recognized the need for the agencies to rebalance the workforce by determining what functions must be conducted by government employees and what functions can be carried out by nongovernment entities working on behalf of and under the direction of the government. As part of this effort, the QDDR called for State and USAID to ensure that work that is critical to carrying out their core missions is performed by an adequate number of government employees. The review also recommended that for contractor-performed functions, the agencies develop well-structured contracts with effective contract administration and hold contractors accountable for performance and results. Along these lines, the Administrator of USAID recently announced a series of actions intended to improve the way USAID does business, including revising its procurement approach. 
The acknowledgment of increased contractor reliance and the intention to examine contractors’ roles are important, as are developing well-structured contracts and effectively administering them. Left unaddressed, these challenges may have potentially serious consequences for achieving the U.S. government’s policy objectives in Iraq and Afghanistan. For example, in March 2011, the Secretary of State testified that the department is not in an “optimal situation,” with contractors expected to comprise 84 percent of the U.S. government’s workforce in Iraq. We recently initiated a review of State’s capacity to plan for, award, administer, and oversee contracts with performance in conflict environments, such as Iraq and Afghanistan. As part of this review, we will assess the department’s workforce, both in terms of the number of personnel and their expertise, to carry out acquisition functions, including contractor oversight. We will also assess the status of the department’s efforts to enhance its workforce to perform these functions. The issues I discussed today—contract management, the use of contractors in contingency environments, and workforce challenges—are not new and will not be resolved overnight, but they need not be enduring or intractable elements of the acquisition environment. The challenges encountered in Iraq and Afghanistan are the result of numerous factors, including poor strategic and acquisition planning, inadequate contract administration and oversight, and an insufficient number of trained acquisition and contract oversight personnel. These challenges manifest in various ways, including higher costs, schedule delays, and unmet goals, but they also increase the potential for fraud, waste, abuse, and mismanagement in contingency environments such as Iraq and Afghanistan. While our work has provided examples that illustrate some effects of such shortcomings, in some cases estimating their financial effect is not feasible or practicable. 
The inability to quantify the financial impact should not, however, detract from efforts to achieve greater rigor and accountability in the agencies’ strategic and acquisition planning, internal controls, and oversight efforts. Stewardship over contingency resources should not be seen as conflicting with mission execution or the safety and security of those so engaged. Toward that end, the agencies have recognized that the status quo is not acceptable and that proactive, strategic, and deliberate analysis and sustained commitment and leadership are needed to produce meaningful change and make the risks more manageable. DOD has acknowledged the need to institutionalize operational contract support and set forth a commitment to encourage cultural change in the department. State and USAID must address similar challenges, including the use and role of contractors in contingency environments. The recent QDDR indicates that the agencies have recognized the need to do so. These efforts are all steps in the right direction, but agreeing that change is needed at the strategic policy level must be reflected in the decisions made by personnel on a day-to-day basis. Chairman Thibault, Chairman Shays, this concludes my prepared statement. I would be happy to respond to any questions you or the other commissioners may have. For further information about this statement, please contact me at (202) 512-4841 or francisp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Johana R. Ayers, Vince Balloon, Jessica Bull, Carole Coffey, Timothy DiNapoli, Justin Jaynes, Sylvia Schatz, Sally Williamson, and Gwyneth Woolwine. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) obligated about $367 billion in fiscal year 2010 to acquire goods and services to meet its mission and support its operations, including those in Iraq and Afghanistan. GAO's work, as well as that of others, has documented shortcomings in DOD's strategic and acquisition planning, contract administration and oversight, and acquisition workforce. These are challenges that need to be addressed by DOD and by the Department of State and the U.S. Agency for International Development (USAID) as they carry out their missions in Iraq and Afghanistan and prepare for future contingencies. Today's statement discusses (1) contract management challenges faced by DOD, including those that take on heightened significance in a contingency environment; (2) actions DOD has taken and those needed to address these challenges; and (3) similar challenges State and USAID face. The statement is drawn from GAO's body of work on DOD contingency contracting, contract management, and workforce, as well as prior reports on State and USAID's contracting and workforce issues. DOD faces a number of longstanding and systemic challenges that hinder its ability to achieve more successful acquisition outcomes--obtaining the right goods and services, at the right time, at the right cost. These challenges include addressing the issues posed by DOD's reliance on contractors, ensuring that DOD personnel use sound contracting approaches, and maintaining a workforce with the skills and capabilities needed to properly manage acquisitions and oversee contractors. The issues encountered with contracting in Iraq and Afghanistan are emblematic of these systemic challenges, though their significance and impact are heightened in a contingency environment. GAO's concerns regarding DOD contracting predate the operations in Iraq and Afghanistan. 
GAO identified DOD contract management as a high-risk area in 1992 and raised concerns in 1997 about DOD's management and use of contractors to support deployed forces in Bosnia. In the years since then, GAO has continued to identify a need for DOD to better manage and oversee its acquisition of services. DOD has recognized the need to address the systemic challenges it faces, including those related to operational contract support. Over the past several years, DOD has announced new policies, guidance, and training initiatives, but not all of these actions have been implemented and their expected benefits have not yet been fully realized. While DOD's actions are steps in the right direction, DOD needs to (1) strategically manage services acquisition, including defining desired outcomes; (2) determine the appropriate mix, roles, and responsibilities of contractor, federal civilian, and military personnel; (3) assess the effectiveness of efforts to address prior weaknesses with specific contracting arrangements and incentives; (4) ensure that its acquisition workforce is adequately sized, trained, and equipped; and (5) fully integrate operational contract support throughout the department through education and predeployment training. In that regard, in June 2010 GAO called for a cultural change in DOD that emphasizes an awareness of operational contract support throughout all aspects of the department. In January 2011, the Secretary of Defense expressed concerns about DOD's current level of dependency on contractors and directed the department to take a number of actions. The Secretary's recognition and directions are significant steps, yet instilling cultural change will require sustained commitment and leadership. State and USAID face contracting challenges similar to DOD's, particularly with regard to planning for and having insight into the roles performed by contractors. 
In April 2010, GAO reported that State's workforce plan did not address the extent to which contractors should be used to perform specific functions. Similarly, GAO reported that USAID's workforce plan did not contain analyses covering the agency's entire workforce, including contractors. The recently issued Quadrennial Diplomacy and Development Review recognized the need for State and USAID to rebalance their workforces and directed the agencies to ensure that they have an adequate number of government employees to carry out their core missions and to improve contract administration and oversight. GAO has made multiple recommendations to the agencies to address contracting and workforce challenges. The agencies have generally agreed with the recommendations and have efforts under way to implement them.
Newborn screening programs in the United States began in the early 1960s with the development of a screening test for PKU and a system for collecting and transporting blood specimens on filter paper. All newborn screening begins with a health care provider collecting a blood specimen during a newborn’s first few days of life. The baby’s heel is pricked to obtain a few drops of blood, which are placed on a specimen collection card and sent to a laboratory for analysis. State departments of health may use their own laboratory to test samples from the dried blood spots or may have a contract with a private laboratory, a laboratory at a university medical school, or another state’s public laboratory. Laboratories may choose among a variety of testing methods to maximize the efficiency and effectiveness of their testing. A major technical advance in newborn screening is use of the tandem mass spectrometer, an analytical instrument that can precisely measure small amounts of material and enable detection of multiple disorders from a single analysis of a blood sample. Tandem mass spectrometry (MS/MS) has greatly increased the number of disorders that can be detected, but it cannot completely replace other analysis methods because it cannot screen for all disorders included in state newborn screening programs. After initial testing, state newborn screening program staff notify health care providers of abnormal results because it may be necessary to verify the accuracy of the initial screening result by testing a sample from a second specimen or to ensure that the infant receives more extensive diagnostic testing to confirm the presence of a disorder. The infant may also need immediate treatment. Laboratories and state maternal and child health programs generally carry out the notification process. Primary care and specialty physicians are involved in various stages of the newborn screening process. 
They generally are responsible for notifying the family of abnormal screening results and may confirm initial results through additional testing. If necessary, they identify appropriate management and treatment options for the child. State maternal and child health program staff may follow up to ensure that these activities occur. Several HHS agencies carry out activities related to newborn screening, including collecting and sharing information about state newborn screening programs, promoting quality assurance, and funding screening services. HRSA’s Maternal and Child Health Bureau has primary responsibility for promoting and improving the health of infants and mothers. HRSA offers grants to states, including the Maternal and Child Health Services Block Grant, that state newborn screening programs may use to support their newborn screening services. HRSA also funded the development of the Council of Regional Networks for Genetic Services (CORN) in 1985 to provide a forum for information exchange among groups concerned with public health aspects of genetic services. The newborn screening committee of CORN identified several areas of importance to programs, including the process of selecting disorders for screening, communication, quality assurance, and funding. It developed guidelines in these areas to increase consistency among state newborn screening programs and also began collecting data on state programs. In 1999, CORN was disbanded, and HRSA established the National Newborn Screening and Genetics Resource Center—the Resource Center. The Resource Center is supported by a cooperative agreement between the Genetic Services Branch of HRSA’s Maternal and Child Health Bureau and the University of Texas Health Science Center at San Antonio Department of Pediatrics. The Resource Center develops annual reports on state newborn screening activities and provides technical assistance to state newborn screening programs. 
It also provides information and educational resources to health professionals, consumers, and the public health community. CDC’s Newborn Screening Branch, in partnership with the Association of Public Health Laboratories (APHL), operates NSQAP. NSQAP is a voluntary, nonregulatory program that is designed to help state health departments and their laboratories maintain and enhance the quality of their newborn screening test results. In addition, CDC’s National Center on Birth Defects and Developmental Disabilities funds research related to newborn screening. The Centers for Medicare & Medicaid Services’ (CMS) involvement in newborn screening relates to its Medicaid and CLIA programs. CMS administers Medicaid, a jointly funded, federal-state health insurance program for certain low-income individuals, which covers newborn screening for eligible infants. Nationwide, Medicaid finances services for one in three births each year. Through the CLIA program, CMS also regulates laboratory testing performed on specimens obtained from humans, including the dried blood spots used for newborn screening. CLIA’s purpose is to ensure the accuracy, reliability, and timeliness of laboratory test results. CLIA requires that laboratories comply with quality requirements in five major areas: personnel qualifications and responsibilities, quality control, patient test management, quality assurance, and proficiency testing. Laboratories that fail to meet CLIA’s quality requirements are subject to sanctions, including denial of Medicaid payments. Through the CLIA program, laboratories that test dried blood spots in connection with newborn screening must have a process for verifying the accuracy of their tests at least two times each year. State newborn screening laboratories can meet this requirement through participation in the proficiency testing program offered by NSQAP. 
The National Institutes of Health’s (NIH) National Institute of Child Health and Human Development has sponsored research on disorders identified through newborn screening, including PKU, congenital hypothyroidism, and galactosemia. Research has addressed issues such as the effectiveness of screening and treatments and the application of new technologies for identifying additional disorders. The Children’s Health Act of 2000 authorized HHS to award grants to improve or expand the ability of states and localities to provide screening, counseling, or health care services for newborns and children who have, or are at risk for, heritable disorders and to evaluate the effectiveness of these services. As of February 2003, funds had not been appropriated for these grants. The act also authorized the establishment of a committee to advise the Secretary of HHS on reducing the mortality and morbidity of newborns born with disorders. The Secretary of HHS signed the charter for this committee in February 2003. Under the Health Insurance Portability and Accountability Act of 1996, HHS developed regulations to protect the privacy of health information, which, as defined in the regulations, would include the results of testing of newborns. The regulations give individuals the right, in most cases, to inspect and obtain copies of health information about themselves. In addition, the regulations generally restrict health plans and certain health care providers from disclosing such information to others without the patient’s consent, except for purposes of treatment, payment, or health care operations. While the federal regulations preempt state requirements that conflict with them, states are free to enact and enforce more stringent privacy protections. Most entities and individuals that are covered by the regulations must be in compliance by April 14, 2003. 
Although state newborn screening programs vary in the number of disorders for which they screen, states generally follow similar practices and criteria in selecting disorders for their programs. States also conduct most other aspects of their programs in similar ways. Almost all state programs provide information for parents and conduct provider education, but fewer than one-fourth of the states provide information for parents on their option to test for additional disorders not included in the state’s program. All state programs notify health care providers—and some also notify parents—about abnormal screening results, and all states reported following up on abnormal results. Most state newborn screening programs screen for 8 disorders or fewer. The number of disorders included in state programs ranges from 4 to 36. (See app. II for the number of disorders screened for by each state.) Programs are implemented through state statutes and/or regulations, which often require screening for certain disorders. According to the Resource Center, all states require screening for PKU and congenital hypothyroidism, and 50 states require screening for galactosemia. Table 1 lists the disorders most commonly included in state newborn screening programs. (See app. III for information on these disorders.) Some states provide screening for certain disorders to selected populations, through pilot programs, or by request. For example, in addition to the 44 states that require screening for sickle cell diseases for all newborns, 6 states provide screening for sickle cell diseases to selected populations or through pilot programs. Some states are taking steps that could expand the number of disorders included in their programs. The criteria that state newborn screening programs reported they consider in selecting disorders to include in their programs are generally consistent across states. 
For example, they generally include how often the disorder occurs in the population, whether an effective screening test exists to identify the disorder, and whether the disorder is treatable. These criteria are also consistent with recommendations of the American Academy of Pediatrics (AAP) newborn screening task force. Neither the criteria states use nor AAP’s recommendations include benchmarks, such as the lowest incidence or prevalence rate that would be acceptable for population-based newborn screening or measurements of treatment effectiveness or screening reliability. Some states reported that they are considering revising their criteria because MS/MS can identify disorders for which treatment is not currently available. Because MS/MS technology can be used for screening multiple disorders in a single analysis, states may choose to include such disorders in their testing along with disorders that can be treated. Twenty-one states use MS/MS in their screening programs (see app. II); the number of disorders for which screening is conducted using MS/MS ranges from 1 to 28. (See app. IV for a list of selected disorders for which screening is conducted using MS/MS.) Many states consider cost when selecting disorders to include in their newborn screening program. In addition, several states told us that they would need additional funding to expand the number of disorders in their program. The costs associated with adding disorders include costs of additional testing, educating parents and providers, and following up on abnormal results. Additional costs may also be associated with acquiring and implementing new technology, such as purchasing MS/MS technology and training staff in its use. With the exception of federal recommendations that newborns be screened for three specific disorders, there are no federal guidelines on the set of disorders that should be included in state screening programs. The U.S. 
Preventive Services Task Force, which is supported by HHS’s Agency for Healthcare Research and Quality, has recommended screening for sickle cell diseases, PKU, and congenital hypothyroidism. In addition, NIH issued a consensus statement recommending that all newborns be screened for sickle cell diseases, as well as a consensus statement concluding that genetic testing for PKU has been very successful in the prevention of severe mental retardation. AAP’s newborn screening task force reported that infants born anywhere in the U.S. should have access to screening tests and procedures that meet accepted national standards and guidelines. The task force recommended that federal and state public health agencies, in partnership with health professionals and consumers, develop and disseminate model state regulations to guide implementation of state newborn screening systems, including the development of criteria for selecting disorders. In 2001, HRSA awarded a contract to the American College of Medical Genetics to convene an expert group to assist it in developing a recommended set of disorders for which all states should screen and criteria that states should consider when adding to or revising the disorders in their newborn screening programs. The expert group is expected to make recommendations to HRSA in spring 2004. Some state officials told us they have concerns about the development of a uniform set of disorders because states differ in incidence rates for disorders and capacity for providing follow-up and treatment. Most states reported that the state health department or board of health has authority to select the disorders included in newborn screening programs. Six states reported that they could not modify the disorders included in their newborn screening programs without legislation. 
Forty-five states reported that they have an advisory committee that is involved in selecting disorders; such a committee generally makes recommendations to the state health department or board of health. Most states reported that their advisory committee is not required by state statute or regulation. We found that most newborn screening advisory committees are multidisciplinary and include physicians, other health workers, and individuals with disorders or parents of children with disorders. (See table 2.) Almost all states reported they offer information for parents and education for providers on their newborn screening program. Eleven states have newborn screening statutes requiring that parents of newborns be informed of the program at the time of screening. In most states, information for parents includes how the blood specimen is obtained, the disorders included in the state program, and how parents will be notified of testing results. Seven states reported they include information for parents on their option to obtain testing for additional disorders that are not included in the state’s program, but that may be available to them through other laboratories. Provider education offered by states includes information on the collection and submission of specimens, the management of the disorders, and medical specialists available to treat the disorders. While state newborn screening programs produce or compile materials for parents, they generally do not provide them directly to parents and are unable to say when, or if, parents actually receive them. Rather, the state provides materials to other individuals, including hospital staff, midwives, pediatricians, primary care providers, and local health department staff, who are expected to share them with parents. Over half the states reported that their materials for parents are available in English and one or more other languages. 
The parties states notify about newborn screening results vary, depending on whether the result is abnormal or normal. (See table 3.) All states reported that for abnormal results, they notify the physician of record or the birth or submitting hospital. The physician or hospital, in turn, is generally responsible for notifying parents. Most states reported they notify physicians and hospitals by telephone; many states reported also notifying them by letter, fax, or e-mail. While the AAP newborn screening task force recommended that programs notify parents or guardians, fewer than half the states routinely notify parents directly of abnormal results, and no state routinely notifies parents directly of normal results. States that notify parents generally said that notification was by letter. States also reported that they take other actions in response to abnormal screening results. About three-fourths of states reported testing samples from second specimens when the initial specimen is abnormal or unsatisfactory. All states reported conducting follow-up activities. Over 90 percent of states said that their follow-up activities include obtaining additional laboratory information to confirm the presence of a disorder, which could include obtaining the results of diagnostic tests performed by other laboratories. Almost all states reported that they refer infants with disorders for treatment and most follow up to confirm that treatment has begun. About two-thirds of the states reported that they conduct or fund periodic follow-up of newborns diagnosed with a disorder, which could include ensuring that they continue to receive treatment and monitoring their health status. According to Resource Center data on state newborn screening programs, the length of the follow-up period varies among disorders and across states. 
States reported that they spent over $120 million on newborn screening in state fiscal year 2001, with individual states’ expenditures ranging from $87,000 to about $27 million. Seventy-four percent of these expenditures supported laboratory activities. The primary funding source for most states’ newborn screening expenditures was newborn screening fees. The fees are generally paid by health care providers submitting specimens; they in turn may receive payments from Medicaid and other third-party payers, including private insurers. Other funding sources that states identified included the Maternal and Child Health Services Block Grant, direct payments from Medicaid, and other state and federal funds. States reported they spent over $120 million on laboratory and program administration/follow-up activities in state fiscal year 2001. Individual states’ expenditures ranged from $87,000 to about $27 million. Based on information provided by 46 states, we found that, on average, states spent $29.44 for each infant screened in state fiscal year 2001. Two-thirds of these states spent from $20 to $40 per infant. (See app. V for expenditures per infant screened in each state.) Laboratory expenditures accounted for 74 percent of states’ expenditures; program administration/follow-up expenditures accounted for 26 percent. States reported that laboratory expenditures generally supported activities such as processing and analyzing specimens, notifying health care providers and parents of screening test results, and evaluating the quality of laboratory activities. Program administration/follow-up expenditures generally supported activities such as notifying appropriate parties of test results, confirming that infants received additional laboratory testing, confirming that infants diagnosed with disorders received treatment, and providing education to parents and health care providers. 
In addition, almost half the states reported that laboratory expenditures supported education of parents and health care providers. We asked states to provide us expenditure information for laboratory and program administration/follow-up; we instructed states to include only those follow-up activities that are conducted through confirmation of diagnosis and referral for treatment. We did not ask for expenditure information for disease management and treatment services. Expenditure calculations were based on responses from 50 states; South Dakota reported that expenditure information was not available for state fiscal year 2001. Six states reported that their expenditures included significant, nonrecurring expenses in state fiscal year 2001, such as for the purchase of MS/MS equipment or computer software. These expenditures ranged from $22,645 to $415,835, totaling about $1 million. In addition, one state told us that the program administration/follow-up expenditures it reported included approximately $50,000 to $75,000 for disease management and treatment services. Fees are the largest funding source for most states’ newborn screening programs. Forty-three states reported they charge a newborn screening fee to support all or part of program expenditures. The fees are generally paid by health care providers submitting specimens; they in turn may receive payments from Medicaid and other third-party payers, including private insurers. Some states collect the fees through the sale of specimen collection kits to hospitals and birthing centers. Other states may bill hospitals, patients, physicians, Medicaid, or other third-party payers for the fee. Nationwide, newborn screening fees funded 64 percent of newborn screening program expenditures in state fiscal year 2001. (See table 4.) 
Thirteen state programs reported that fees were their sole source of funding in fiscal year 2001, and 19 additional states reported that fees funded at least 60 percent of their newborn screening expenditures. The average fee in the states that charged a fee was about $31, with fees ranging from $10 to $60. Seven state newborn screening programs identified Medicaid as a direct funding source in state fiscal year 2001. These screening programs bill the state Medicaid agency directly for laboratory services or receive a transfer of funds from the state Medicaid agency for screening services provided to Medicaid-enrolled infants. The percentage of expenditures the states reported as directly funded by Medicaid does not include Medicaid payments to hospitals for services provided to newborns. Other funding sources that states identified for newborn screening program expenditures include state funds and the Maternal and Child Health Services Block Grant. About half the states reported that state funds supported laboratory or program administration/follow-up expenditures. In addition, about half the states reported that they rely on the Maternal and Child Health Services Block Grant as a funding source for laboratory or program administration/follow-up expenditures. Seven states identified other funding sources, such as the Preventive Health and Health Services Block Grant. CDC and HRSA offer services to assist states in evaluating the quality of their newborn screening programs. For example, CDC’s NSQAP provides proficiency testing for almost all disorders included in state newborn screening programs, enabling states to meet the CLIA regulatory requirement that laboratories have a process for verifying the accuracy of tests they perform. Through the Resource Center, HRSA supports technical reviews of state newborn screening programs. 
These voluntary programwide reviews are conducted at the request of state health officials and focus primarily on areas of concern identified by state officials. In addition to these federally supported efforts, most state newborn screening programs reported that they evaluate the quality of the laboratory testing and/or program administration/follow-up components of their newborn screening programs. CDC’s NSQAP is the only program in the country that conducts proficiency testing on the dried blood spots used in newborn screening. While NSQAP is voluntary, as of January 2003, all laboratories that perform testing for state newborn screening programs participated in the proficiency testing program. Participation in NSQAP allows laboratories to meet the CLIA regulatory requirement that they have a process for verifying the accuracy of tests they perform. NSQAP offers proficiency testing for over 30 disorders, including the disorders most commonly included in state newborn screening programs. When a laboratory misclassifies a specimen during proficiency testing, NSQAP notifies the laboratory of the problem. When an abnormal specimen is classified as normal, NSQAP officials work with the laboratory to identify and solve the problem that led to the misclassification. NSQAP provides information on the specimen that was misclassified, gives supplemental specimens to the laboratory to test, and may visit the laboratory, if necessary, to provide additional assistance. In addition to proficiency testing, NSQAP provides other types of quality assurance assistance, including training, guidelines, and consultation to laboratories that participate in the program. For example, in September 2001, NSQAP cosponsored a meeting of laboratory and medical scientists to discuss issues related to the use of MS/MS in newborn screening. 
In addition, NSQAP provides state newborn screening programs with quality control specimens—test specimens designed to be run over a period of time to ensure the stability of the testing methods—and works with the manufacturers of the filter papers used in the collection of dried blood spots to ensure their quality. NSQAP also publishes quarterly and annual reports on the aggregate performance of participating laboratories. These reports include information on the results of the proficiency testing program. The annual reports also include information on NSQAP’s quality control effort and describe other activities undertaken during the year. HRSA’s Resource Center offers technical reviews to states at their request to help them refine and improve their newborn screening activities. The team that visits the state program typically includes a representative of the Resource Center, a representative from CDC’s NSQAP to focus on laboratory quality assurance, a health care provider to focus on medical and genetic issues, a follow-up coordinator from another state program to focus on the follow-up component of the program, and a representative from HRSA to focus on financial and administrative issues. The Resource Center’s reviews concentrate primarily on areas state officials ask the team to review. For example, states have asked the review team to look at whether or how the set of disorders included in their programs should be expanded, how to incorporate MS/MS into a program, and whether current program staffing levels are appropriate. The review team also assesses the degree to which the state program follows the 1992 CORN guidelines in areas such as public, professional, and patient education, laboratory proficiency testing, and consumer representation on advisory committees. After reviewing a state newborn screening program, the team provides the state with a final report that includes its findings and recommendations to improve the program. 
Recent findings have included newborn screening advisory committees that were not sufficiently multidisciplinary and programs that did not have a systemwide quality assurance program. Review teams have also identified the need for additional program administration/follow-up staff and for provider education programs to include information on collecting and submitting specimens and reporting screening results. The state newborn screening program is not obligated to accept or implement the team’s recommendations, and HRSA and the Resource Center have no authority to require states to make changes to their program. However, according to the Resource Center, most participating states have made some modifications to their program in response to recommendations. State officials told us, for example, that they have expanded or diversified the membership of their advisory committees, revised practitioner manuals, developed a programwide quality assurance system, and hired additional program administration/follow-up staff. In addition, state newborn screening program staff told us that the recommendations of the review teams helped inform program staff, state legislators, and health department staff as they assessed program needs. HRSA has funded 26 technical reviews in 22 states since the program began in 1987; 9 of these reviews have occurred since January 2000. Every state that has requested a review has been able to receive one. Most states reported evaluating the quality of the laboratory testing and/or program administration/follow-up components of their newborn screening programs. For example, laboratories monitor performance by defining criteria for achieving quality results and designing a monitoring program to evaluate whether they are meeting these criteria. One state told us that it has criteria related to calibration of equipment, personnel training and education, and recordkeeping and documentation. 
Other measures that programs may monitor include percentage of births screened, number of unusable specimens, demographic information missing from specimen collection cards, and number of children lost to follow-up. Several state officials told us that they use some of these measures to monitor quality of specimens received from hospitals and to identify hospitals that may need education regarding the newborn screening process. In addition, states voluntarily report many of these measures to the Resource Center for inclusion in its annual National Newborn Screening Report, enabling states to compare their program over time with other states’ programs. Moreover, all states report annually to HRSA on the percentage of newborns in the state who are screened for selected disorders, including PKU and congenital hypothyroidism, as part of the Maternal and Child Health Services Block Grant reporting requirements. About half the states reported to us that they have a mechanism for learning of abnormal cases that were misclassified as normal, information that can alert a state to problems with its program. According to experts in the field of newborn screening, these cases occur infrequently but can have serious results when children develop a life-threatening condition that might have been prevented if treated early. Most of these states learn about these cases through their communications with the specialists in their state who manage and treat the disorders identified by newborn screening. If a child is referred to one of these specialists from a source other than the newborn screening program, the specialist will usually contact program officials, who then determine whether the screening program misclassified the child’s screening result as normal. Four states reported that they can learn of abnormal cases misclassified as normal through reports made to state birth defects or disease registries. 
For example, one state reported that staff at the state birth defects registry notify the newborn screening program of children reported to them, and the newborn screening program then checks whether or not these children were identified through the screening process. State newborn screening statutes usually do not require that parental consent be obtained before screening occurs. However, most state newborn screening statutes or regulations allow exemptions from screening for religious reasons, and several states allow exemptions for any reason. Provisions regarding the confidentiality of screening results are included in state newborn screening statutes and regulations and state genetic privacy laws, but are often subject to exceptions, which vary across states. The most common exceptions allow disclosure of information for research purposes, for use in law enforcement, and for establishing paternity. While few newborn screening statutes provide penalties for violation of confidentiality provisions, many states’ genetic privacy statutes provide criminal sanctions and penalties for violating their provisions, including those related to confidentiality. All states require newborn screening, and state newborn screening statutes usually do not require consent for screening. Only Wyoming’s newborn screening statute expressly requires that persons responsible for collecting the blood specimen obtain consent prior to screening. In addition, of the three states with only regulations requiring newborn screening, Maryland’s regulations on newborn screening require consent for screening. While all states require newborn screening, most newborn screening statutes or regulations provide exemptions in certain situations. In 33 states, newborn screening statutes or regulations provide an exemption from screening if it is contrary to parents’ religious beliefs or practices. Thirteen additional states provide an exemption for any reason. (See table 5.) 
In over half the states, newborn screening statutes and regulations have provisions that indicate that information collected from newborn screening is confidential. However, they permit information to be released without authorization from the child’s legal representative in some circumstances. The most common provision for release of screening information is for use in statistical analysis or research, generally with a requirement that the identity of the subject is not revealed and/or that the researchers comply with applicable state and federal laws for the protection of humans in research activities. Some state screening statutes have additional provisions that allow screening information to be released. Wisconsin’s screening statute, for example, allows the information to be released for use by health care facilities staff and accreditation organizations for audit, evaluation, and accreditation activities; and for billing, collection, or payment of claims. A few states have more restrictive provisions. South Carolina’s screening statute, for example, limits disclosure of the information obtained from screening to the physician, the parents of the child, and the child when he or she reaches age 18. State statutes that govern the collection, use, or disclosure of genetic information may also apply to genetic information obtained from newborn screening. Twenty-five states have laws that prohibit disclosure of genetic information without the consent of the individual; in 23 of these states, the statutes have exceptions that permit disclosure without consent. (See table 6.) For example, 14 states’ genetic privacy laws permit disclosure of genetic information without consent for the purpose of research, provided that individuals’ identities are not revealed and/or the research complies with applicable state and federal laws for the protection of humans in research activities.
We found no limitation on the ability of laboratories or state agencies to inform health care providers attending newborns with abnormal screening results. On the contrary, many statutes and regulations require laboratories and state agencies to inform providers of abnormal screening results. As defined in federal regulations implementing the Health Insurance Portability and Accountability Act of 1996, the term health information would also include newborn screening information. Most state newborn screening statutes and genetic privacy laws do not include penalties for lack of compliance. According to the National Conference of State Legislatures, 17 states have laws that provide specific penalties for violating genetic privacy laws. In 6 of these states, violations of genetic privacy statutes are punishable by fine and/or imprisonment. In addition, the statutes authorize civil lawsuits to obtain damages and, in most instances, court costs and attorneys’ fees. In 10 of these states, the statutes provide for civil liability only. In 1 state, violation is punishable only as a crime. We provided a draft of this report to HHS for comment. Overall, HHS said that the report presents a thorough summary of state newborn screening programs’ current practices. (HHS’s comments are reprinted in app. VI.) HHS said that the report needed to reflect that newborn screening is a system that, in addition to testing, includes follow-up, diagnosis, disease management and treatment, evaluation, and education. However, the draft report did identify the various components of the newborn screening system. HHS said that there is a need to more comprehensively address components of the system beyond testing. For example, HHS commented that there is a need for a coordinated effort in states to train and educate health professionals and state newborn screening program directors in the use of newer technologies. 
In addition, it stated that there is a need to provide information to families and parents about the screening their state provides and the screening options available to them outside of their state’s program. HHS said that it anticipated that the report would, among other things, include recommendations to improve state newborn screening programs. As we noted in the draft report, HRSA has initiated a process to develop recommendations for state newborn screening programs. The scope of our review focused on providing the Congress with descriptive information about state programs. HHS supported the development of benchmarks to help states evaluate the quality of the various components of the newborn screening system. It added that one of the most effective ways the federal government can support state newborn screening programs is by strengthening the scientific basis for newborn screening through funding of systematic evaluation of outcomes and the quality of all components of the newborn screening system. In its comments, HHS provided information on its efforts related to newborn screening. For example, HHS described demonstration projects it funded to examine the use of new technology and initiatives to improve family and provider education. In addition, HHS indicated that all of its programs address the recommendations of the AAP newborn screening task force and encourage the integration of various newborn screening and genetics services into systems of care. HHS provided technical comments. We incorporated the technical comments and other information HHS provided on its programs where appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we will not distribute this report until 30 days after its issue date. 
We will then send copies of this report to the Secretary of Health and Human Services, the Administrators of the Health Resources and Services Administration and the Centers for Medicare & Medicaid Services, the Directors of the Centers for Disease Control and Prevention and the National Institutes of Health, appropriate congressional committees, and others who are interested. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7119. An additional contact and the names of other staff members who made contributions to this report are listed in appendix VII. To do our work, we surveyed the health officers in all the states during October and November 2002 about their newborn screening programs. We asked each state health officer to work with laboratory and program administration/follow-up staff in responding to the questions. The survey asked for information on the process for selecting disorders to include in newborn screening programs; laboratory and follow-up activities; parent and provider education efforts; expenditures and funding sources; efforts to evaluate the quality of laboratory testing and program administration/follow-up; and states’ retention and sharing of screening results. The survey focused only on screening for metabolic and genetic disorders. We did not ask for information on disease management and treatment services provided by state newborn screening programs, and the survey did not collect information on newborn screening for hearing and infectious diseases. We pretested the survey in person with laboratory and program administration/follow-up staff from the Virginia and Delaware newborn screening programs.
In addition, the survey instrument was reviewed by staff at the Department of Health and Human Services’ (HHS) Centers for Disease Control and Prevention (CDC), National Center for Environmental Health, Newborn Screening Branch, and the National Newborn Screening and Genetics Resource Center, a project funded by HHS’s Health Resources and Services Administration (HRSA). We refined the questionnaire in response to their comments. We received responses from all the states. After reviewing the completed questionnaires and checking the data for consistency, we contacted certain states to clarify responses and edited survey responses as appropriate. In addition, we followed up with four states to obtain more detailed information on their processes for selecting disorders, evaluations of parent and provider education, evaluations of the quality of laboratory testing and program administration/follow-up, and mechanisms for identifying abnormal cases misclassified as normal. To identify which genetic and metabolic disorders are included in states’ newborn screening programs, we reviewed the Resource Center’s U.S. National Screening Status Reports. These reports provide information on the disorders for which states require screening and the disorders for which screening is provided to selected populations, through pilot programs, or by request. To report on efforts by HHS and states to monitor and evaluate the quality of state newborn screening programs, we reviewed annual summary reports, proficiency testing results, and other documents from the Newborn Screening Quality Assurance Program (NSQAP), which CDC operates with the Association of Public Health Laboratories, and interviewed CDC staff on states’ participation. We also reviewed report findings from the seven technical reviews of state newborn screening programs that HRSA, CDC, and the Resource Center conducted from 1999 to 2001. 
We interviewed Resource Center staff about the content and findings of these reviews and interviewed officials in five states about actions taken in response to the review staff’s findings and recommendations. To determine how state laws address consent and privacy issues related to newborn screening, we analyzed state statutes that provide for newborn screening for genetic and metabolic disorders and state statutes that relate to privacy of genetic information generally. We also reviewed state newborn screening regulations as appropriate. The information on states that require consent for newborn screening is based on our analysis of state newborn screening and genetic privacy statutes and the newborn screening regulations in states that do not have newborn screening statutes. The information on exemptions from screening is based on our review of state newborn screening statutes and newborn screening regulations. Information on privacy is based on our analysis of confidentiality provisions in state newborn screening statutes and, for those states that do not have confidentiality provisions in their newborn screening statutes, on confidentiality provisions in newborn screening regulations. We also analyzed confidentiality provisions in state genetic privacy statutes. To identify the newborn screening statutes and regulations that were within the scope of our review, we relied on research provided by the National Conference of State Legislatures (NCSL) in fall 2002 and analyzed only those newborn screening statutes and regulations identified through that research. With regard to genetic privacy statutes, we analyzed only those statutes identified by NCSL in an April 2002 report identifying state genetic privacy laws. We contacted state officials as appropriate to obtain assistance in locating and interpreting statutory authorities. We also relied on NCSL’s determination of the number of states that provide penalties for the violation of those statutes. 
Newborn screening programs are governed by a variety of legal authorities. We did not research or analyze any case law interpreting state newborn screening statutes and regulations or genetic privacy statutes, and we did not research or analyze any written interpretive guidance issued by states. We also reviewed relevant literature and obtained information from individual experts, newborn screening laboratory and maternal and child health staff in several states, and representatives of organizations interested in newborn screening, including the American Academy of Pediatrics, American College of Medical Genetics, American College of Obstetricians and Gynecologists, American Medical Association, Association of Maternal and Child Health Programs, Association of Public Health Laboratories, Association of State and Territorial Health Officials, and the March of Dimes. We conducted our work from June 2002 through March 2003 in accordance with generally accepted government auditing standards. [Appendix table notes: number of disorders for which screening is conducted using tandem mass spectrometry (MS/MS); expenditure per infant screened was not calculated where a state did not report the number of infants screened; expenditure information was not available for state fiscal year 2001.] In addition to the person named above, key contributors to this report were Janina Austin, Emily Gamble Gardiner, Ann Tynan, Ariel Hill, Kevin Milne, Cindy Moon, and Susan Lawes.
Each year state newborn screening programs test 4 million newborns for disorders that require early detection and treatment to prevent serious illness or death. GAO was asked to provide the Congress with information on the variations among state newborn screening programs, including information on criteria considered in selecting disorders to include in state programs, education for parents and providers about newborn screening programs, and programs' expenditures and funding sources. To collect this information, GAO surveyed newborn screening programs for genetic and metabolic disorders in all 50 states and the District of Columbia. GAO was also asked to provide information on efforts by the Department of Health and Human Services (HHS) and states to evaluate the quality of newborn screening programs, state laws and regulations that address parental consent for newborn screening, and state laws and regulations that address confidentiality issues. While the number of genetic and metabolic disorders included in state newborn screening programs ranges from 4 to 36, most states screen for 8 or fewer disorders. In deciding which disorders to include, states generally consider similar criteria, such as whether the disorder is treatable. States also consider the cost of screening for additional disorders. HHS's Health Resources and Services Administration is funding an expert group to assist it in developing a recommended set of disorders for which all states should screen and criteria for selecting disorders. Most state newborn screening programs have similar practices for administering and funding their programs. Almost all states provide education on their newborn screening program for parents and providers, but fewer than one-fourth inform parents of their option to obtain tests for additional disorders not included in the state's program. 
State programs are primarily funded through fees collected from health care providers, who may receive payments from Medicaid and other third-party payers. Nationwide, fees funded 64 percent of states' 2001 fiscal year program expenditures of over $120 million. All newborn screening laboratories participate in a quality assurance program offered by HHS's Centers for Disease Control and Prevention, which assists programs in evaluating the quality of their laboratories. All states require newborn screening, and state statutes that govern screening usually do not require parental consent. However, 33 states' newborn screening statutes or regulations allow exemptions from screening for religious reasons, and 13 additional states' newborn screening statutes or regulations allow exemptions for any reason. Newborn screening statutes and regulations in over half the states contain confidentiality provisions, but these provisions are often subject to exceptions. HHS said that the report presents a thorough summary of state newborn screening programs' current practices.
From fiscal year 2007 through 2012, the total number of female servicemembers grew from 200,941 to 208,905, with female servicemembers comprising about 14 percent of the total active duty force. During this time, the largest number of active-duty female servicemembers resided in the Army. (See fig. 1.) In fiscal year 2012, more than three-quarters of the Army’s female servicemember population was age 35 and under, with the largest group being between 18 and 24 years old. Recommendations for female-specific preventative health screenings are based on age; some, such as cervical cancer screening, are applicable for female servicemembers from an early age, while others, such as mammograms, are currently not recommended until age 50, absent any personal history of health problems of this nature. (See fig. 2.) DOD operates its own large, complex health system—the Military Health System—that provides health care to approximately 9.7 million beneficiaries across a range of venues, from MTFs located on military installations to the battlefield. These beneficiaries include active-duty servicemembers and their dependents, eligible National Guard and Reserve servicemembers and their dependents, and retirees and their dependents or survivors. The Military Health System has a dual health care mission: supporting wartime and other deployments, known as the readiness mission, and providing peacetime care, known as the benefits mission. The readiness mission provides medical services and support to the armed forces during military operations and involves deploying medical personnel and equipment, as needed, around the world to support military forces. The benefits mission provides medical services and support to members of the armed forces, their family members, and others eligible for DOD health care. The care of the eligible beneficiary population is spread across the military departments—Army, Navy, and Air Force.
Each military department delivers care directly through its own MTFs, which are managed by their medical departments, including the Army’s Medical Command (MEDCOM); the Navy’s Bureau of Medicine and Surgery, which is also responsible for providing health care to members of the Marine Corps and their beneficiaries; and the Air Force Medical Service. Servicemembers obtain health care through the military services’ system of MTFs, which is supplemented by participating civilian health care providers, institutions, and pharmacies to facilitate access to health care services when necessary. Active-duty servicemembers receive most of their care from MTFs, where they receive priority access over other beneficiaries. Within the continental United States, the Army is organized into three medical regions—Northern, Southern, and Western—each headed by a subordinate regional medical command, which exercises authority over the MTFs in its region. Across the three regions, there are 27 domestic Army installations with a primary MTF, which report directly to the regional medical commands and are responsible for reporting information for other associated MTFs, which may include smaller MTFs, such as clinics, on the same installation, as well as MTFs on different Army installations or at installations operated by other military services. For example, at Fort Benning, there are multiple facilities located on the installation, including Martin Army Community Hospital—the primary MTF—as well as several clinics. In addition to reporting for all of those facilities, Martin Army Community Hospital also reports to the regional medical command for other Army facilities located off the installation, including an Army clinic at Eglin Air Force Base in Florida. All Army MTFs, both primary and associated, can be classified under one of three categories on the basis of their size: Army Health Centers/Clinics are generally the smallest facilities only offering outpatient primary care. 
Army Community Hospitals are larger than clinics and offer primary and secondary care, such as inpatient care and surgery under anesthesia. Army Medical Centers are generally the largest facilities offering primary and secondary care as well as other care, such as cancer treatments, neonatal care, and specialty diagnostics. Each of the military services is responsible for maintaining the medical readiness of its active-duty force. DOD’s IMR policy establishes six elements for the military services to assess in order to determine a servicemember’s medical readiness to deploy: (1) deployment-limiting conditions, (2) dental readiness, (3) immunization status, (4) individual medical equipment, (5) medical readiness laboratory tests, and (6) periodic health assessments (PHA). DOD’s policy establishes a baseline of standards for continuously assessing each of the IMR elements. In addition to this, each of the military services establishes its own policy that may include more specific criteria. Each military service is responsible for assessing and categorizing a servicemember’s IMR as follows: Fully medically ready: current in all categories. Partially medically ready: lacking one or more immunizations, readiness laboratory tests, or medical equipment. Not medically ready: existence of a chronic or prolonged deployment-limiting condition, including servicemembers who are hospitalized or convalescing from serious illness or injury, or individuals who require urgent dental care. Medical readiness indeterminate: inability to determine the servicemember’s current health status because of missing health information such as a lost medical record, an overdue PHA, or an overdue dental exam. All of the military services use different systems to collect information about IMR status. In addition, DOD requires that each of the services provide quarterly reports about the IMR status of their servicemembers.
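The four IMR category definitions described above amount to a simple decision procedure. The following is a minimal illustrative sketch of that logic; the function, field names, and the precedence among categories (indeterminate checked first, then deployment-limiting conditions) are hypothetical assumptions inferred from the category definitions, not part of any DOD system.

```python
# Hypothetical sketch of the four IMR categories described in DOD policy.
# All names and the ordering of checks are illustrative assumptions.

def categorize_imr(overdue_items, deployment_limiting_condition, records_missing):
    """Assign one of the four IMR categories.

    overdue_items: set of lapsed elements among
        {"immunizations", "lab_tests", "medical_equipment"}
    deployment_limiting_condition: True if a chronic or prolonged
        condition (or urgent dental care need) exists
    records_missing: True if status cannot be determined, e.g., a lost
        medical record, overdue PHA, or overdue dental exam
    """
    if records_missing:
        return "medical readiness indeterminate"
    if deployment_limiting_condition:
        return "not medically ready"
    if overdue_items:
        return "partially medically ready"
    return "fully medically ready"


# Example: a servicemember lacking only a current lab test.
status = categorize_imr({"lab_tests"}, False, False)  # "partially medically ready"
```

The sketch shows why the categories are mutually exclusive: each check short-circuits the ones below it, so a servicemember with both missing records and an overdue immunization would be reported as indeterminate rather than partially ready.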
DOD and the military services have a number of organizations that fund or conduct research, including research on health care issues that affect those who have served in a combat zone. The Defense Health Program within DOD’s Office of the Assistant Secretary of Defense for Health Affairs receives significant funding for this research through its annual appropriation. Through an interagency agreement, the Army Medical Research and Materiel Command manages the day-to-day execution of this funding through joint program committees. There are several joint program committees that focus on specific research areas, including clinical and rehabilitative medicine and military operational medicine. Officials from the other military services participate in these committees. Research organizations from the military services, such as the Naval Medical Research Center, the Office of Naval Research, and the Air Force Medical Support Agency, also manage funds from the Defense Health Program for research. In addition to the military services, other organizations within DOD also fund or conduct research, including the TriService Nursing Research Program, which funds and supports research on military nursing. DOD’s policy establishes six elements for assessing the IMR of a servicemember to deploy, most of which are gender-neutral. Four of the six elements—immunization status, medical readiness laboratory tests, individual medical equipment, and dental readiness—are gender-neutral; they apply equally to female and male servicemembers. In order to pass these elements of the IMR assessment, servicemembers must be current for each element, by having immunizations, including MMR (measles, mumps, and rubella); medical readiness laboratory tests, such as a human immunodeficiency virus test, with results current within the past 24 months; individual medical equipment, such as gas mask inserts for all personnel needing visual correction; and an annual dental exam.
The remaining elements of IMR—deployment-limiting conditions and PHAs—include some aspects that are specific to female servicemembers. The Army, Navy, Air Force, and Marine Corps have policies that define pregnancy as a deployment-limiting condition. In addition, they also have policies that establish a postpartum deferment period—generally 6 months after delivery—when a female servicemember is not required to deploy. The deferment period was established in order to provide for medical recovery from childbirth and to allow additional time to prepare family care plans and child care. However, each of the military services has a policy that allows the servicemember to voluntarily deploy before the period has expired. In addition, cancer that requires continuing treatment and specialty evaluations can also be a deployment-limiting condition. Although cancer treatment could affect both male and female servicemembers, there are some cancers that would be specific to female servicemembers, such as ovarian cancer, while other cancers are specific to male servicemembers, such as prostate cancer. The PHA includes a review of information about preventative screenings and counseling for each servicemember. Some of the preventative screenings that are reviewed as part of the PHA are female-specific, such as mammograms and pap smears. To satisfy this element of IMR, a servicemember’s PHA must be current—the assessment of any changes in health status must have occurred within the past year—for both female and male servicemembers. The results of these preventative screenings do not negatively affect this element of a servicemember’s readiness assessment even when follow-on studies, labs, referrals or additional visits may be pending or planned. Nonetheless, these screenings could identify a health issue that would be considered a deployment-limiting condition—a separate element of IMR—and could therefore limit readiness. 
For example, the results of a mammogram may identify cancer that requires treatment or specialized medical evaluations that could be determined to be a deployment-limiting condition. On the basis of our survey, we found that most routine female-specific health care services—including pelvic examinations, clinical breast examinations, pap smears, screening mammographies, prescription of contraceptives, and pregnancy tests—were available through the MTFs at the 27 domestic Army installations with a primary MTF. Screening mammography services were not available at two of these installations; however, in those instances, this service was available from a civilian network provider. We did not ask about the availability of male and female providers for pregnancy tests because senior health care officials stated that this service is not necessarily administered by health care providers. On the basis of our survey, the availability of specialized health care services varied by the type of primary MTF on the domestic Army installation; however, when services were not available at the installation, they were available through other MTFs or from a civilian network provider. Specifically, more types of specialized health care services were available on installations with a larger Army Medical Center as the primary MTF than at installations with a smaller Army Health Center/Clinic. For example, none of the installations where the primary MTF was an Army Health Center/Clinic offered surgical, medical, or radiation treatments for breast, ovarian, cervical, and uterine cancers, whereas some installations where the primary MTFs were Army Community Hospitals and Army Medical Centers did make these treatments available. (See table 1.) 
Additionally, both male and female providers were available to provide specialized female-specific services—such as treatment of abnormal pap smears, prenatal care, labor and delivery, benign gynecological disorders, and postpartum care—that were offered at the 27 domestic Army installations. In addition, when asked about the availability of other programs, officials from 25 of the 27 domestic Army installations we surveyed reported offering female-specific health care programs or activities, including female-specific groups for breast cancer, pregnancy education, pregnancy physical training, postpartum care, women’s clinics, and health care fairs. Five of the six installations that we visited reported having female-specific programs, such as breast cancer awareness activities, lactation consultants, a women’s clinic or health care team, an annual women’s health care fair, and a pregnancy physical training program. With respect to privacy for individuals, including female servicemembers, who seek care at domestic Army MTFs, Army MEDCOM officials noted that reasonable safeguards should be in place to limit incidental, and avoid prohibited, uses and disclosures of information. For example, cubicles, dividers, shields, curtains, or similar barriers should be used in areas where multiple patient-staff communications routinely occur. DOD provides space-planning criteria for health facilities specifying that private space be made available to counsel patients, including facilities for outpatient women’s health services. When asked to report on the challenges MTFs face in ensuring the physical privacy of female servicemembers, senior health care officials at most domestic Army installations (18 of 27) we surveyed did not report examples of any challenges. However, officials from the remaining nine installations cited two privacy-related challenges—the physical layout of the exam rooms and auditory issues. 
For example, officials from three installations reported that some exam rooms were configured such that some examination tables face the door. Officials from two of these installations reported the use of a privacy curtain to overcome this room limitation. Nonetheless, all of the female servicemembers that we interviewed at the six sites that we visited felt that adequate steps were taken to ensure their physical privacy during health care visits. Officials from another installation reported on the survey that the layout of a waiting room may allow for conversations at the reception desk to be overheard, which may compromise patient privacy. Additionally, 3 of the 39 female servicemembers that we interviewed stated that they had concerns regarding auditory privacy in the waiting or exam rooms. At three of the six installations that we visited, we observed clinics that had waiting areas with separate check-in bays, such as those for walk-in appointments, pharmacy, and laboratory tests. The separation of these check-in areas spread people out and provided more distance between those checking in and those sitting in the waiting rooms. Behavioral health illnesses affect both men and women, and with the exception of postpartum depression, are not easily distinguished by gender. Consequently, behavioral health services are not inherently gender-specific. Behavioral health services were provided in a variety of settings, such as through outpatient, inpatient, residential, and telebehavioral settings. We found in our survey that the availability of behavioral health services at domestic Army installations varied; however, when these services were not available at the installation, they were available from other sources, including from other MTFs or from civilian network providers. (See table 2.) All of the 27 domestic Army installations we surveyed offered individual and group outpatient treatments, and most (23 of 27) offered family outpatient treatment. 
If treatments were not available at the installation, they were offered from another MTF or a civilian network provider. About one-third (10 of 27) of domestic Army installations offered inpatient treatment, and fewer offered residential treatment (5 of 27). In addition to general behavioral health services, all of the domestic Army installations included in our survey offered some type of behavioral health services for substance abuse. (See table 3.) With regard to the availability of substance abuse treatment options, all 27 domestic Army installations we surveyed offered individual outpatient treatment. All but one domestic Army installation offered group outpatient treatment and more than a third (11 of 27) offered family outpatient treatment. Few domestic Army installations offered inpatient treatment (5 of 27) or residential treatment (4 of 27) for substance abuse. If these treatments were not available on the installations, they were available from another MTF, a civilian network provider, or both. As a way to increase access to behavioral health services, telebehavioral health services—medically supervised behavioral health treatment using secured two-way telecommunications technology to link patients from an originating site for treatment with providers who are at another site—can be used to connect servicemembers and behavioral health providers. This service was available at 22 of the 27 domestic installations we surveyed. Telebehavioral health services can be used to provide treatment to servicemembers in remote locations, where providers may not be readily available, and to ensure continuity of care for servicemembers who change duty stations. While behavioral health services are not inherently gender-specific, a number of Army installations we surveyed reported offering programs or activities that were specific to women. 
Officials from 18 of 27 domestic Army installations provided examples of female-specific behavioral health programs or activities, including a post-deployment group for female servicemembers, postpartum groups, and specific therapy groups for female servicemembers. Four of the six installations we visited reported having female-specific behavioral health programs or activities, such as postpartum, post-deployment, and general women’s support groups. The importance of female-specific groups was echoed by most (34 of 39) of the female servicemembers that we interviewed. These female servicemembers told us that there was a need for female-specific groups for certain topics, such as post-traumatic stress disorder (PTSD), postpartum depression, parenting, and general female servicemember issues. With respect to privacy when providing behavioral health services, when asked to report on the challenges MTFs face in ensuring the physical privacy of female servicemembers, officials from 17 of the 27 domestic Army installations that we surveyed did not report any challenges. Officials from the other 10 installations reported two main challenges to ensuring physical privacy—mixed-gender waiting rooms and concerns regarding auditory privacy in the waiting or exam rooms. Three of the 27 installations reported using white noise machines in an effort to help mask noise and address any potential auditory concerns. All of the female servicemembers we interviewed at the six installations that we visited felt that adequate steps were taken to ensure their physical privacy during behavioral health visits. The Women’s Health Research Interest Group, which is supported by the TriService Nursing Research Program, is currently in the process of identifying research gaps on health issues affecting female servicemembers. 
As part of this effort, the group is comparing a compiled list of existing research with data on health care issues for female servicemembers to determine whether there are any gaps in research. Interest group officials said that the goal is to develop a repository for peer-reviewed research articles related to health issues for female servicemembers, including those who served in combat, and to use this repository to identify research that could enhance the health care of female servicemembers, including those who have served in a combat zone. To ensure that researchers will have access to the results of their work, officials plan to distribute their results in presentations at local and national conferences. In addition, officials told us that they will disseminate their findings through peer-reviewed publications and post this information on the TriService Nursing Research Program website, which is available to the public. However, at the time of our review, only one DOD research organization that we spoke with was aware of their work. Specifically, an official from the Air Force Medical Support Agency told us that it was aware of the efforts by the Women’s Health Research Interest Group. In addition, other DOD research organizations told us that they would be interested in the results of this work even though they were not aware of it at the time of our discussion. While none of the other DOD research organizations that we spoke with are trying to identify gaps in research on female servicemembers, officials from each organization told us that they conduct research based on needs and capabilities. For example, one organization said that it reviews health care issues experienced during deployments and speaks with health care providers to determine what research is needed to better restore a servicemember’s ability to function. 
DOD research organizations said that while they focus their research on needs or capabilities, they consider gender in their research efforts. For example, officials from one division of the Army Medical Research and Materiel Command told us that when developing a research announcement based on genitourinary injuries sustained during deployments, they contacted the services to determine the type and extent of injuries encountered. At the time of the inquiry, only one female was reported by the services as having a significant genitourinary injury and this led to the development of an announcement that did not specifically mention females or males. While this announcement was not gender-specific, officials said that research proposals could include female servicemembers. In addition, officials from another division of the Army Medical Research and Materiel Command told us that when discussing proposed research to examine blood markers for PTSD, the original proposal did not include female servicemembers because researchers believed that female hormones would make detecting blood biomarkers for PTSD more difficult. Officials from Army Medical Research and Materiel Command found this justification for leaving out female servicemembers unsatisfactory so they required researchers to include both genders in this study. We provided a draft of this report to DOD for comment. DOD responded that it did not have any comments on the draft report. We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at williamsonr@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. 
Other major contributors to this report are listed in appendix IV. To describe the availability of routine, specialized, and behavioral health care services to female servicemembers at domestic Army installations and from other sources, we surveyed senior health care officials at the 27 domestic Army installations that had a primary military treatment facility (MTF). Through this survey, we collected information on the availability of these services to female servicemembers at installations to which more than two-thirds of the Army’s female servicemembers were attached as of August 1, 2012. In developing the survey, we conducted pretests to refine and validate the list of specific health care services available to female servicemembers in the Army and to check that (1) the terminology was used correctly; (2) the questionnaire did not place undue burden on agency officials; (3) the information could be feasibly obtained; and (4) the survey was complete and unbiased. We chose the four pretest sites to include at least one installation with a primary MTF that was a medical center, a community hospital, and a health center/clinic. We conducted one of the pretests with all GAO participants present in person and three pretests with some GAO participants in person and others participating by telephone. We made changes to the content and format of the survey on the basis of the feedback we received during the pretests. On August 20, 2012, Army Medical Command (MEDCOM) officials emailed the survey, as a Word document, to senior health care officials at the 27 domestic Army installations; respondents were asked to return it after marking checkboxes or entering responses in open-answer boxes. All surveys were returned by September 26, 2012, for a 100 percent response rate. We conducted follow-up with senior health care officials about missing or inconsistent responses, through Army MEDCOM officials, between September 2012 and December 2012. 
The survey is presented in appendix III. In addition to the contact named above, Bonnie Anderson, Assistant Director; Jennie Apter; Danielle Bernstein; Natalie Herzog; Ron La Due Lake; Amanda K. Miller; Lisa Motley; Mario Ramsey; and Laurie F. Thurber made key contributions to this report.
Female servicemembers are serving in more complex occupational specialties and are being deployed to combat operations, potentially leading to increased health risks. Similar to their male counterparts, female servicemembers must maintain their medical readiness; however, they have unique health care needs that require access to gender-specific services. The National Defense Authorization Act for Fiscal Year 2012 directed GAO to review a variety of issues related to health care for female servicemembers. This report describes (1) the extent that DOD's policies for assessing individual medical readiness include unique health care issues of female servicemembers; (2) the availability of health care services to meet the unique needs of female servicemembers at domestic Army installations; and (3) the extent that DOD's research organizations have identified a need for research on the specific health care needs of female servicemembers who have served in combat. GAO reviewed DOD and military-service policies on individual medical readiness and surveyed senior health care officials about the availability of specific health services at the 27 domestic Army installations with MTFs that report directly to the domestic regional medical commands. GAO focused on the Army because it has more female servicemembers than the other military services. GAO also visited six Army installations--two from each of the Army's three domestic regional medical commands--and interviewed DOD officials who conduct research on health issues for servicemembers. The Department of Defense's (DOD) policy for assessing the individual medical readiness of a servicemember to deploy establishes six elements to review, most of which are gender-neutral. Four of the six elements--immunization status, medical readiness laboratory tests, individual medical equipment, and dental readiness--apply equally to female and male servicemembers. 
The remaining elements of individual medical readiness--deployment-limiting conditions and periodic health assessments--include aspects that are specific to female servicemembers. For example, the Army, Navy, Air Force, and Marine Corps have policies that define pregnancy as a deployment-limiting condition. Officials surveyed by GAO reported that female-specific health care services and behavioral health services were generally available through domestic Army installations. Specifically, according to GAO's survey results: Most routine female-specific health care services--pelvic examinations, clinical breast examinations, pap smears, prescription of contraceptives, and pregnancy tests--were available at the 27 surveyed domestic Army installations. The availability of specialized health care services--treatment of abnormal pap smears, prenatal care, labor and delivery, benign gynecological disorders, postpartum care, and surgical, medical, and radiation treatment of breast, ovarian, cervical, and uterine cancers--at the 27 surveyed domestic Army installations varied. However, when these services were not available at the installation, they could be obtained through either another military treatment facility (MTF) or from a civilian network provider. The availability of behavioral health services, such as psychotherapy or substance abuse treatment, which were not gender-specific, varied across the 27 domestic Army installations; however, similar to specialty care, these services could be obtained from other MTFs or civilian network providers. In addition, 18 of the 27 surveyed Army installations reported offering female-specific programs or activities, such as a post-deployment group for female servicemembers or a postpartum group. One DOD organization, the Women's Health Research Interest Group, is currently in the process of identifying research gaps on health issues affecting female servicemembers. 
Interest group officials said that the goal is to develop a repository for peer-reviewed research articles related to health issues for female servicemembers, including those who served in combat, and to use this repository to identify research that could enhance the health care of female servicemembers, including those who have served in a combat zone. To ensure that researchers will have access to the results of their work, officials have plans to distribute their results in presentations at local and national conferences. In addition, officials told GAO that they will disseminate their findings through peer-reviewed publications and post this information on the Internet to make it available to the public. GAO provided a draft of this report to DOD for comment. DOD responded that it did not have any comments on the draft report.
FPS assesses risk and recommends countermeasures to GSA and tenant agencies; however, FPS’s ability to influence the allocation of resources using risk management is limited because resource allocation decisions are the responsibility of GSA and tenant agencies, which may be unwilling to fund the countermeasures FPS recommends. We have found that under the current risk management approach, the security equipment that FPS recommends and is responsible for acquiring, installing, and maintaining may not be implemented if tenant agencies are unwilling to fund it. For example, in August 2007 FPS recommended a security equipment countermeasure—the upgrade of a surveillance system shared by two high-security locations that, according to FPS officials, would cost around $650,000. While members of one building security committee (BSC) told us they approved spending between $350,000 and $375,000 to fund their agencies’ share of the countermeasure, they said that the BSC of the other location would not approve funding; therefore, FPS could not upgrade the system it had recommended. In November 2008 FPS officials told us that they were moving ahead with the project by drawing on unexpended revenues from the two locations’ building-specific fees and the funding that was approved by one of the BSCs. Furthermore, FPS officials, in May 2009, told us that all cameras had been repaired and all monitoring and recording devices had been replaced, and that the two BSCs had approved additional upgrades and that FPS was implementing them. As we reported in June 2008, we have found other instances in which recommended security countermeasures were not implemented at some of the buildings we visited because BSC members could not agree on which countermeasures to implement or were unable to obtain funding from their agencies. 
Compounding this situation, FPS takes a building-by-building approach to risk management, using an outdated risk assessment tool to create building security assessments (BSA), rather than taking a more comprehensive, strategic approach and assessing risks among all buildings in GSA’s inventory and recommending countermeasure priorities to GSA and tenant agencies. As a result, the current approach provides less assurance that the most critical risks at federal buildings across the country are being prioritized and mitigated. Also, GSA and tenant agencies have concerns about the quality and timeliness of FPS’s risk assessment services and are taking steps to obtain their own risk assessments. For example, GSA officials told us they have had difficulties receiving timely risk assessments from FPS for space GSA is considering leasing. These risk assessments must be completed before GSA can take possession of the property and lease it to tenant agencies. An inefficient risk assessment process for new lease projects can add costs for GSA and create problems for both GSA and tenant agencies that have been planning for a move. Therefore, GSA is updating a risk assessment tool that it began developing in 1998, but has not recently used, to better ensure the timeliness and comprehensiveness of these risk assessments. GSA officials told us that in the future they may use this tool for other physical security activities, such as conducting other types of risk assessments and determining security countermeasures for new facilities. Additionally, although tenant agencies have typically taken responsibility for assessing risk and securing the interior of their buildings, assessing exterior risks will require additional expertise and resources. This is an inefficient approach considering that tenant agencies are paying FPS to assess building security. 
Finally, FPS continues to struggle with funding challenges that impede its ability to allocate resources to more effectively manage risk. FPS faces challenges in ensuring that its fee-based funding structure accounts for the varying levels of risk and types of services provided at federal facilities. FPS funds its operations through security fees charged to tenant agencies. However, FPS’s basic security fee, which funds most of its operations, does not account for the risk faced by specific buildings, the level of service provided, or the cost of providing services, raising questions about equity. FPS charges federal agencies the same basic security fee regardless of the perceived threat to a particular building or agency. In fiscal year 2009, FPS charged 66 cents per square foot for basic security. Although FPS categorizes buildings according to security levels based on its assessment of each building’s risk and size, this assessment does not affect the security fee FPS charges. For example, level I facilities typically face less risk because they are generally small storefront-type operations with a low level of public contact, such as a Social Security Administration office. However, these facilities are charged the same basic security fee of 66 cents per square foot as a level IV facility that has a high volume of public contact and may contain high-risk law enforcement and intelligence agencies and highly sensitive government records. We also have reported that basing government fees on the cost of providing a service promotes equity, especially when the cost of providing the service differs significantly among different users, as is the case with FPS. 
In our June 2008 report, we recommended that FPS improve its use of the fee-based system by developing a method to accurately account for the cost of providing security services to tenant agencies and ensuring that its fee structure takes into consideration the varying levels of risk and service provided at GSA facilities. We also recommended an evaluation of whether FPS’s current use of a fee-based system or an alternative funding mechanism is the most appropriate manner to fund the agency. While DHS agreed with these recommendations, FPS has not fully implemented them. FPS does not have a strategic human capital plan to guide its current and future workforce planning efforts, including effective processes for training, retention, and staff development. Instead, FPS has developed a short-term hiring plan that does not include key human capital principles, such as determining an agency’s optimum staffing needs. Moreover, FPS has been transitioning to an inspector-based workforce, thus eliminating the police officer position and relying primarily on FPS inspectors for both law enforcement and physical security activities. FPS believes that this change will ensure that its staff has the right mix of technical skills and training needed to accomplish its mission. However, FPS’s ability to provide law enforcement services under its inspector-based workforce approach may be diminished because FPS will rely on its inspectors to provide these services and physical security services simultaneously. In the absence of a strategic human capital plan, it is difficult to discern how effective an inspector-based workforce approach will be. The lack of a human capital plan has also contributed to inconsistent approaches in how FPS regions and headquarters are managing human capital activities. 
For example, FPS officials in some of the regions we visited said they implement their own procedures for managing their workforce, including processes for performance feedback, training, and mentoring. Additionally, FPS does not collect data on its workforce’s knowledge, skills, and abilities. These elements are necessary for successful workforce planning activities, such as identifying and filling skill gaps and succession planning. We recently recommended that FPS improve how it collects data on its workforce’s knowledge, skills, and abilities to help it better manage and understand current and future workforce needs; and use these data in the development and implementation of a long-term strategic human capital plan that addresses key principles for effective strategic workforce planning. DHS concurred with our recommendations. Furthermore, FPS did not meet its fiscal year 2008 mandated deadline of increasing its staffing level to no fewer than 1,200 full-time employees by July 31, 2008, and instead met this staffing level in April 2009. FPS’s staff has steadily declined since 2004 and critical law enforcement services have been reduced or eliminated. For example, FPS has eliminated its use of proactive patrol to prevent or detect criminal violations at many GSA buildings. According to some FPS officials at regions we visited, not providing proactive patrol has limited its law enforcement personnel to a reactive force. Additionally, officials stated that in the past, proactive patrol permitted its police officers and inspectors to identify and apprehend individuals that were surveilling GSA buildings. In contrast, when FPS is not able to patrol federal buildings, there is increased potential for illegal entry and other criminal activity. In one city we visited, a deceased individual had been found in a vacant GSA facility that was not regularly patrolled by FPS. 
FPS officials stated that the deceased individual had been inside the building for approximately 3 months. FPS does not fully ensure that its contract security guards have the training and certifications required to be deployed to a GSA building. We have noted that the effectiveness of a risk management approach depends on the involvement of experienced and professional security personnel, and that the chances of omitting major steps in the risk management process increase if personnel are not well trained in applying risk management. FPS requires that all prospective guards complete about 128 hours of training, including 8 hours of X-ray and magnetometer training. However, in one region, FPS has not provided the X-ray or magnetometer training to its 1,500 guards since 2004. Nonetheless, these guards are assigned to posts at GSA buildings. X-ray training is critical because guards control access points at buildings. Insufficient X-ray and magnetometer training may have contributed to several incidents at GSA buildings in which guards were negligent in carrying out their responsibilities. For example, at a level IV federal facility in a major metropolitan area, an infant in a carrier was sent through an X-ray machine due to a guard’s negligence. Specifically, according to an FPS official in that region, a woman with her infant in a carrier attempted to enter the facility, which has child care services. While retrieving her identification, the woman placed the carrier on the X-ray machine. Because the guard was not paying attention and the machine’s safety features had been disabled, the infant in the carrier was sent through the X-ray machine. FPS investigated the incident and dismissed the guard; however, the guard subsequently sued FPS for not providing the required X-ray training. The guard won the suit because FPS could not produce any documentation to show that the guard had received the training, according to an FPS official. 
In addition, FPS officials from that region could not tell us whether the X-ray machine’s safety features had been repaired. Additionally, we found that FPS does not have a fully reliable system for monitoring and verifying guard training and certification requirements. We reviewed 663 randomly selected guard records and found that 62 percent of the guards had at least one expired certification, including a declaration that guards have not been convicted of domestic violence, a conviction that would make them ineligible to carry firearms. We also found that some guards were not provided building-specific training, such as what actions to take during a building evacuation or a building emergency. This lack of training may have contributed to several incidents where guards neglected their assigned responsibilities. For example, at a level IV facility, the guards did not follow evacuation procedures and left two access points unattended, thereby leaving the facility vulnerable; at a level IV facility, the guard allowed employees to enter the building while an incident involving suspicious packages was being investigated; and, at a level III facility, the guard allowed employees to access the area affected by a suspicious package, which was required to be evacuated. FPS has limited assurance that its guards are complying with post orders. It does not have specific national guidance on when and how guard inspections should be performed. FPS’s inspections of guard posts at GSA buildings were inconsistent and varied in quality across the six regions we examined. We also found that guard inspections are typically completed by FPS during regular business hours and in locations where FPS has a field office, and seldom on nights or weekends. However, on an occasion when FPS officials conducted a post inspection at night, they found a guard asleep at his post after taking a prescription pain-killer. 
FPS also found other incidents at high-security facilities where guards neglected or inadequately performed their assigned responsibilities. For example, a guard failed to recognize or did not properly X-ray a box containing handguns at the loading dock at a facility. FPS became aware of the situation because the handguns were delivered to FPS. Because guards were not properly trained and did not comply with post orders, our investigators—with the components for an improvised explosive device (IED) concealed on their persons—passed undetected through access points controlled by FPS guards at 10 of 10 level IV facilities in four major cities where we conducted covert tests. The specific components for this device, items used to conceal the device components, and the methods of concealment that we used during our covert testing are classified, and thus are not discussed in this testimony. Of the 10 level IV facilities our investigators penetrated, 8 were government owned and 2 were leased facilities. The facilities included district offices of a U.S. Senator and a U.S. Representative as well as agencies of the Departments of Homeland Security, Transportation, Health and Human Services, Justice, State, and others. The two leased facilities did not have any guards at the access control points at the time of our testing. Using publicly available information, our investigators identified a type of device that a terrorist could use to cause damage to a federal facility and threaten the safety of federal workers and the general public. The device was an IED made up of two parts—a liquid explosive and a low-yield detonator—and included a variety of materials not typically brought into a federal facility by employees or the public. Although the detonator itself could function as an IED, investigators determined that it could also be used to set off a liquid explosive and cause significantly more damage. 
To ensure safety during this testing, we took precautions so that the IED would not explode. For example, we lowered the concentration level of the material. To gain entry into each of the 10 level IV facilities, our investigators showed a photo identification (a state driver’s license) and walked through the magnetometers without incident. Our investigators also placed their briefcases with the IED material on the conveyor belt of the X-ray machine, but the guards detected nothing. Furthermore, our investigators did not receive any secondary searches from the guards that might have revealed the IED material that they brought into the facilities. At security checkpoints at 3 of the 10 facilities, our investigators noticed that the guard was not looking at the X-ray screen as some of the IED components passed through the machine. A guard questioned an item in the briefcase at one of the 10 facilities but the materials were subsequently allowed through the X-ray machines. At each facility, once past the guard screening checkpoint, our investigators proceeded to a restroom and assembled the IED. At some of the facilities, the restrooms were locked. Our investigators gained access by asking employees to let them in. With the IED completely assembled in a briefcase, our investigators walked freely around several floors of the facilities and into various executive and legislative branch offices, as described above. Leveraging technology is a key practice over which FPS has somewhat more control, but FPS does not have a comprehensive approach for identifying, acquiring, and assessing the cost-effectiveness of the security equipment that its inspectors recommend. Individual FPS inspectors have considerable latitude in determining which technologies and other countermeasures to recommend, but the inspectors receive little training and guidance in how to assess the relative cost-effectiveness of these technologies or determine the expected return on investment. 
FPS officials told us that inspectors make technology decisions based on the initial training they receive, personal knowledge and experience, and contacts with vendors. FPS inspectors receive some training in identifying and recommending security technologies as part of their initial FPS physical security training. Since FPS was transferred to DHS in 2003, its refresher training program for inspectors has primarily focused on law enforcement. Consequently, inspectors lack recurring technology training. Additionally, FPS does not provide inspectors with specialized guidance and standards for cost-effectively selecting technology. In the absence of specific guidance, inspectors follow the Department of Justice minimum countermeasure standards and other relevant Interagency Security Committee standards, but these standards do not assist users in selecting cost-effective technologies. Moreover, the document that FPS uses to convey its countermeasure recommendations to GSA and tenant agencies—the BSA executive summary—includes cost estimates but no analysis of alternatives. As a result, GSA and tenant agencies have limited assurance that the investments in technologies and other countermeasures that FPS inspectors recommend are cost-effective, consistent across buildings, and the best available alternatives. For example, at one location we visited, an explosives detection dog was used to screen mail that is distributed elsewhere. In 2006, FPS had recommended, based on the results of its risk analysis, the use of this dog and an X-ray machine, although at the time of our visit only the dog was being used. Moreover, the dog and handler work 12-hour shifts Monday through Friday when most mail is delivered and shipped, and the dog needs a break every 7 minutes. 
The GSA regional security officials we spoke with questioned whether this approach was more effective and efficient than using an on-site enhanced X-ray machine that could detect biological and chemical agents as well as explosives and could be used anytime. In accordance with its policies, FPS conducted a BSA of the site in 2008 and determined that using an enhanced X-ray machine and an explosives detection dog would bring the projected threat rating of the site down from moderate to low. FPS included estimated one-time installation and recurring costs in the BSA and executive summary, but did not include the estimated cost and risk of the following mail screening options: (1) usage of the dog and the additional countermeasure; (2) usage of the additional countermeasure only; and (3) usage of the dog only. Consequently, tenant agency representatives would have to investigate the cost and risk implications of these options on their own to make an informed resource allocation decision. It is critical that FPS—as the provider of law enforcement and related security services for GSA buildings—and GSA—as the manager of these properties—have well-established lines of communication with each other and with tenant agencies to ensure that all parties are aware of the ever-changing risks in a dynamic threat environment and that FPS and GSA are taking appropriate actions to reduce vulnerabilities. While FPS and GSA top management have established communication channels, the types of information shared at the regional and building levels are inconsistent, and overall, FPS and GSA disagree over what information should be shared. For example, the memorandum of agreement between DHS and GSA specifies that FPS will provide quarterly briefings at the regional level, but FPS had not been providing them consistently across all regions. 
FPS resumed the practice in October 2008; however, GSA security officials said that these briefings mostly focused on crime statistics and did not constitute comprehensive threat analyses. Additionally, FPS is only required to meet formally with GSA property managers and tenant agencies as part of the BSA process—an event that occurs every 2 to 5 years, depending on a building’s security level. We identified information sharing gaps at several level III and IV sites that we visited, and found that in some cases these deficiencies led to decreased security awareness and increased risk. At one location, we observed during our interview with the BSC that the committee members were confused about procedures for screening visitors who are passengers in employees’ cars that enter the building via the parking garage. One of the tenants recounted an incident in which a security guard directed the visitor to walk through the garage to an appropriate screening station. According to the GSA property manager, this action created a safety hazard. The GSA property manager knew the appropriate screening procedure, but told us there was no written policy on the procedure that members could access. Additionally, BSC members told us that the committee met as needed. At one location, FPS had received inaccurate square footage data from GSA and had therefore overcharged the primary tenant agency for a guard post that protected space shared by all the tenants. According to the GSA property manager, once GSA was made aware of the problem, the agency obtained updated information and worked with the tenant agencies to develop a cost-sharing plan for the guard post, which made the primary tenant agency’s security expenses somewhat more equitable. BSC members told us that the committee met regularly. At one location, members of a BSC told us that they met as needed, although even when they hold meetings, one of the main tenant agencies typically does not participate. 
GSA officials commented that this tenant adheres to its agency’s building security protocols and does not necessarily follow GSA’s tenant policies and procedures, which GSA thinks creates security risks for the entire building. At one location, tenant agency representatives and officials from FPS told us they met regularly, but GSA officials told us they were not invited to these meetings. GSA officials at this location told us that they invite FPS to their property management meetings for that location, but FPS does not attend. GSA officials also said they do not receive timely incident information for the site from FPS and suggested that increased communication among the agencies would help them be more effective managers of their properties and provide tenants with better customer service. At one location, GSA undertook a major renovation project beginning in April 2007. FPS, GSA, and tenant agency representatives did not all meet together regularly to make security preparations or manage security operations during construction. FPS officials told us they had not been invited to project meetings, although GSA officials told us that they had invited FPS and that FPS attended some meetings. In May 2008, FPS discovered that specific surveillance equipment had been removed. As of May 2009, FPS officials told us they did not know who had removed the equipment and were working with tenant agency representatives to recover it. However, in June 2009 tenant agency representatives told us that they believed FPS was fully aware that the equipment had been removed in December 2007. Additionally, we conducted a survey of GSA tenant agencies and found that they had mixed views about some of the services they pay FPS to provide. 
Notably, the survey results indicated that the roles and responsibilities of FPS and tenant agencies are unclear: on average, about one-third of tenant agencies could not comment on how satisfied or dissatisfied they were with FPS’s communication about its services, partly because they had little to no interaction with FPS officers. Although FPS plans to implement education and outreach initiatives to improve customer service to tenant agencies, it will face challenges because of its lack of complete and accurate contact data. During the course of our review, we found that approximately 53 percent of the e-mail addresses and 27 percent of the telephone numbers for designated points of contact were missing from FPS’s contact database and the database required a substantial amount of revising. Complete and accurate contact information for FPS’s customers is critical for information sharing and an essential component of any customer service initiative. Therefore, to improve its services to GSA and tenant agencies, we recommended that FPS collect and maintain an accurate and comprehensive list of all facility-designated points of contact, as well as a system for regularly updating this list; and develop and implement a program for education and outreach to GSA and tenant agencies to ensure they are aware of the current roles, responsibilities, and services provided by FPS. DHS concurred with our recommendations. Furthermore, while FPS and GSA acknowledge that the two organizations are partners in protecting and securing GSA buildings, FPS and GSA fundamentally disagree over how much of the information in the BSA should be shared. Per the memorandum of agreement, FPS is required to share the BSA executive summary with GSA, and FPS believes that this document contains sufficient information for GSA to make decisions about purchasing and implementing FPS’s recommended countermeasures. 
However, GSA officials at all levels cite limitations with the BSA executive summary saying, for example, that it does not contain enough contextual information on threats and vulnerabilities to support FPS’s countermeasure recommendations and justify the expenses that GSA and tenant agencies would incur by installing additional countermeasures. Moreover, GSA security officials told us that FPS does not consistently share BSA executive summaries across all regions. Instead, GSA wants to receive BSAs in their entirety so that it can better protect GSA buildings and the tenants who occupy them. According to GSA, building protection functions are an integral part of its property preservation, operation, and management responsibilities. In a post-September 11th era, it is crucial that federal agencies work together to share information to advance homeland security and critical infrastructure protection efforts. Information is a vital tool in fighting terrorism, and the timely dissemination of that information to the appropriate government agency is absolutely critical to maintaining the security of our nation. The ability to share security-related information can unify the efforts of federal agencies in preventing or minimizing terrorist attacks. However, in the absence of comprehensive information-sharing plans, many aspects of homeland security information sharing can be ineffective and fragmented. In 2005, we designated information sharing for homeland security as a governmentwide high-risk area because of the significant challenges faced in this area—challenges that are still evident today. It is critical that FPS and GSA—which both have protection functions for GSA buildings, their occupants, and those who visit them— reach consensus on sharing information in a timely manner to support homeland security and critical infrastructure protection efforts. 
We recently recommended that FPS reach consensus with GSA on what information contained in the BSA is needed for GSA to fulfill its responsibilities related to the protection of federal buildings and occupants, and accordingly, establish internal controls to ensure that shared information is adequately safeguarded; guidance for employees to use in deciding what information to protect with sensitive but unclassified designations; provisions for training on making designations, controlling, and sharing such information with GSA and other entities; and a review process to evaluate how well this information sharing process is working, with results reported to the Secretary of Homeland Security. While DHS concurred with this recommendation, we are concerned that the steps it described in its response were not comprehensive enough to address the intent of the recommendation. For example, DHS did not explicitly commit to reaching consensus with GSA in identifying building security information that can be shared, or to the steps we outlined in our recommendation—steps that in our view comprise a comprehensive plan for sharing and safeguarding sensitive information. Therefore, it is important that FPS engage GSA in identifying what building security information can be shared and follow the information sharing and safeguarding steps we included in our recommendation to ensure that GSA acquires the information it needs to protect the 9,000 buildings under its control and custody, the federal employees who work in them, and those who visit them. We have reported that FPS is limited in its ability to assess the effectiveness of its efforts to protect GSA buildings. To determine how well it is accomplishing its mission to protect GSA buildings, FPS has identified some output measures that are a part of the Office of Management and Budget’s Performance Assessment Rating Tool. 
These measures include determining whether security countermeasures have been deployed and are fully operational, the amount of time it takes to respond to an incident, and the percentage of BSAs completed on time. Some of these measures are also included in FPS’s federal facilities security index, which is used to assess its performance. However, FPS has not developed outcome measures to evaluate the net effect of its efforts to protect GSA buildings. While output measures are helpful, outcome measures are also important because they can provide FPS with broader information on program results, such as the extent to which its decision to move to an inspector-based workforce will enhance security at GSA facilities or help identify the security gaps that remain at GSA facilities and determine what action may be needed to address them. In addition, FPS does not have a reliable data management system that will allow it to accurately track these measures or other important measures such as the number of crimes and other incidents occurring at GSA facilities. Without such a system, it is difficult for FPS to evaluate and improve the effectiveness of its efforts to protect federal employees and facilities, allocate its limited resources, or make informed risk management decisions. For example, weaknesses in one of FPS’s countermeasure tracking systems make it difficult to accurately track the implementation status of recommended countermeasures such as security cameras and X-ray machines. Without this ability, FPS has difficulty determining whether it has mitigated the risk of GSA facilities to crime or a terrorist attack. FPS is taking some steps in each of the key practice areas to improve its ability to better protect GSA buildings. Additionally, GAO has recommended that FPS implement specific actions to promote greater usage of key protection practices and otherwise improve security. 
However, FPS has not completed many related corrective actions and FPS faces implementation challenges as well. FPS is developing the Risk Assessment and Management Program (RAMP), which could enhance its approach to assessing risk, managing human capital, and measuring performance. With regard to improving the effectiveness of FPS’s risk management approach and the quality of BSAs, FPS believes RAMP will provide inspectors with the information needed to make more informed and defensible recommendations for security countermeasures. FPS also anticipates that RAMP will allow inspectors to obtain information from one electronic source, generate reports automatically, enable FPS to track selected countermeasures throughout their life cycle, address some concerns about the subjectivity inherent in BSAs, and reduce the amount of time inspectors and managers spend on administrative work. Additionally, FPS is designing RAMP so that it will produce risk assessments that are compliant with Interagency Security Committee standards, compatible with the risk management framework set forth by the National Infrastructure Protection Plan, and consistent with the business processes outlined in the memorandum of agreement with GSA. According to FPS, RAMP will support all components of the BSA process, including gathering and reviewing building information; conducting and recording interviews; assessing threats, vulnerabilities, and consequences to develop a detailed risk profile; recommending appropriate countermeasures; and producing BSA reports. FPS also plans to use RAMP to track and analyze certain workforce data, contract guard program data, and other performance data such as the types and definitions of incidents and incident response times. Although FPS intends for RAMP to improve its approach to risk assessment, human capital management, and performance measurement, it is not clear that FPS has fully addressed some implementation issues. 
For example, one issue concerns the accuracy and reliability of the information that will be entered into RAMP. According to FPS, the agency plans to transfer data from several of its legacy systems, including the Contract Guard Employment Requirements Tracking System (CERTS), into RAMP. In July 2009, we testified on the accuracy and reliability issues associated with CERTS. FPS subsequently conducted an audit of CERTS to determine the status of its guard training and certification. However, the results of the audit showed that FPS was able to verify the status of only about 7,600 of its 15,000 guards. According to an FPS official, one of its regions did not meet the deadline for submitting data to headquarters because its data were not accurate or reliable, and therefore about 1,500 guards were not included in the audit. FPS could not explain why it was unable to verify the status of the remaining 5,900 guards. In 2008, we recommended that FPS develop and implement specific guidelines and standards for measuring its performance and improve how it categorizes, collects, and analyzes data to help it better manage and understand the results of its efforts to protect GSA facilities; DHS concurred with our recommendations. RAMP could be the vehicle through which FPS implements these recommendations, but the use of inaccurate and unreliable data will hamper performance measurement efforts. Furthermore, it is unclear whether FPS will meet the implementation goals established in the program’s proposed timeline. FPS began designing RAMP in early 2007 and expects to implement the program in three phases, completing its implementation by the end of fiscal year 2011. However, in June 2008, we reported that FPS was going to implement a pilot version of RAMP in fiscal year 2009, but in May 2009, FPS officials told us they intend to implement the first phase in the beginning of fiscal year 2010. 
Until RAMP components are fully implemented, FPS will continue to rely on its current risk assessment tool, methodology, and process, potentially leaving GSA and tenant agencies dissatisfied. Additionally, FPS will continue to rely on its disparate workforce data management systems and CERTS or localized databases that have proven to be inaccurate and unreliable. We recently recommended that FPS provide the Secretary of Homeland Security with regular updates on the status of RAMP including the implementation status of deliverables, clear timelines for completion of tasks and milestones, and plans for addressing any implementation obstacles. DHS concurred with our recommendation and stated that FPS will submit a monthly report to the Secretary. FPS took a number of immediate actions with respect to contract guard management in response to our covert testing. For example, in July 2009, the Director of FPS instructed Regional Directors to accelerate the implementation of FPS’s requirement that two guard posts at level IV facilities be inspected weekly. FPS, in July 2009, also required more X-ray and magnetometer training for inspectors and guards. For example, FPS has recently issued an information bulletin to all inspectors and guards to provide them with information about package screening, including examples of disguised items that may not be detected by magnetometers or X-ray equipment. Moreover, FPS produced a 15-minute training video designed to provide information on bomb component detection. According to FPS, each guard was required to read the information bulletin and watch the video within 30 days. Despite the steps FPS has taken, there are a number of factors that will make implementing and sustaining these actions difficult. First, FPS does not have adequate controls to monitor and track whether its 11 regions are completing these new requirements. Thus, FPS cannot say with certainty that these requirements are being met. 
According to an FPS regional official, implementing the new requirements may present a number of challenges, in part, because new directives appear to be based primarily on what works well from a headquarters or National Capital Region perspective, and not a regional perspective that reflects local conditions and limitations in staffing resources. In addition, another regional official estimated that his region is meeting about 10 percent of the required oversight hours, and officials in another region said they are struggling to monitor the delivery of contractor-provided training in the region. Second, FPS has not completed any workforce analysis to determine if its current staff of about 930 law enforcement security officers will be able to effectively complete the additional inspections and provide the X-ray and magnetometer training to 15,000 guards, in addition to their current physical security and law enforcement responsibilities. According to the Director of FPS, while having more resources would help address the weaknesses in the guard program, the additional resources would have to be trained and thus could not be deployed immediately. FPS is also taking steps to implement a more systematic approach to technology acquisition by developing a National Countermeasures Program, which could help FPS leverage technology more cost-effectively. According to FPS, the program will establish standards and national procurement contracts for security equipment, including X-ray machines, magnetometers, surveillance systems, and intrusion detection systems. FPS officials told us that instead of having inspectors search for vendors to establish equipment acquisition, installation, and maintenance contracts, inspectors will call an FPS mission support center with their countermeasure recommendations and the center will procure the services through standardized contracts. According to FPS, the program will also include life-cycle management plans for countermeasures. 
FPS officials said they established an X-ray machine contract and that future program contracts will also explore the use of the schedule as a source for national purchase and service contracts. According to FPS, the National Countermeasures Program should provide the agency with a framework to better manage its security equipment inventory; meet its operational requirement to identify, implement, and maintain security equipment; and respond to stakeholders’ needs by establishing nationwide resources, streamlining procurement procedures, and strengthening communications with its customers. FPS officials told us they believe this program will result in increased efficiencies because inspectors will not have to spend their time facilitating the establishment of contracts for security equipment because these contracts will be standardized nationwide. Although the National Countermeasures Program includes improvements that may enhance FPS’s ability to leverage technology, it does not establish tools for assessing the cost-effectiveness of competing technologies and countermeasures and implementation has been delayed. Security professionals are faced with a multitude of technology options offered by private vendors, including advanced intrusion detection systems, biotechnology options for screening people, and sophisticated video monitoring. Having tools and guidance to determine which technologies most cost-effectively address identified vulnerabilities is a central component of the leveraging technology key practice. FPS officials told us that the National Countermeasures Program will enable inspectors to develop countermeasure cost estimates that can be shared with GSA and tenant agencies. However, incorporating a tool for evaluating the cost-effectiveness of alternative technologies into FPS’s planned improvements in the security acquisition area would represent an enhanced application of this key practice. 
Therefore, we recently recommended that FPS develop a methodology and guidance for assessing and comparing the cost-effectiveness of technology alternatives, and DHS concurred with our recommendation. Another concern is that FPS had planned to implement the program throughout fiscal year 2009, but extended implementation into fiscal year 2010; thus, it is not clear whether FPS will meet the program’s milestones in accordance with updated timelines. Until the National Countermeasures Program is fully implemented, FPS will continue to rely on individual inspectors to make technology decisions. For example, FPS had anticipated that the X-ray machine and magnetometer contracts would be awarded by December 2008, and that contracts for surveillance and intrusion detection systems would be awarded during fiscal year 2009. In May 2009, FPS officials told us that the X-ray machine contract was awarded on April 30, 2009, and that they anticipated awarding the magnetometer contract in the fourth quarter of fiscal year 2009 and an electronic security services contract for surveillance and intrusion detection systems during the second quarter of fiscal year 2010. We recently recommended that FPS provide the Secretary of Homeland Security with regular updates on the status of the National Countermeasures Program, including the implementation status of deliverables, clear timelines for completion of tasks and milestones, and plans for addressing any implementation obstacles. DHS concurred with this recommendation and stated that FPS will submit a monthly report to the Secretary. Finally, as we stated at the outset, the protection of federal real property has been and continues to be a major concern. 
Therefore, we have used our key protection practices as criteria to evaluate the security efforts of other departments, agencies, and entities and have made recommendations to promote greater usage of key practices in ensuring the security of public spaces and of those who work at and visit them. For example, we have examined how DHS and the Smithsonian Institution secure their assets and identified challenges. Most recently, we evaluated the National Park Service’s (Park Service) approach to national icon and park protection. We found that although the Park Service has implemented a range of security program improvements in recent years that reflected some aspects of key practices, there were also limitations. Specifically, the Park Service (1) does not manage risk servicewide or ensure the best return on security technology investments; (2) lacks a servicewide approach to sharing information internally and measuring performance; and (3) lacks clearly defined security roles and a security training curriculum. With millions of people visiting the nation’s nearly 400 park units annually, ensuring their security and the protection of our national treasures is paramount. More emphasis on the key practices would provide greater assurance that Park Service assets are well protected and that Park Service resources are being used efficiently to improve protection. FPS faces challenges that are similar, in many respects, to those that agencies across the government are facing. Our key practices provide a framework for assessing and improving protection practices, and in fact, the Interagency Security Committee is using our key facility protection practices as key management practices to guide its priorities and work activities. For example, the committee established subcommittees for technology best practices and training, and working groups in the areas of performance measures and strategic human capital management. 
The committee also issued performance measurement guidance in 2009. Without greater attention to key protection practices, FPS will be ill-equipped to efficiently and effectively fulfill its responsibilities of assessing risk, strategically managing its workforce and contract guard program, recommending countermeasures, sharing information and coordinating with GSA and tenant agencies to secure GSA buildings, and measuring and testing its performance as the security landscape changes and new threats emerge. Furthermore, implementing our specific recommendations related to areas such as human capital and risk management will be critical steps in the right direction. Overall, following this framework—adhering to key practices and implementing recommendations in specific areas—would enhance FPS’s chances for future success and could position FPS to become a leader and benchmark agency for facility protection in the federal government. Mr. Chairman, this concludes our testimony. We are pleased to answer any questions you might have. For further information on this testimony, please contact Mark L. Goldstein at (202) 512-2834 or by e-mail at goldsteinm@gao.gov. Individuals making key contributions to this testimony include Tammy Conquest, John Cooney, Elizabeth Eisenstadt, Brandon Haller, Denise McCabe, David Sausville, and Susan Michal-Smith. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Federal Protective Service (FPS) within the Department of Homeland Security (DHS) is responsible for providing law enforcement and related security services for nearly 9,000 federal facilities under the control and custody of the General Services Administration (GSA). In 2004, GAO identified a set of key protection practices from the collective practices of federal agencies and the private sector, which included allocation of resources using risk management, strategic management of human capital, leveraging of technology, information sharing and coordination, and performance measurement and testing. This testimony is based on past reports and testimonies and discusses (1) limitations FPS faces in protecting GSA buildings and resulting vulnerabilities; and (2) actions FPS is taking. To perform this work, GAO used its key practices as criteria, visited a number of GSA buildings, surveyed tenant agencies, analyzed pertinent laws and DHS and GSA documents, conducted covert testing at 10 judgmentally selected high-security buildings in four cities, and interviewed officials from DHS, GSA, and tenant agencies, as well as contractors and guards. FPS's approach to securing GSA buildings reflects some aspects of key protection practices; however, GAO found limitations in each area and identified vulnerabilities. More specifically: (1) FPS faces obstacles in allocating resources using risk management. FPS uses an outdated risk assessment tool and a subjective, time-consuming process to assess risk. In addition, resource allocation decisions are the responsibility of GSA and tenant agencies. This leads to uncertainty about whether risks are being mitigated. Also, FPS continues to struggle with funding challenges that impede its ability to allocate resources effectively. 
(2) FPS does not have a strategic human capital management plan to guide its current and future workforce planning efforts, making it difficult to discern how effective its transition to an inspector-based workforce will be. Furthermore, because contract guards were not properly trained and did not comply with post orders, GAO investigators concealing components for an improvised explosive device passed undetected by FPS guards at 10 of 10 high-security facilities in four major cities. (3) FPS lacks a systematic approach for leveraging technology, and inspectors do not provide tenant agencies with an analysis of alternative technologies, their cost, and the associated reduction in risk. As a result, there is limited assurance that the recommendations inspectors make are the best available alternatives, and tenant agencies must make resource allocation decisions without key information. (4) FPS has developed information sharing and coordination mechanisms with GSA and tenant agencies, but there is inconsistency in the type of information shared and the frequency of coordination. (5) FPS lacks a reliable data management system for accurately tracking performance measurement and testing. Without such a system, it is difficult for FPS to evaluate and improve the effectiveness of its efforts, allocate resources, or make informed risk management decisions. FPS is taking actions to better protect GSA buildings, in part as a result of GAO's recommendations. For example, FPS is developing a new risk assessment program and has recently focused on improving oversight of its contract guard program. Additionally, GAO has recommended that FPS implement specific actions to make greater use of key practices and otherwise improve security. However, FPS has not completed many related corrective actions and FPS faces implementation challenges as well. 
Nonetheless, adhering to key practices and implementing GAO's recommendations in specific areas would enhance FPS's chances for future success, and could position FPS to become a leader and benchmark agency for facility protection in the federal government.
In fiscal year 2005, almost 10 million beneficiaries were eligible to receive health care under TRICARE, DOD’s regionally structured health care program. Under TRICARE, beneficiaries have choices among three different benefit options and may obtain care from either MTFs or civilian providers. The NDAA for fiscal year 2004 directed DOD to conduct a survey to monitor access to care for beneficiaries who chose not to use TRICARE’s managed care option and to appoint a senior official to take actions to ensure that these beneficiaries have adequate access to care. TRICARE beneficiaries fall into various categories, including active duty personnel and their dependents and retirees and their dependents. Retirees and certain dependents and survivors who are entitled to Medicare Part A and enrolled in Part B, and who are generally age 65 and older, are eligible to obtain care under a separate program called TRICARE for Life (TFL). As shown in figure 1, active duty personnel and their dependents represented 42 percent of the beneficiary population. Retirees and their dependents who are not entitled to Medicare (generally under age 65) comprised 44 percent of the TRICARE beneficiary population, while retirees and dependents over 65 represented 14 percent of the beneficiary population. TRICARE beneficiaries can choose to obtain health care through MTFs or through civilian providers, which include providers who belong to the TRICARE provider network as well as nonnetwork providers who agree to accept TRICARE beneficiaries as patients. Individual civilian providers must be licensed by their state, accredited by a national organization, if one exists, and meet other standards of the medical community to be authorized to provide care under TRICARE. 
Individual TRICARE-authorized civilian providers can include attending physicians, certified nurse-practitioners, clinical nurse specialists, dentists, clinical psychologists, physician assistants, podiatrists, and optometrists, among others. There are two types of authorized civilian providers—network and nonnetwork providers. Network civilian providers are TRICARE-authorized providers who enter a contractual agreement with the regional MCSC to provide health care to TRICARE beneficiaries. By law, TRICARE maximum allowable reimbursement rates must generally mirror Medicare rates, but network providers may agree to accept lower reimbursements as a condition of network membership. In some cases, they agree to accept negotiated reimbursement rates, which are usually discounts off of the TRICARE reimbursement rates, as payment in full for medical care or services. Network civilian providers are reimbursed at their negotiated rate regardless of whether they are providing care to enrolled TRICARE beneficiaries under the Prime option or nonenrolled TRICARE beneficiaries under the Extra option. Network civilian providers file claim forms for TRICARE beneficiaries and follow other contractually required processes, such as those for obtaining referrals. However, network civilian providers are not obligated to accept all TRICARE beneficiaries seeking care. For example, a network civilian provider may decline to accept TRICARE beneficiaries as patients because the provider’s practice does not have sufficient capacity or for other reasons. Nonnetwork civilian providers are TRICARE-authorized providers who do not have a contractual agreement with an MCSC to provide care to TRICARE beneficiaries. Nonnetwork civilian providers may accept TRICARE beneficiaries as patients on a case-by-case basis. These providers may choose to accept the TRICARE reimbursement rate as payment in full for their services on a case-by-case basis. 
This practice is referred to as “participating” or accepting assignment on a claim. Nonnetwork civilian providers also have the option of charging up to 15 percent more than the TRICARE reimbursement rate for their services on a case-by-case basis—a practice referred to as “non-participating.” However, when a nonnetwork civilian provider bills more than the TRICARE reimbursement rate, TRICARE beneficiaries are responsible for paying the extra amount billed in addition to their required copayments. TROs and MCSCs told us that this authority is infrequently used, in part, because when providers bill the additional 15 percent, they usually collect their total reimbursement from the TRICARE beneficiaries, who may not always pay promptly. When nonnetwork civilian providers “participate” on a claim and agree to accept the TRICARE reimbursement amount as payment in full, the MCSCs usually pay them directly, ensuring timely payment of the claim. TRICARE provides its benefits through three main options for its non-Medicare-eligible beneficiary population that vary according to TRICARE beneficiary enrollment requirements, the choices TRICARE beneficiaries have in selecting civilian and MTF providers, and the amount TRICARE beneficiaries must contribute towards the cost of their care. However, while there are three main options, there are only two types of TRICARE beneficiaries—enrolled and nonenrolled—and two types of civilian providers—network and nonnetwork. (See table 1.) All beneficiaries may also obtain care at MTFs although priority is given to active duty beneficiaries and Prime enrollees. The three main options with their corresponding enrollment requirements and provider categories are as follows: TRICARE Prime: This managed care option is the only TRICARE option requiring enrollment. Active duty servicemembers are required to enroll in this option while other TRICARE beneficiaries may choose to enroll. 
Prime enrollees receive most of their care from providers at MTFs, augmented by network civilian providers who have agreed to meet specific access standards for appointment wait times, among other requirements. Prime enrollees have a primary care manager who either provides care or authorizes referrals to specialists. Beneficiaries can be assigned to a primary care manager at the MTF or, if the MTF is at capacity or no MTF is available, may select a civilian primary care manager. Prime offers lower out-of-pocket costs than the other TRICARE options. Active duty personnel and their dependents do not pay enrollment fees, annual deductibles, or copayments for care obtained from network civilian providers. Retirees and their dependents who are not entitled to Medicare pay an annual enrollment fee and small copayments for care obtained from network civilian providers. TRICARE Standard: TRICARE beneficiaries who choose not to enroll in Prime may obtain health care using this fee-for-service option, which is designed to provide maximum flexibility in selecting providers. Under Standard, nonenrolled TRICARE beneficiaries may obtain care from TRICARE-authorized nonnetwork civilian providers of their choice. TRICARE beneficiaries using this option do not need a referral for most specialty care. Under Standard, all TRICARE beneficiaries must pay an annual deductible and copayments, which vary among active duty dependents and retirees and their dependents, and there is no annual enrollment fee. In addition, nonnetwork providers are not required to meet access standards, such as those for appointment wait times. TRICARE Extra: Similar to a preferred-provider organization, nonenrolled TRICARE beneficiaries may also obtain health care from a TRICARE network civilian provider for lower copayments than they would have under the Standard option—about 5 percent less. 
TRICARE beneficiaries choosing to use Extra must pay towards the same annual deductible as Standard and are responsible for copayments. Similar to Standard, there is no annual enrollment fee. Additionally, network civilian providers caring for nonenrolled TRICARE beneficiaries must adhere to the same access standards for appointment wait times that they use for enrolled TRICARE beneficiaries under Prime. Among TRICARE beneficiaries who were not Medicare eligible in fiscal year 2005, about 5.5 million or 65 percent of TRICARE’s beneficiaries were enrolled in Prime and thereby declared their intent to use their TRICARE benefit. In contrast, TMA does not know whether nonenrolled beneficiaries intend to use their TRICARE benefit. In fiscal year 2005, claims data showed that about 1.2 million or 14 percent of nonenrolled TRICARE beneficiaries obtained care with 66 percent of this care being delivered through the Standard option and 34 percent delivered through the Extra option. The remaining 1.8 million or 21 percent of nonenrolled beneficiaries were eligible for TRICARE benefits but did not use them during this time period. At any time, this population of eligible nonusers could elect to use Standard or Extra, and DOD would reimburse claims submitted for their health care after annual deductibles are met. TMA uses three MCSCs to provide civilian health care under the TRICARE program. Each MCSC is responsible for the delivery of care to TRICARE beneficiaries in one of three geographic regions—North, South, and West. The MCSCs are contractually required to establish and maintain networks of civilian providers in designated locations within these regions that are referred to as Prime Service Areas. (See fig. 2 for the location of Prime Service Areas in each of the three TRICARE regions.) Prime Service Areas include all MTF enrollment areas, Base Realignment and Closure sites, and additional areas where either TMA or the MCSC deems networks to be cost effective. 
As a result, each region may contain multiple Prime Service Areas. In these areas, civilian provider networks are required to be large enough to provide access for all TRICARE beneficiaries regardless of enrollment status or Medicare eligibility. TMA contractually requires that MCSCs’ civilian provider networks meet specific access standards, such as travel times or wait times, for both primary and specialty care. For example, TRICARE beneficiaries seeking primary care should not have to drive more than 30 minutes to get to their appointment locations. In addition to contractual requirements, the MCSCs can adopt additional access standards that they strive to meet. MCSCs are also responsible for performing other customer service functions, such as processing claims and helping TRICARE beneficiaries locate providers. They also are required to operate TRICARE Service Centers, which are frequently located within MTFs, to provide TRICARE beneficiaries with information on the different TRICARE options, information on benefit coverage, assistance with finding network and nonnetwork civilian providers, assistance with determining eligibility status, and other services. MCSCs provide customer service to any TRICARE beneficiary who requests assistance, regardless of their enrollment status. In each of the three regions, TMA uses a TRO to manage health care delivery. TRO directors are considered the health plan managers for the regions and are responsible for overseeing the MCSCs, including monitoring network quality and adequacy, monitoring customer satisfaction outcomes, and coordinating appointment and referral management policies. TRO directors and staff also provide customer service to all TRICARE beneficiaries who request assistance regardless of their enrollment status. Although they vary in the size of the geographic area covered, each TRICARE region has approximately the same number of TRICARE beneficiaries. 
However, the number of nonenrolled TRICARE beneficiaries varies by region, as does their access to network providers under the Extra option, depending on their proximity to a Prime Service Area. (See fig. 3 for the number and distribution of nonenrolled beneficiaries by region.) Throughout the three regions, about 16 percent of nonenrolled TRICARE beneficiaries reside outside of Prime Service Areas. In the North Region, 23 percent of nonenrolled TRICARE beneficiaries live outside of Prime Service Areas, and in the West Region, 21 percent of nonenrolled TRICARE beneficiaries live outside of Prime Service Areas. Because the South Region has extensive Prime Service Areas, no TRICARE beneficiaries in that region live in locations without a civilian provider network. Although most nonenrolled TRICARE beneficiaries nationwide live in a Prime Service Area, making Extra a readily available option, nonenrolled TRICARE beneficiaries have used Standard more frequently than Extra for each fiscal year from 2001 through 2005. (See fig. 4.) The NDAA for fiscal year 2004 directed DOD to monitor nonenrolled TRICARE beneficiaries’ access to care under the TRICARE Standard option and to designate a senior official to take the actions necessary to ensure access to care for nonenrolled TRICARE beneficiaries. Specifically, the NDAA required surveys to be done in 20 market areas each fiscal year until all markets were surveyed to determine how many civilian providers were accepting nonenrolled TRICARE beneficiaries as new patients. Although the law focused on Standard, TMA officials told us that since nonenrolled TRICARE beneficiaries can receive care through both the Standard and Extra options, they designed the survey to monitor access to care from both network and nonnetwork providers. When developing the survey’s methodology, TMA defined market areas as individual states and determined that all states could be surveyed within a 3-year period. 
TMA implemented its survey in fiscal year 2005 for the first 20 states. The survey collected data from the billing and insurance specialists of selected civilian providers, both network and nonnetwork, to determine how many were accepting nonenrolled TRICARE beneficiaries as new patients and to identify the reasons providers cite for not accepting these TRICARE beneficiaries. About 17 percent of the providers in the sample belonged to a TRICARE network while the remaining 83 percent of providers in the sample were nonnetwork providers. Because about 14 percent of all civilian providers belong to the TRICARE network, TMA’s sample of civilian providers is fairly representative of the network and nonnetwork civilian provider population serving all TRICARE beneficiaries, including nonenrolled beneficiaries who can use the Standard and Extra options. TMA’s four-question survey focused on a given provider’s awareness of TRICARE, whether the provider was accepting nonenrolled beneficiaries as new patients, and if not, the reasons why they were not. (See app. II for a detailed discussion of the methodology used for this survey and app. III for the complete survey instrument.) The NDAA for fiscal year 2004 also required DOD to designate a senior official to take actions necessary for achieving and maintaining the participation of nonnetwork civilian providers in a number adequate to ensure care for nonenrolled TRICARE beneficiaries in each market area. 
According to this legislation, the senior official would have the following responsibilities: educating nonnetwork civilian providers about TRICARE, encouraging nonnetwork civilian providers to accept nonenrolled TRICARE beneficiaries as patients, ensuring that nonenrolled TRICARE beneficiaries have the information necessary to locate nonnetwork civilian providers readily, and recommending adjustments in reimbursement rates that the official considers necessary to ensure adequate availability of nonnetwork civilian providers for nonenrolled TRICARE beneficiaries. TMA and its MCSCs use various methods for evaluating access to care, and according to TMA and MCSC officials, the resulting measures indicate that access to care is generally sufficient for nonenrolled TRICARE beneficiaries. TMA is administering the civilian provider survey required by the NDAA for fiscal year 2004, which is designed to obtain information on network and nonnetwork civilian providers’ willingness to accept nonenrolled TRICARE beneficiaries as new patients. TMA also obtains information about access to care through its annual health care survey of all TRICARE beneficiaries and through the anecdotal beneficiary feedback they receive from the TROs, which monitor access in their respective regions. MCSCs also use a variety of approaches to evaluate access to care, including inquiries from beneficiaries, analyses of claims data, and monitoring of the capacity of civilian provider networks. TMA uses multiple methods of evaluating access to care for its nonenrolled TRICARE beneficiaries, including the recently implemented survey of civilian providers and its annual health care survey of TRICARE beneficiaries. In addition, TMA monitors centrally received beneficiary complaints and inquiries, and each TRO monitors access to care in its respective region. In fiscal year 2005, TMA completed the first phase of its mandated survey of civilian health care providers. (See app. 
II for discussion of technical aspects of this survey’s methodology.) Although the survey was designed to determine the extent to which providers were willing to accept nonenrolled TRICARE beneficiaries as new patients, it is premature to interpret the results because this is the first of three rounds of the survey, and TMA does not have an established benchmark for determining the number of civilian providers that are needed for nonenrolled beneficiaries. During this initial round, TMA randomly selected a representative sample of over 40,000 providers in 20 states. TMA found that the majority of the providers surveyed were accepting new patients, including nonenrolled TRICARE beneficiaries. Specifically, only 14 percent of providers reported that they were not accepting any new patients, whether TRICARE patients, privately insured patients, or patients paying for their own care. Of the remaining 86 percent accepting new patients, the percentage that would accept nonenrolled TRICARE beneficiaries as new patients averaged 80 percent for all 20 states. (See table 2 for overall results by state.) An additional comparison of the acceptance rate for two categories of providers—primary care providers and specialists—in each of these 20 states revealed very little difference between the two categories. Of those accepting new patients, 78 percent of primary care providers and 81 percent of specialists would accept nonenrolled TRICARE beneficiaries as new patients. In addition to the statewide sample, TMA also sampled civilian providers in several smaller geographic locations, defined as hospital service areas (HSAs), in order to respond to concerns about access to care that were specific to certain locations. TMA selected 29 HSAs—12 that were randomly selected from within the 20 states evaluated for fiscal year 2005 and 17 based on beneficiary concerns about specific locations. 
As in the 20-state survey, TMA found that most providers in the selected HSAs were accepting new patients, including nonenrolled TRICARE beneficiaries. Specifically, only 13 percent of surveyed providers reported that they were not accepting new patients. Of the remaining 87 percent accepting new patients, 81 percent were accepting nonenrolled TRICARE beneficiaries as new patients. (See table 3.) An additional comparison of the acceptance rates for primary care providers and specialists who were accepting new patients revealed that 75 percent of the surveyed primary care providers and 85 percent of the surveyed specialists would accept nonenrolled TRICARE beneficiaries as new patients. A further comparison of providers accepting nonenrolled TRICARE beneficiaries as new patients between the HSAs selected based on TRICARE beneficiaries’ concerns and the HSAs randomly selected from the 20 surveyed states showed minimal difference in acceptance rates—80 percent and 83 percent, respectively. In both the states and HSAs, civilian providers who indicated that they were not accepting nonenrolled TRICARE beneficiaries as new patients were asked to identify why they made this decision in their own words, and were permitted to provide as many reasons as they wanted. More than half of both network and nonnetwork respondents cited not having a provider available or reimbursement issues as reasons. For providers citing nonavailability as a reason, many explained that they were either in the process of retiring or were too busy to accept any new patients at this time. Providers citing reimbursement issues most often stated an opinion that TRICARE’s reimbursement rates were low and that claims payment was slow. (See app. IV for TMA’s summary of the aggregate results by category.) 
Although there is no benchmark with which to compare the results of the initial civilian provider survey effort, TMA officials stated that their analysis of the 2005 survey results did not indicate widespread problems with nonenrolled TRICARE beneficiaries’ access to care. Nonetheless, TRO officials used the survey results to identify specific cities in their regions where civilian providers’ acceptance of nonenrolled TRICARE beneficiaries and knowledge about TRICARE were low in comparison to the other locations surveyed. To assist in this effort, the Assistant Secretary of Defense (ASD) for Health Affairs directed TMA’s Communications and Customer Service Directorate to work with the TROs and other TMA officials to develop a strategic marketing plan for these locations. The cities selected by the TROs are as follows: West Region: Olympia, Washington (2,732 nonenrolled beneficiaries), Monterey, California (1,180 nonenrolled beneficiaries), Seattle, Washington (2,358 nonenrolled beneficiaries), and Anchorage, Alaska (3,381 nonenrolled beneficiaries); North Region: Brooklyn, New York (4,276 nonenrolled beneficiaries) and Eau Claire, Wisconsin (902 nonenrolled beneficiaries); and South Region: Arlington, Texas (3,025 nonenrolled beneficiaries), Houston, Texas (6,415 nonenrolled beneficiaries), and Boca Raton, Florida (447 nonenrolled beneficiaries). TMA officials and TRICARE beneficiaries have stated that additional survey questions could have yielded useful information. For example, the survey did not ask providers whether they are accepting new Medicare patients—an important proxy because TRICARE reimbursement rates are established using Medicare reimbursement rates, and a comparison of the two programs could provide information on whether providers are more concerned with the amount of reimbursement or other issues. 
Furthermore, the survey did not ask providers how much of their current practice consists of TRICARE beneficiaries, to capture whether or not providers may already have TRICARE beneficiaries in their practices. However, a provision in the NDAA for fiscal year 2006 instructs TMA to add the following questions to its civilian provider survey: 1. What percentage of Dr. X’s current patient population uses any form of TRICARE? 2. Does Dr. X accept patients under the Medicare program? 3. Would Dr. X accept additional Medicare patients? In addition to its civilian provider survey that covered 20 states, TMA gathers worldwide information on nonenrolled TRICARE beneficiaries’ access to care through its annual Health Care Survey of DOD Beneficiaries, which covers all TRICARE beneficiaries and all TRICARE options. According to survey results from 2003 through 2005, about 77 percent of nonenrolled TRICARE beneficiaries who obtained care reported that “getting needed care” was not a problem for them. Similarly, over 80 percent of these TRICARE beneficiaries reported that they could “get care quickly.” For the same time period, TMA compared its survey results with the results of a civilian health plan survey, the Consumer Assessment of Healthcare Providers and Systems (CAHPS®), which asked participants the same questions on access to care under their plans. From this comparative analysis, TMA found that a similar percentage of civilian health plan participants—about 80 percent—responded that “getting needed care” was not a problem and that they could “get care quickly.” TMA uses this survey as a benchmark to compare TRICARE against civilian plans. Anecdotal information about access to care is available through TMA’s centralized Beneficiary and Provider Services office, which collects and monitors information on TRICARE beneficiaries’ complaints and general inquiries, including issues about access to care. 
TRICARE beneficiaries may contact this office by telephone, e-mail, written correspondence, or through their congressional representatives. TMA officials broadly categorize each contact by issue and use this information to monitor trends in the feedback they receive through these contacts. A TMA official stated that if the number of contacts they receive related to an issue rises, the appropriate program officials—such as the TROs—are notified and encouraged to investigate the issue. Furthermore, TMA maintains a record of TRICARE beneficiary and provider contacts that have been addressed and those that remain open and continue to require attention. Although the Beneficiary and Provider Services office does not specifically track access-to-care issues as a separate issue, one of the TMA officials responsible for tracking the contacts told us that TRICARE beneficiary complaints and inquiries relating to access issues have been minimal. Overall, concerns and inquiries for the “contractor service complaint” category, which could include access-to-care issues for both enrolled and nonenrolled TRICARE beneficiaries, represented about 1 percent of about 6,900 total contacts about the MCSCs for 2005. In addition, on a regional level, the TROs collect and monitor TRICARE beneficiary feedback gathered from e-mails and phone calls, as well as correspondence they receive from TRICARE beneficiary groups. However, the TROs told us that detailed information on each of these contacts is not routinely maintained. For example, one TRO told us that when a TRICARE beneficiary contacts them for assistance in locating a provider, they track the general reason for the call, but do not document the specific concerns. TRO officials told us that they receive only a small number of contacts from nonenrolled TRICARE beneficiaries who are unable to obtain care from nonnetwork civilian providers. 
For example, one TRO told us that they received approximately 34 requests for assistance locating a provider in calendar year 2005 from the over 600,000 nonenrolled TRICARE beneficiaries in this region. TRO officials indicated that sometimes these requests are due to TRICARE beneficiaries’ inability to obtain care from a specific provider at a specific time and are not necessarily indicative of access problems because that provider may be available at another time or other providers may be available. The TROs told us that they also monitor nonenrolled TRICARE beneficiaries’ access to care retrospectively by evaluating claims data as a record of health care usage. For example, the TROs use these data to identify how many network and nonnetwork providers have accepted nonenrolled TRICARE beneficiaries as patients and to evaluate the use of the different TRICARE options. Finally, the TROs and military services are in the process of implementing a new method of monitoring TRICARE beneficiary feedback. The Assistance Reporting Tool (ART) is a computer database that when fully operational will be used to archive and manage TRICARE beneficiary feedback on all aspects of health care. Currently each of the three TROs, all Army MTFs, and a portion of Navy and Air Force MTFs use this system as either their primary or one of several tools for managing and archiving TRICARE beneficiary feedback. Because ART is not mandatory for all MTFs, the TROs also rely on other feedback mechanisms to capture the most complete record of TRICARE beneficiary concerns and questions. These other mechanisms include e-mails from TRICARE beneficiaries to MTFs and data requests that the TROs periodically make to MTFs. In addition, while the MCSCs are not required to use ART because it was introduced after TRICARE’s current health care delivery contracts were awarded, one of the MCSCs is currently using it. 
TMA officials told us that, in the next cycle of TRICARE contracts, they plan to require all MCSCs to use this system. TMA officials who have reviewed the preliminary information captured by ART told us that the tool has obtained very little feedback that would indicate nonenrolled TRICARE beneficiaries are having problems with access to care. Each of the three MCSCs has developed its own methods for monitoring whether TRICARE beneficiaries in its region have access to care both in Prime Service Areas and in areas where provider networks do not exist. According to the MCSCs, while their methods for evaluating access to care were not designed to evaluate access specifically for nonenrolled TRICARE beneficiaries, they do provide some information that they use to monitor the availability of both network and nonnetwork civilian providers for this population, which is one component of access to care. The MCSCs also monitor access to care through beneficiary inquiries. Each maintains a data system to archive and tabulate anecdotal TRICARE beneficiary feedback received through some or all of the following methods: telephone, e-mail, congressional correspondence, or walk-in visits to a TRICARE Service Center. The MCSCs organize TRICARE beneficiary feedback into subject categories and then monitor changes in the frequency of contacts in these categories to identify trends and important issues. At our request, each of the MCSCs reviewed their most recent TRICARE beneficiary complaint data and found very small numbers of comments pertaining to health care access. The MCSCs told us this was an indication that TRICARE beneficiaries—both enrolled and nonenrolled—were not experiencing any widespread problems with access to care. For example, one MCSC identified fewer than 40 complaints related to access out of one million contacts with TRICARE beneficiaries in a 1-month period. 
The second MCSC reported that for the last two quarters of 2005 they received an average of 355 inquiries and complaints each month about access to care. Officials from this MCSC told us that while their TRICARE beneficiary feedback system could not quantify the total number of inquiries received, these 355 inquiries represented a small percentage of all contacts. The third MCSC reported that out of more than 250,000 phone calls and walk-in visits to TRICARE Service Centers during the month of December 2005, 71 contacts, or less than 1 percent of the total contacts, were related to access. The MCSCs also determine how many civilian providers have accepted at least one TRICARE beneficiary by analyzing claims data to examine the extent to which both network and nonnetwork civilian providers are accepting TRICARE beneficiaries as patients. Each MCSC has concluded that more than half of all licensed civilian providers—both network and nonnetwork—in their respective regions have accepted at least one TRICARE beneficiary, regardless of enrollment status, as a patient in the last year. According to the MCSCs, access to care appears to be generally sufficient because the percentages of all licensed civilian providers in each region who have submitted at least one TRICARE claim during the past year are as follows: 90 percent in the South region, where TRICARE beneficiaries represent 3.7 percent of the entire region’s population; 56 percent in the West region, where TRICARE beneficiaries represent 3.1 percent of the region’s population; and 52 percent in the North region, where TRICARE beneficiaries represent an estimated 2.1 percent of the region’s population. Each MCSC told us that one of the primary ways they ensure sufficient access to care for both enrolled and nonenrolled TRICARE beneficiaries is by monitoring whether their civilian provider networks have the capacity to provide care to all beneficiaries in their Prime Service Areas. 
Throughout the three regions, the majority of nonenrolled TRICARE beneficiaries—84 percent—live within Prime Service Areas, making the choice of using a civilian network provider through Extra a readily available option for them. In the South region, all TRICARE beneficiaries reside in Prime Service Areas. In this region, the MCSC monitors access to care through geographic analyses of provider and TRICARE beneficiary locations to determine whether its networks meet the needs of both enrolled and nonenrolled TRICARE beneficiaries using TRICARE’s access standards. In another region, where not all TRICARE beneficiaries live in Prime Service Areas, the MCSC will assist nonenrolled TRICARE beneficiaries in finding nonnetwork civilian providers on an as-needed basis. In the third region, where the Prime Service Areas also do not encompass all TRICARE beneficiaries, the MCSC recruits and contracts with providers outside of Prime Service Areas who are available and willing to deliver care to nonenrolled TRICARE beneficiaries living there. Network providers who deliver care in locations outside of Prime Service Areas currently account for 25 percent of this MCSC’s network providers. TMA, MCSCs, and provider representatives have cited various factors as impediments to civilian providers’ willingness to accept nonenrolled TRICARE beneficiaries as patients, and TMA and its MCSCs have different ways to address them. Some impediments are specific to TRICARE, including concerns about reimbursement rates and administrative issues, and TMA and its MCSCs have specific ways to address these issues. For example, TMA has the authority to increase reimbursement rates in certain circumstances, and both TMA and MCSCs conduct outreach efforts targeted to assist civilian providers with administrative issues. 
Other impediments—such as providers’ practices being at maximum patient capacity and provider shortages in certain locations—are not specific to TRICARE and are therefore inherently more difficult for TMA and the MCSCs to address. Since TRICARE was implemented in 1995, some civilian providers—both network and nonnetwork—have complained that TRICARE’s reimbursement rates tend to be lower than those of other health plans, and as a result, some of these providers have been unwilling to accept nonenrolled TRICARE beneficiaries as patients. According to the results of the initial round of TMA’s civilian provider survey, concern about reimbursement amounts was one of the primary reasons that both network and nonnetwork civilian providers cited for not accepting nonenrolled TRICARE beneficiaries as new patients. In the 2005 civilian provider survey, of those who gave reasons for not accepting nonenrolled TRICARE beneficiaries as new patients, 20 percent of network providers and 25 percent of nonnetwork providers cited concerns about reimbursement amounts. However, TMA has the authority to adjust reimbursement rates in areas where it determines that reimbursement rates have been negatively affecting TRICARE beneficiaries’ ability to obtain care. One of providers’ main reasons for not accepting nonenrolled TRICARE beneficiaries as patients is concern about low reimbursement amounts. TRICARE’s reimbursement rates generally mirror reimbursement rates paid by the Medicare program. Beginning in fiscal year 1991, in an effort to control escalating health care costs, Congress instructed DOD to gradually lower its reimbursement rates for individual civilian providers to mirror those paid by Medicare—an adjustment that has saved hundreds of millions of dollars since the conversion. As of January 2006, the transition to Medicare rates was nearly complete, and reimbursement rates for only 48 services remain higher than Medicare reimbursement rates. (See app. 
V for a list of these services.) According to TMA and MCSC officials, civilian providers, including both network and nonnetwork, generally seek to develop a practice that includes patients with higher-paying private insurers to compensate for the acceptance of patients with lower-paying health plans, including Medicare, Medicaid, and TRICARE. However, according to TMA and MCSC officials, TRICARE generally has little leverage to encourage network and nonnetwork civilian provider acceptance of its patients because the TRICARE population is small and transient. Further, in locations where the demand for providers’ services exceeds the supply—such as in Alaska—providers can be selective about whom they accept as patients. TMA and MCSC officials have also cited providers’ concerns that TRICARE’s pediatric and obstetric rates are lower than Medicaid rates for these services. To investigate these concerns, TMA conducted a comparative analysis that found TRICARE’s reimbursement rates for selected pediatric and obstetric procedures were generally higher than Medicaid’s rates in many states as of March 2006. TMA compared the TRICARE reimbursement rate for the service most commonly billed by pediatricians—an office visit for an established patient—with Medicaid rates for this service and found that in 41 of the 45 states for which Medicaid data were available, the TRICARE reimbursement rate exceeded Medicaid’s rate for this service. In addition, TMA compared its reimbursement rates for 14 commonly used maternity and delivery services with Medicaid rates and found that in 35 of the 45 states for which Medicaid data were available, TRICARE reimbursement rates for these services exceeded the Medicaid payment rates. TMA also analyzed reimbursement rates for pediatric immunizations based on MCSCs’ concerns that providers viewed these rates as too low. 
However, when TMA compared TRICARE’s reimbursement rates with the cost of the vaccine for the 10 most frequently used pediatric vaccines and for the hepatitis A vaccine, TMA’s analysts concluded that the TRICARE reimbursement rates were generally reasonable and not undervalued in relation to what a provider might actually pay to obtain them. Only one vaccine—the pediatric hepatitis A vaccine—appeared to be priced lower than the reasonable cost of obtaining the vaccine. In this instance, the TRICARE reimbursement rate was $22.64, while pediatricians were paying between $27.41 and $30.37 for the vaccine. As a result of this discrepancy, TMA used its general authority to deviate from Medicare rates, and starting May 1, 2006, TMA instructed the MCSCs to reimburse pediatric hepatitis A vaccines nationally at a new reimbursement rate of $30.40. TMA has the authority to increase TRICARE reimbursement rates for network and nonnetwork civilian providers to ensure that all beneficiaries, including nonenrolled beneficiaries, have adequate access to care. TMA’s authorities include (1) waiving reimbursement rate reductions for both network and nonnetwork providers that resulted when TRICARE reimbursement rates were lowered to Medicare levels, (2) issuing locality waivers that increase rates for specific procedures in specific localities, and (3) issuing network-based waivers that increase some network civilian providers’ reimbursements. Once implemented, waivers remain in effect indefinitely until TMA officials determine they are no longer needed. As of August 2006, TMA had approved 15 waivers in total—2 waiving reimbursement rate reductions that resulted when TRICARE reimbursement rates were lowered to Medicare levels, 7 locality waivers, and 6 network waivers. 
TMA can use its authority to waive reimbursement rate reductions to restore TRICARE reimbursement rates in specific localities to the levels that existed before a reduction was made to align TRICARE rates with Medicare rates. On two occasions, TMA has used this authority in Alaska to encourage both network and nonnetwork civilian providers to accept TRICARE beneficiaries as patients in an effort to ensure adequate access to care. In 2000, TMA used this waiver authority to uniformly increase reimbursement rates for network and nonnetwork civilian providers in rural Alaska, and in 2002 TMA implemented this same waiver for network and nonnetwork civilian providers in Anchorage. The use of these waivers resulted in an average reimbursement rate increase of 28 percent for all of Alaska. However, in 2001, we studied the effect of the 2000 waiver on access to care in rural Alaska and found that it did not increase TRICARE beneficiaries’ access to care. Locality waivers may be used to increase rates for specific medical services in specific areas where access to care has been severely impaired. Reimbursement rate increases for this type of waiver can be established in one of three ways: by adding a percentage factor to the existing TRICARE reimbursement rate, by calculating a prevailing charge, or by using another government reimbursement rate, such as rates used by the Department of Veterans Affairs to purchase health care from civilian providers. The resulting rate increase would be applied to both network and nonnetwork civilian providers for the medical services identified in the areas where access is severely impaired. A total of nine applications for locality-based waivers were submitted to TMA between January 2003 and August 2006. (See table 4.) Of these, seven locality waivers have been approved by TMA and two are still pending. Six of the approved locality waivers as well as one pending application are for locations in Alaska. 
This includes one approved waiver to adjust the reimbursement rates for obstetric services to match Medicaid rates in Alaska and nine additional states based on TMA’s comparative analysis of reimbursement rates for 14 obstetrical procedures. Network waivers are used to increase reimbursement rates for network providers up to 15 percent above the TRICARE reimbursement rate in an effort to ensure an adequate number and mix of primary and specialty care network civilian providers for a specific location. Between January 2002 and August 2006, 10 applications for network waivers were submitted to TMA. Of these, 6 network waivers have been approved by TMA and 4 have been denied. (See table 5.) Providers, TRICARE beneficiaries, MCSCs, and TRO directors may apply for a reimbursement rate waiver by submitting written requests supporting the need for reimbursement rate increases on the grounds that access to health care services is impaired due to low reimbursement rates. These requests must contain specific justifications to support the claim that access problems are related to reimbursement rates and must include information such as the number of providers and TRICARE beneficiaries in a location, the availability of MTF providers, geographic characteristics, and the cost-effectiveness of granting the waiver. All waiver requests are submitted to the TRO directors, who review the application and decide whether to forward the request to the Director of TMA through TMA’s contracting officers, who are responsible for administering the MCSCs’ contracts. According to a TMA official, the contracting officers work with TMA analysts to review the submitted requests, verify whether there is an insufficient number of providers in the area, and conduct a cost-benefit analysis before recommending to the Director of TMA that the waiver be approved or denied. Each analysis is tailored to the specific concerns outlined in the waiver requests. 
According to this official, TMA conducts these additional analyses to ensure that an increase in reimbursement rates would actually alleviate access problems and that access was not impaired due to such things as administrative problems or providers’ unhappiness with claims payment timeliness or accuracy. Once a waiver is granted, there is no mechanism that automatically terminates it. According to a TMA official, there was an expectation within TMA that the continued need for existing waivers would be evaluated on an annual basis. However, waivers have been reviewed on a periodic, ad hoc basis rather than annually as expected. When TMA implemented new MCSC contracts in fiscal years 2004 and 2005, TMA and the MCSCs discussed existing waivers and mutually agreed to extend all of them because they continued to believe that these waivers were necessary to ensure access to care. However, without a formal analysis of how these waivers have affected access in the areas in which they were implemented, the actual extent of their effect is unclear. Since the inception of TRICARE, both network and nonnetwork civilian providers have expressed concerns about administrative issues or “hassles” associated with the program, which, when combined with low reimbursement rates, make them less likely to accept nonenrolled TRICARE beneficiaries as patients. TMA and MCSC officials stated that because TRICARE beneficiaries usually represent only a small percentage of a provider’s practice, both network and nonnetwork civilian providers may be less knowledgeable about the program and its unique administrative requirements. Adding to the potential for confusion, while some administrative requirements apply to all TRICARE beneficiaries, the TRICARE program also has separate and distinct administrative requirements for enrolled and nonenrolled TRICARE beneficiaries. 
For example, network providers must meet specific time frame and documentation requirements when referring enrolled TRICARE beneficiaries for specialty care or when delivering specialty care to enrolled TRICARE beneficiaries. However, referral standards usually do not apply to nonenrolled TRICARE beneficiaries. Additionally, according to the initial round of TMA’s civilian provider survey, 15 percent of network respondents and 7 percent of nonnetwork respondents who gave explanations for why they were not accepting nonenrolled TRICARE beneficiaries as new patients cited administrative inconveniences as a reason. These administrative inconveniences included too much paperwork, problems understanding the benefits and policies, and a lengthy referral process. MCSC and TMA officials also told us that providers’ past experiences with TRICARE administrative issues may have biased their opinion of the program, while, in some cases, there have been improvements. For example, according to MCSC and TMA officials, some providers perceive that previously identified claims processing problems persist and cite problems with timeliness and claims payment decisions as reasons for not accepting TRICARE patients. While claims processing problems plagued the TRICARE program in its early years, we reported in 2003 that efforts had been made to improve claims processing efficiency, and as a result, claims were being processed in a more timely manner, though some inefficiencies remained. In addition, some TRO officials and providers said that TRICARE claims payment decisions are not always clear to providers and, as a result, providers may believe problems with claims processing exist. This is partly because TRICARE’s claims processing outcomes may differ from Medicare’s—despite the programs’ similarities in reimbursement rates—due to different benefit structures and different claims processing tools that are used to prevent overpayment. 
Furthermore, because they do not always understand the program, providers and TRICARE beneficiaries may complain about adjudication decisions on claims that have been processed correctly. Problems may also occur because providers and TRICARE beneficiaries may make mistakes when filing their claims. In efforts to address problems related to administrative issues, MCSCs conduct a variety of outreach efforts to educate nonnetwork civilian providers on TRICARE requirements and assist with both actual and perceived administrative concerns. For example, MCSCs provide on-line tools and toll-free telephone support to mitigate administrative issues. Also, one MCSC works with state medical associations to address provider concerns and to ensure that information about TRICARE requirements is included in medical association newsletters. Each of the MCSCs has provider relations representatives located in areas throughout the region outside of their central office. These provider relations representatives meet with nonnetwork civilian providers through booths and speaking engagements at health fairs, conferences, and other provider events and, when necessary, work one-on-one with network and nonnetwork civilian providers to explain how to respond to TRICARE’s administrative requirements and to help eliminate the burden of unnecessary paperwork. According to the MCSCs, these efforts have been helpful, as evidenced by the absence of widespread problems with TRICARE beneficiaries’ access to care. However, similar to the use of waivers, the actual extent to which these efforts have improved access to care is unclear. TMA and MCSCs attempt to address impediments to network and nonnetwork provider acceptance of nonenrolled TRICARE beneficiaries that are not specific to the TRICARE program. However, TMA and MCSCs cannot always resolve access problems related to these impediments. 
Some network and nonnetwork civilian providers may be unwilling to accept TRICARE beneficiaries as patients because their practices are already at capacity. For example, the initial round of TMA’s civilian provider survey found that 14 percent of providers in the 20 states surveyed were not available to accept any new patients, including TRICARE patients, privately insured patients, or patients who were paying for their own care. According to the MCSCs, access problems related to practice capacity are more likely to occur in geographically remote areas that have few providers than in more densely populated areas with more providers. However, one MCSC stated that access problems related to practice capacity can also occur in urban areas where the medical needs of the population exceed the supply of specific specialties, such as dermatology. TRICARE beneficiaries’ access to care is also impeded in areas where there are insufficient numbers and types of civilian providers, both network and nonnetwork, to cover the local demand for health care. In these locations, the entire community is impacted by provider shortages. Consequently, TRICARE beneficiaries, as well as all other local residents, must sometimes travel long distances to obtain health care. MCSC officials stated that each TRICARE region includes areas with civilian provider shortages. For example, in TRICARE’s North Region, Watertown, New York, has an insufficient number of certain specialty providers for its population, which includes TRICARE beneficiaries stationed at a nearby military installation whose MTF is too small to handle all of their health care needs. TRICARE’s South Region contains many rural areas with few providers, including multiple locations in Oklahoma and Texas. Likewise, in TRICARE’s West Region, MCSC officials stated that there are provider shortages in various locations, including Cheyenne, Wyoming, and Mountain Home, Idaho. 
TMA and the MCSCs have limited means of responding to access-to-care impediments in areas with network and nonnetwork civilian provider shortages, although TMA has adopted two bonus payment systems that mirror those used by Medicare for these areas. In June 2003, TMA began paying providers a 10 percent bonus payment for services rendered in Health Professional Shortage Areas, which the Department of Health and Human Services has identified as having a shortage of primary care, dental, or mental health providers. Also, in January 2005, TMA followed Medicare in initiating payment of a 5 percent bonus for services rendered by primary care providers in geographic areas designated by the Department of Health and Human Services as Physician Scarcity Areas, a program that is only operational through 2007. Providers who are eligible for and wish to receive either of these bonus payments must include a specific code on every claim they submit to obtain these additional payments. According to a TMA official, TMA does not know the extent to which these payments have been used and has not evaluated the effect of these bonus payments on access to care. TMA and the MCSCs have attempted to overcome obstacles related to practice capacity and provider shortages by using high-ranking military personnel and field provider relations representatives to make personal appeals to network and nonnetwork civilian providers. In August 2004, the ASD for Health Affairs wrote a letter to providers appealing to their patriotism and asking them to accept TRICARE beneficiaries as patients. One MCSC official claimed that this letter has resulted in additional providers accepting both enrolled and nonenrolled TRICARE beneficiaries as patients. In addition, in certain areas where access is problematic, MCSC provider relations representatives or TRO officials personally call on providers to solicit their support of military personnel through TRICARE. 
Various TMA offices, including the TROs, and the MCSCs are carrying out the responsibilities that are outlined in the NDAA for fiscal year 2004 to take actions to ensure nonenrolled beneficiaries’ access to care, such as educating civilian providers and recommending reimbursement rate adjustments—though these responsibilities were not formally assigned to a single senior official. For example, TMA’s Communications and Customer Service Directorate has primary responsibility for education and marketing activities for all civilian providers—including nonnetwork providers—although the TROs and MCSCs also share this responsibility. (See table 6.) This office oversees a national contract for marketing and education materials with input from the TROs and the MCSCs. As part of this responsibility, this office designs and prepares marketing and education materials in conjunction with its contractor. On a regional level, the TROs and MCSCs also have responsibilities for educating both network and nonnetwork civilian providers. As part of these efforts, each TRO works with its region’s MCSC to host town-hall meetings and to provide briefings for network and nonnetwork civilian providers. In addition, the MCSCs contact, support, educate, and market to both network and nonnetwork civilian providers. For example, one MCSC distributes its monthly provider newsletter or bulletin to nonnetwork civilian providers who submit 25 or more TRICARE claims in 1 year. MCSCs also provide educational materials to civilian providers, including nonnetwork providers, and, in some instances, schedule provider seminars for nonnetwork providers. The MCSCs are currently taking actions to encourage both network and nonnetwork civilian providers to accept nonenrolled TRICARE beneficiaries as patients. 
First, in areas with network civilian providers, MCSCs are required by contract to ensure that the networks are robust enough to provide health care to both enrolled and nonenrolled TRICARE beneficiaries in that location. As a result, MCSCs strive to ensure adequate numbers of network civilian providers who could also provide care to nonenrolled TRICARE beneficiaries. In addition, when nonenrolled TRICARE beneficiaries request assistance with finding providers, MCSCs work to encourage civilian providers, who could be either network or nonnetwork, to accept these TRICARE beneficiaries as patients. In some instances when a provider cannot be easily identified for a TRICARE beneficiary, MCSCs told us their provider relations representatives, who are knowledgeable about providers in their regions, will call on individual providers to encourage them to accept these TRICARE beneficiaries as patients. Nonetheless, as contractually required, MCSCs are focused on recruiting civilian providers for their networks and do not proactively recruit nonnetwork civilian providers to accept TRICARE beneficiaries as patients. Efforts to obtain nonnetwork civilian providers for nonenrolled TRICARE beneficiaries using the Standard option are initiated on an as-needed basis. Additionally, TMA, its TROs, and the MCSCs all have procedures and tools in place aimed at ensuring that nonenrolled TRICARE beneficiaries can readily locate both network and nonnetwork civilian providers. A central TMA office maintains an online directory of both network and nonnetwork civilian providers who have accepted TRICARE beneficiaries as patients in the last 2 years. MCSCs’ Web sites provide a link to this TMA directory and also provide a directory of network civilian providers in their regions. Also, the TROs provide services, including assistance with locating civilian providers, to any TRICARE beneficiary who contacts them. 
Among other services they provide, Beneficiary Service Representatives at MCSC-operated TRICARE Service Centers assist “walk-in” TRICARE beneficiaries—regardless of their enrollment status—to locate providers. In addition, all MCSCs are contractually required to have representatives available by phone 24 hours a day, 7 days a week to assist with locating a network provider. One MCSC told us that if a network provider is not available, the phone representatives will help locate nonnetwork providers in the area. Finally, the TROs currently are responsible for recommending reimbursement rate adjustments—initiated by their offices, MCSCs, providers, and TRICARE beneficiaries—to increase provider reimbursement rates in areas where access to care is impaired for both enrolled and nonenrolled TRICARE beneficiaries. Since the TROs were established in 2004, two of the three TROs have recommended such increases to provider reimbursement rates in their regions. Nonetheless, TMA has not formally designated a senior official to take responsibility for nonenrolled TRICARE beneficiaries and nonnetwork civilian providers as outlined in the NDAA for fiscal year 2004. According to TMA officials, this role was assumed by the ASD for Health Affairs, who is responsible for overseeing DOD’s health programs and resources, because these responsibilities are included in the official directive for this position. According to senior TMA officials, the ASD for Health Affairs intended to delegate these responsibilities to the TRO directors. However, while this intent was communicated verbally, the delegation was never formalized in writing. TRO officials told us that while they were aware of the ASD for Health Affairs’ intent, they never received official notification or designation outlining these responsibilities and expectations. 
As a result, at the time of our site visits, the TROs had not undertaken any efforts beyond the level of assistance they were already providing to nonenrolled TRICARE beneficiaries and nonnetwork civilian providers. Nonetheless, during the time of our review, each TRO was in the process of assigning responsibilities for nonenrolled beneficiaries to a specific staff member in accordance with the staffing plan TMA established for the TROs. Additionally, officials at each of the TROs told us that they provide services and assistance to all TRICARE beneficiaries regardless of enrollment status. To more directly assign responsibilities for nonenrolled beneficiaries’ access to care to the TROs, the NDAA for fiscal year 2006 specifically instructs the TROs to (1) identify nonnetwork providers who will accept nonenrolled TRICARE beneficiaries as patients; (2) communicate with nonenrolled TRICARE beneficiaries; (3) conduct outreach to nonnetwork providers, encouraging their acceptance of TRICARE beneficiaries as patients; and (4) publicize which nonnetwork providers in each region accept nonenrolled TRICARE beneficiaries as patients. It also requires that DOD submit annual reports to Congress on efforts to implement these activities. We received comments on a draft of this report from DOD (see app. VI). In its comments DOD stated that it appreciated the collaborative, insightful, and thorough approach that was taken with this important issue. However, DOD disagreed with our finding that it had not formally designated a senior official to ensure nonenrolled beneficiaries’ access to care, including adequate participation by nonnetwork providers, as required by the NDAA for fiscal year 2004. DOD stated that DOD directive 5136.12 assigned these duties to the TMA director and the TROs by designating the TMA Director as the program manager for TRICARE health and medical resources and other responsibilities. 
DOD stated that this responsibility clearly encompasses provision of care to nonenrolled beneficiaries and therefore meets the NDAA requirement. We continue to believe that DOD has not adequately addressed the requirement in the mandate. First, in multiple interviews and e-mail exchanges during our audit work, senior DOD officials told us that no specific actions had been taken to designate a senior official and that, by default, the duties fell to the ASD for Health Affairs who is responsible for overseeing DOD’s health programs and resources. Further, during our site visits, TRO officials told us they had never been officially notified of their responsibilities and expectations for nonenrolled beneficiaries and nonnetwork providers. As a result, at the time of our site visits the TROs told us they had not undertaken any efforts beyond the level of assistance they had already been providing to nonenrolled beneficiaries and nonnetwork civilian providers. Second, we do not agree with DOD that the terms of the pre-existing directive satisfy the requirements of the mandate. Contrary to the requirement in the law that one official be designated, the directive generally assigns responsibilities to TMA, as well as to multiple TROs on a geographic basis. While part of the TROs’ responsibilities include developing a plan for the delivery of health care within the geographic region, the mandate contemplated a more global approach to addressing provider participation, specifically requiring one senior official to ensure provider participation in each market area. DOD also provided technical comments that we incorporated where appropriate. We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have questions about this report, please contact me at (202) 512-7119. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions are listed in appendix VII. The National Defense Authorization Act (NDAA) for fiscal year 2004 directed GAO to review the processes, procedures, and analysis used by the Department of Defense (DOD) to determine the adequacy of the number of network and nonnetwork civilian providers and the actions taken to ensure access to care for nonenrolled TRICARE beneficiaries. Specifically, this report describes (1) how TRICARE Management Activity (TMA) and its managed care support contractors (MCSC) evaluate nonenrolled TRICARE beneficiaries’ access to care and the results of these evaluations; (2) the impediments to civilian provider acceptance of nonenrolled TRICARE beneficiaries, and how they are being addressed; and (3) how DOD has implemented the fiscal year 2004 NDAA requirements to take actions to ensure nonenrolled TRICARE beneficiaries’ access to care. To describe how TMA evaluates nonenrolled TRICARE beneficiaries’ access to care, we interviewed and obtained documentation from officials in TMA’s Health Program Analysis and Evaluation Directorate about its civilian provider survey, called the Survey on Continued Viability of TRICARE Standard. Although DOD was required to conduct a survey to assess nonenrolled beneficiaries’ access to care under the Standard option, the survey was administered to both network and nonnetwork civilian providers since nonenrolled beneficiaries can receive care from these providers under both the Extra and Standard options. We reviewed the survey methodology, including the methods for selecting respondents, the survey’s response rate, the designation of TRICARE market areas, and the survey instrument itself.
We also reviewed TMA’s methods for randomly sampling market areas and providers and its administration of the survey instrument and found these decisions methodologically sound and statistically valid. In addition, we reviewed the survey results, including the published results and analysis. While we did not independently validate the survey data, we did assess the reliability of the data by reviewing survey documentation and internal controls and by interviewing knowledgeable agency officials and found that the data were sufficiently reliable for our purposes. To obtain information on how the civilian provider survey was developed, we interviewed officials at the Office of Management and Budget (OMB) because the Paperwork Reduction Act required OMB approval before the survey could be administered. We also interviewed TRICARE beneficiary group representatives who had recommended sites for inclusion in the survey where nonenrolled TRICARE beneficiaries’ access to health care may be impaired. To identify how the civilian provider survey results would be used to evaluate access to care, we met with officials of TMA’s Office of Health Plan Operations, the director of TMA’s Standard Programs Division, and officials from the three TRICARE Regional Offices (TROs). We also reviewed TMA’s annual Health Care Survey of Defense Beneficiaries and compared it with the Department of Health and Human Services’ Consumer Assessment of Health Care Providers and Systems survey of individuals who received health care through civilian health insurers. These surveys include identical questions on access-to-care issues that allowed for comparative analysis of the opinions expressed by TRICARE beneficiaries and civilian health plan users. Using data from the 2003-2005 surveys, we analyzed nonenrolled TRICARE beneficiaries’ responses to access-to-care questions and compared them with results from the Consumer Assessment of Health Care Providers and Systems.
We did not independently verify the data from each of these surveys; however, we did assess the reliability of these data by reviewing related documentation and interviewing knowledgeable agency officials and found that they were sufficiently reliable for our purposes. To further identify and describe other methods TMA and MCSCs used to evaluate care access for nonenrolled TRICARE beneficiaries, we met with officials of TMA, the TROs, MCSCs, and each of the services’ Office of the Surgeon General to obtain information on the systems they use for monitoring TRICARE beneficiary feedback and conducting other types of analyses, such as monitoring health care claims. The TROs and military services provided information on the Assistance Reporting Tool, a system that is being developed to monitor and archive TRICARE beneficiary feedback. The MCSCs also shared information about their independent systems for maintaining TRICARE beneficiary feedback. TMA, MCSC, and military service officials provided us with examples of TRICARE beneficiary feedback reports and health care claims data for nonenrolled TRICARE beneficiaries that TMA uses to evaluate access to care for this population. We did not independently verify data from the MCSCs’ TRICARE beneficiary feedback systems and TMA’s claims data files; however, we did assess the reliability of these data by interviewing knowledgeable officials and reviewing previous GAO work using these data and found that they were sufficiently reliable for our purposes. To identify how the MCSCs monitor access to care both in Prime Service Areas and in areas where networks have not been established, we obtained information about their techniques for network development and for civilian provider recruitment. 
To identify and describe the impediments to providers’ acceptance of nonenrolled TRICARE beneficiaries, we obtained information from TMA Health Plan Operations, TMA Health Program Analysis and Evaluation Directorate, TRO, and MCSC officials on the possible reasons that providers were unwilling to accept nonenrolled TRICARE beneficiaries as patients. We also met with representatives of TRICARE beneficiary groups and the American Medical Association to obtain anecdotal information about impediments to health care access and to supplement our data on possible access-to-care problems. To identify and describe how impediments, such as TRICARE reimbursement rates and administrative issues, are being addressed, we reviewed TRICARE’s reimbursement policies and authorities as well as provider outreach strategies and marketing and education efforts of TMA and its MCSCs. We also reviewed the procedures for issuing waivers used to increase reimbursement rates in areas where TMA determines that access to care is impaired, including the application, review, and decision process. We then obtained information from TMA’s Office of Medical Benefit and Reimbursement Systems on all of the completed and pending requests for reimbursement waivers. Finally, we interviewed MCSC and TRO officials to identify the administrative issues that impact provider acceptance of TRICARE beneficiaries and how they conduct outreach efforts to alleviate problems and/or educate providers about these issues. However, we did not assess the extent to which these efforts improved civilian providers’ acceptance of nonenrolled beneficiaries as patients. To examine how DOD has implemented the NDAA fiscal year 2004 requirements for oversight of nonenrolled TRICARE beneficiaries’ access to care, we reviewed pertinent sections of this legislation outlining the tasks that DOD must perform to comply with the law. 
We interviewed officials in TMA’s Office of Health Plan Operations, the director of the TRICARE Standard Programs Division, and officials in each of the TROs. To identify whether and how the oversight responsibilities outlined in the NDAA were being managed, we obtained information from TRO and MCSC officials for each of the three regions and TMA’s Communications and Customer Service Directorate to identify activities in place to educate network and nonnetwork providers about TRICARE Standard, to encourage network and nonnetwork providers to treat nonenrolled TRICARE beneficiaries, and to ensure that nonenrolled TRICARE beneficiaries have the information necessary to locate providers readily. We conducted our work from July 2005 through December 2006 in accordance with generally accepted government auditing standards. The National Defense Authorization Act (NDAA) for fiscal year 2004 required that the TRICARE Management Activity (TMA) conduct surveys in TRICARE market areas within the United States to determine how many health care providers are accepting new patients under TRICARE Standard in each market area. The NDAA did not stipulate how TMA should define a market area but specified that 20 market areas should be completed each fiscal year until all market areas in the United States have been surveyed. Although the mandate focused on Standard, TMA officials designed the survey to monitor access to care from both network and nonnetwork providers since nonenrolled TRICARE beneficiaries can receive care through both the Standard and Extra options. Before TMA could begin administering the civilian provider survey, it required review and clearance from the Office of Management and Budget (OMB) under the Paperwork Reduction Act. Subsequent to this review, OMB approved a four-item questionnaire for the study administered in fiscal year 2005. (See app. III for the approved questionnaire.)
In designing the Survey on Continued Viability of TRICARE Standard (the civilian provider survey), TMA defined the individual states and the District of Columbia as 51 market areas—a definition that will allow TMA to complete the survey of all markets within a 3-year period and to develop estimates of access to health care at both the state and national levels. However, in order to provide information on smaller geographic areas where nonenrolled TRICARE beneficiaries may be having problems finding either network or nonnetwork providers, TMA supplemented the statewide samples by oversampling from submarkets within each state called Hospital Service Areas (HSA). The HSA geographic designation is derived from a Dartmouth College study that groups zip codes into distinct sets based on the analysis of patient travel patterns to the hospital or hospitals they use most often. TMA endorsed the HSA submarket methodology because these areas are nonoverlapping and encompass all of the United States. In addition, nonenrolled TRICARE beneficiaries reside in almost all of the 3,436 HSAs. TMA’s methodology asks for oversamples from HSAs in the 24 states where 80 percent of nonenrolled TRICARE beneficiaries reside. When the study is complete in fiscal year 2007, TMA will have survey data from 2 HSAs selected randomly from each of the 24 states where the majority of nonenrolled TRICARE beneficiaries live, as well as information from HSAs purposively selected because TRICARE beneficiaries or TROs were concerned with access in these areas. To select the market areas that would be surveyed in fiscal year 2005, TMA randomly selected sites from the individual states and the District of Columbia and randomly selected 12 submarket HSAs within the 20 market areas.
In addition, in order to be able to respond to TRICARE beneficiary concerns that access in some locations was impaired, TMA selected 17 additional submarket HSAs that TRICARE beneficiaries had identified as problem areas in terms of access to health care. Four of these 17 sites were outside the 20 selected state-wide market areas because TRICARE beneficiaries had raised concerns about access issues in these locations. TMA selected its sample for the civilian provider survey from the American Medical Association Masterfile, a data set of U.S. providers that includes data on all providers who have met the necessary educational and credentialing requirements. This Masterfile did not differentiate between TRICARE’s network and nonnetwork civilian providers. However, TMA selected this file because it is widely recognized as one of the best commercially available lists of providers in the United States and contains over 600,000 active providers along with their addresses, phone numbers, and information on practice characteristics, such as their specialty. Although the Masterfile is considered to contain most providers, deficiencies in coverage and inaccuracies in detail remain. Therefore, TMA attempted to update providers’ addresses and phone numbers and to ensure that providers were eligible for the survey. From this Masterfile, TMA expected to randomly sample about 1,000 providers from each market and submarket area—a sample size that would achieve TMA’s desired margin of error. However, in some instances, a sample of 1,000 exceeded the number of providers in the market or submarket area, in which case TMA attempted to contact all providers in that area. Overall, TMA initially sampled about 41,000 providers, including both network and nonnetwork civilian providers. After verifying phone numbers and eliminating ineligible providers, TMA attempted to contact about 33,000 office-based providers in the 20 states and 29 HSAs evaluated in fiscal year 2005.
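The report does not state the margin of error TMA targeted, but the relationship between a sample of about 1,000 providers and the resulting precision can be sketched with the standard formula for a proportion’s margin of error. The calculation below is illustrative only and assumes simple random sampling; TMA’s weighted, stratified design would yield somewhat different precision.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from a
    simple random sample of size n; p = 0.5 gives the conservative
    worst case, and z = 1.96 is the 95% normal critical value."""
    return z * math.sqrt(p * (1 - p) / n)

# With about 1,000 sampled providers per market or submarket area, the
# worst-case margin of error is roughly +/-3 percentage points.
moe = margin_of_error(1000)
```

Under these assumptions, a per-area sample of 1,000 bounds the sampling error on any estimated proportion (for example, the share of providers accepting new TRICARE patients) at about plus or minus 3 percentage points, which is a common target for surveys of this kind.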
When analyzing provider responses, TMA weighted each response so that the sampled providers represented the population from which they were selected. To administer the civilian provider survey, TMA hired a contractor to conduct the fieldwork for this project. The contractor mailed a combined cover letter and questionnaire to the billing managers for all providers in their sample. If the provider did not respond to the mailed questionnaire, TMA followed up with a second mailing 3 weeks later and conducted a telephone interview within 30 days of the first mailing for those who did not respond to the mailed survey. During the survey period, telephone interviewers called each provider’s office up to 10 times in an attempt to obtain a completed survey. Because the overall response rate to the survey was 55 percent, TMA conducted an analysis of its findings to determine whether the results were biased by a high percentage of providers not responding. Although TMA officials told us that OMB’s approval for the fiscal year 2005 survey did not specify a required response rate, OMB’s public guidance specifies that if response rates are lower than 80 percent, agencies need to conduct a nonresponse analysis. Such an analysis is used to verify that nonrespondents to the survey would not answer differently from those who did respond and that the respondents are representative of the target population, thus ensuring that the data are statistically valid. When conducting this analysis, TMA interviewed a sample of providers who did not respond to the original survey and compared their responses and demographics with the original survey respondents. TMA also compared nonrespondents’ demographics with those of the target population of health care providers. The results of TMA’s nonresponse analysis indicate that the survey respondents are representative of the target population of providers. The nonresponse analysis provided additional useful information for TMA.
First, it did not show a difference in the rate that responding and nonresponding network civilian providers were aware of the TRICARE program. However, it did show a statistically significant difference in the rate of awareness between responding and nonresponding nonnetwork civilian providers. These results indicate that familiarity with TRICARE increases a provider’s incentive to respond to the survey. In order to adjust for this bias, TMA could have calculated an adjustment to the sampling weights—an adjustment that has not been applied to the survey results. As a result, the unweighted survey results tend to overstate civilian providers’ awareness and acceptance of TRICARE. Nonetheless, TMA’s survey contractor noted that the survey results are not problematic if the survey is used to compare changes in awareness and acceptance from year to year. Further, TMA’s use of the unadjusted results of the initial survey phase as indicators of areas in which to focus marketing and outreach efforts is appropriate because TMA is using them to make relative comparisons of the areas surveyed. TMA’s survey of civilian providers continues, and its analysts expect to complete data collection for the nation over a 3-year period ending in fiscal year 2007. Although TMA’s efforts meet the mandate’s requirement of surveying 20 market areas each fiscal year until all market areas are surveyed, collecting survey results over this period may limit TMA’s stated goal of deriving an overall national estimate because the national estimate will combine data collected over several years rather than during one relatively short time period, and because different survey instruments will likely be used over time. For example, four additional questions may be added to the fiscal year 2006 survey.
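The sampling-weight adjustment that TMA could have made is typically done with a weighting-class method: within each class of providers, respondents’ base weights are inflated by the inverse of the class response rate so that respondents also stand in for the nonrespondents in their class. The sketch below is a generic illustration of that technique, not TMA’s actual procedure; the classes (network vs. nonnetwork) and the weights shown are hypothetical.

```python
def adjust_weights(records):
    """Weighting-class nonresponse adjustment.

    records: list of dicts with keys 'class' (weighting class),
    'base_weight' (design weight), and 'responded' (bool).
    Returns the adjusted weights for respondents, in input order,
    such that they sum to the full sampled weight in each class."""
    totals = {}
    for r in records:
        t = totals.setdefault(r["class"], {"sampled": 0.0, "responded": 0.0})
        t["sampled"] += r["base_weight"]
        if r["responded"]:
            t["responded"] += r["base_weight"]
    adjusted = []
    for r in records:
        if r["responded"]:
            t = totals[r["class"]]
            # Inflate by the inverse of the weighted class response rate.
            adjusted.append(r["base_weight"] * t["sampled"] / t["responded"])
    return adjusted

# Hypothetical sample: one of two network providers responded (50%
# response rate); two of three nonnetwork providers responded (67%).
sample = [
    {"class": "network", "base_weight": 10.0, "responded": True},
    {"class": "network", "base_weight": 10.0, "responded": False},
    {"class": "nonnetwork", "base_weight": 20.0, "responded": True},
    {"class": "nonnetwork", "base_weight": 20.0, "responded": True},
    {"class": "nonnetwork", "base_weight": 20.0, "responded": False},
]
adjusted = adjust_weights(sample)
```

Because the adjusted weights preserve the total sampled weight within each class, estimates computed from respondents are pulled toward what the full sample would have shown, mitigating the bias that arises when, as here, providers familiar with TRICARE are more likely to respond.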
TMA officials told us that the time lag could potentially impact the results used to derive a national estimate, but that their limited resources for this study prevent them from conducting a nationwide survey under a shorter time frame. The National Defense Authorization Act (NDAA) for fiscal year 2004 directed the Department of Defense (DOD) to monitor nonenrolled TRICARE beneficiaries’ access to care under the TRICARE Standard option. Although the mandate focused on Standard, nonenrolled TRICARE beneficiaries can receive care from both nonnetwork civilian providers through the Standard option and from network civilian providers through the Extra option. Beneficiaries can move freely between these options depending on their choice of civilian provider each time they receive care. Therefore, DOD’s survey was designed to monitor nonenrolled beneficiaries’ access to care from both network and nonnetwork providers. As each cycle of the survey is completed, TMA will be able to project survey results to the sampled market areas. When all cycles of the survey are complete, TMA will be able to project the survey data at the national level. Following is the actual survey instrument that was used to obtain information from civilian providers. The staff administering this survey were not aware of whether the civilian providers they contacted were network or nonnetwork, and the same survey questions, which specifically mentioned the Standard option, were asked of all respondents. Nonetheless, if network civilian providers were to deliver care to nonenrolled beneficiaries, the responding providers’ staff would likely understand that this care would be provided under the Extra option. Therefore, for the purposes of the survey, the term “Standard” referred to both the Standard and Extra option. 
Procedure or service performed:
Biopsy, vertebral body, open; thoracic
Bone marrow or blood-derived peripheral stem cell transplantation; allogenic
Bone marrow or blood-derived peripheral stem cell transplantation; autologous
Cystourethroscopy, with ureteroscopy and/or pyeloscopy; with resection of ureteral or renal pelvic tumor
Ligation or transection of fallopian tube(s), abdominal or vaginal approach, unilateral or bilateral
Ligation or transection of fallopian tube(s), abdominal or vaginal approach, postpartum, unilateral or bilateral, during same hospitalization (separate procedure)
Occlusion of fallopian tube(s) by device (eg, band, clip, Falope ring), vaginal or suprapubic approach
Cordocentesis (intrauterine), any method
Fetal monitoring during labor by consulting physician (ie, non-attending physician) with written report; supervision and interpretation
Fetal monitoring during labor by consulting physician (ie, non-attending physician) with written report; interpretation only
Surgical treatment of ectopic pregnancy; tubal or ovarian, requiring salpingectomy and/or oophorectomy, abdominal or vaginal approach
Surgical treatment of ectopic pregnancy; interstitial, uterine pregnancy requiring total hysterectomy
Surgical treatment of ectopic pregnancy; cervical, with evacuation
Cerclage of cervix, during pregnancy; vaginal
Cerclage of cervix, during pregnancy; abdominal
Vaginal delivery only (with or without episiotomy and/or forceps)
Vaginal delivery only (with or without episiotomy and/or forceps); including postpartum care
External cephalic version, with or without tocolysis
Delivery of placenta (separate procedure)
Cesarean delivery only; including postpartum care
Vaginal delivery only, after previous cesarean delivery (with or without episiotomy and/or forceps)
Cesarean delivery only, following attempted vaginal delivery after previous cesarean delivery
Cesarean delivery only, following attempted vaginal delivery after previous cesarean delivery; including postpartum care
Treatment of incomplete abortion, any trimester, completed surgically
Induced abortion, by dilation and curettage
Induced abortion, by one or more intra-amniotic injections (amniocentesis-injections), including hospital admission and visits, delivery of fetus and secundines
Induced abortion, by one or more intra-amniotic injections (amniocentesis-injections), including hospital admission and visits, delivery of fetus and secundines; with dilation and curettage and/or evacuation
Induced abortion, by one or more vaginal suppositories (eg, prostaglandin) with or without cervical dilation (eg, laminaria), including hospital admission and visits, delivery of fetus and secundines
Induced abortion, by one or more vaginal suppositories (eg, prostaglandin) with or without cervical dilation (eg, laminaria), including hospital admission and visits, delivery of fetus and secundines; with dilation and curettage and/or evacuation
Induced abortion, by one or more vaginal suppositories (eg, prostaglandin) with or without cervical dilation (eg, laminaria), including hospital admission and visits, delivery of fetus and secundines; with hysterotomy (failed medical evacuation)
Multifetal pregnancy reduction(s) (MPR)
Vertebral corpectomy (vertebral body resection), partial or complete, transperitoneal or retroperitoneal approach with decompression of spinal cord, cauda equina or nerve root(s), lower thoracic, lumbar, or sacral; each additional segment (List separately in addition to code for primary procedure)
Strabismus surgery by posterior fixation suture technique, with or without muscle recession (List separately in addition to code for primary procedure)
Injection procedure during cardiac catheterization; for pulmonary angiography
Injection procedure during cardiac catheterization; for selective right ventricular or right atrial angiography
Injection procedure during cardiac catheterization; for selective opacification of arterial conduits (eg, internal mammary), whether native or used for bypass
Injection procedure during cardiac catheterization; for selective left ventricular or left atrial angiography
Injection procedure during cardiac catheterization; for aortography
Injection procedure during cardiac catheterization; for selective coronary angiography (injection of radiopaque material may be by hand)
In addition to the contact named above, Bonnie Anderson, Assistant Director, Kevin Dietz, Cathleen Hamann, Lois Shoemaker, Robert Suls, and Suzanne Worth made key contributions to this report.
The Department of Defense (DOD) provides health care through its TRICARE program. Under TRICARE, beneficiaries may obtain care through a managed care option that requires enrollment and the use of civilian provider networks, which are developed and managed by contractors. Beneficiaries who do not enroll may receive care through TRICARE Standard, a fee-for-service option, using nonnetwork civilian providers or through TRICARE Extra, a preferred provider organization option, using network civilian providers. Nonenrolled beneficiaries in some locations have reported difficulties finding civilian providers who will accept them as patients. The National Defense Authorization Act (NDAA) for fiscal year 2004 directed GAO to provide information on access to care for nonenrolled TRICARE beneficiaries. This report describes (1) how DOD and its contractors evaluate nonenrolled beneficiaries' access to care and the results of these evaluations; (2) impediments to civilian provider acceptance of nonenrolled beneficiaries, and how they are being addressed; and (3) how DOD has implemented the NDAA fiscal year 2004 requirements to take actions to ensure nonenrolled beneficiaries' access to care. To address these objectives, GAO examined DOD's survey results and DOD and contractor documents and interviewed DOD and contractor officials. DOD and contractor officials use various methods to evaluate access to care, and according to these officials, their methods indicate that access is generally sufficient for nonenrolled beneficiaries. For example, in its 2005 survey of civilian providers DOD found that 14 percent of civilian providers surveyed in 20 states were not accepting new patients from any health plan. Of those accepting new patients, about 80 percent would accept nonenrolled TRICARE beneficiaries as new patients. DOD's contractors use various methods to monitor access to care. 
While these methods were not designed specifically to evaluate access for nonenrolled beneficiaries, they provide information that allows contractors to monitor the availability of both network and nonnetwork civilian providers for this population. According to contractor officials, their measures indicate that nonenrolled beneficiaries' access to care is sufficient overall. DOD, its contractors, and beneficiary and provider representatives cited various factors as impediments to network and nonnetwork civilian providers' acceptance of nonenrolled TRICARE beneficiaries and ways to address them. These impediments include concerns specific to TRICARE, including reimbursement rates and administrative issues, as well as issues not specific to TRICARE, such as providers without sufficient practice capacity for additional patients. DOD and its contractors have specific ways to address impediments related to reimbursement rates and administrative issues, but issues that are not specific to TRICARE are more difficult to resolve. For example, DOD has authority to increase reimbursement rates for network and nonnetwork civilian providers in areas where access to care has been impaired. Furthermore, other impediments not specific to TRICARE, such as provider practices at capacity and few providers in geographically remote locations, cannot be readily resolved and create access difficulties for all local residents, including TRICARE beneficiaries. Various DOD offices as well as DOD's contractors are already carrying out the responsibilities outlined by the NDAA for fiscal year 2004--such as educating civilian providers and recommending reimbursement rate adjustments--actions that help ensure nonenrolled beneficiaries' access. However, a senior official was not formally designated to have responsibility for these mandated actions. 
DOD commented on the report, stating that GAO's approach was insightful, but disagreeing with GAO's finding that a senior official was not formally designated to be responsible for taking actions to ensure TRICARE beneficiaries' access to care as outlined in the NDAA. DOD said that an existing directive designating a senior official to serve as program manager for TRICARE met this requirement. However, the directive does not specifically designate an official responsible for ensuring access as specified in the NDAA. Nor did DOD take other actions to designate that a senior official have such responsibilities.
The MHS operated by DOD has two missions: (1) supporting wartime and other deployments and (2) providing peacetime health care. In support of these two missions, DOD operates a large and complex health care system that employs more than 150,000 military, civilian, and contract personnel working in MTFs. DHA oversees the TRICARE health plan, and also exercises authority and control over the MTFs and subordinate clinics assigned to the NCR Medical Directorate. Outside of the NCR Medical Directorate, each military service operates its own MTFs and their subordinate clinics. Each military service recruits, trains, and funds its own medical personnel to administer medical programs and provide medical services to servicemembers. DHA does not have direct command and control of MTFs operated by the military services. In the MHS, health care is provided at no cost to active duty military servicemembers through the TRICARE Prime health care option. Reservists called or ordered to active service for more than 30 days have the same coverage as active duty servicemembers under TRICARE Prime, while inactive reservists may qualify to purchase TRICARE Reserve Select (TRS) coverage. The health care services covered by TRICARE Prime and TRS are generally the same, although the options vary by factors such as enrollment requirements, choices between civilian and MTF providers, required contribution from servicemembers toward the cost of care, and referral and prior authorization requirements. Data from the end of fiscal year 2014 showed that there were 1,587,987 active duty servicemembers enrolled in TRICARE Prime and 121,912 reservists with TRS. Within the United States, TRICARE is organized into three main regions—North, South, and West. (See fig. 1 for a map of the three regions.) DHA and TRICARE Regional Offices are responsible for managing purchased care through contractors in each of these regions. 
In each region, the contractor develops a network of civilian providers—referred to as network providers—to serve all the TRICARE beneficiaries in geographic areas called Prime Service Areas. The TRICARE Regional Offices, in particular, are responsible for monitoring the quality and adequacy of contractors’ provider networks and customer-satisfaction outcomes. Overseas, TRICARE is divided into three areas: Eurasia-Africa, Latin America and Canada, and Pacific. A single contractor, overseen by the TRICARE Overseas Office, serves all three areas. Reservists have a general cycle of coverage during which they are eligible for MHS care through various TRICARE options based on their duty status: preactivation, active duty, deactivation, and inactive. During preactivation, when reservists are notified that they will serve on active duty in support of a contingency operation for more than 30 days in the near future, they are eligible for TRICARE benefits as active duty servicemembers. While on active duty for more than 30 days, reservists are required to enroll in TRICARE Prime and are eligible to receive the same medical services accorded to active duty servicemembers. During deactivation, reservists returning from more than 30 days of active duty in support of a contingency operation may be eligible for 180 days of transitional health care through various TRICARE options. Inactive reservists can choose to purchase TRS coverage, if eligible, or use any other health coverage for which they may be eligible, such as employer-sponsored insurance. For reservists who are injured, become ill, or incur a disease while in a duty status, a line of duty determination must be made and approved to establish eligibility for any MHS medical care associated with the specific injury, illness, or disease.
If this determination is positive, the reservist is eligible to receive medical treatment for the specific injury, illness, or disease described in the line of duty determination through MTFs or civilian providers. DOD’s policies also entitle all deployed DOD civilian employees to medical treatment and services, including mental health services, at the same level and scope as those provided to military personnel in that deployed setting. Upon returning from deployment, DOD civilians are eligible to receive care in an MTF for any mental health conditions determined to be related to their deployment. This care is provided at no cost. Formerly deployed DOD civilians with mental health conditions related to their deployment can also elect to seek treatment outside an MTF and have their care reimbursed, or seek care through their regular health insurance if the health plan will cover the treatment. DOD has established, in regulation and policy, TRICARE Prime access standards related to various aspects of DOD mental health care, including appointment wait times. These include wait time standards for four types of mental health appointments—acute, routine, wellness, and specialty (see table 1). DOD measures its compliance with these wait time standards in several ways, including monitoring the average number of days to be seen for each appointment type, as well as the average percentage of appointments that meet the relevant access standard. The MHS goal is for 90 percent of appointments of each type to meet the relevant access standard. The standards apply to care delivered to TRICARE Prime beneficiaries in either direct care or purchased care. The policy notes that it applies to overseas locations to the extent practicable, but overseas locations often present unique circumstances. The access standards, however, do not apply to care delivered in deployed settings.
Within the MHS’s direct care system, compliance with the access standards is monitored using appointment data available from DOD’s Composite Health Care System, the electronic system through which patient appointments are booked. Appointments are scheduled in the system, which is programmed to count the actual waiting time between the date of the appointment request and the scheduled appointment date. These metrics are readily available at each MTF and at higher organizational levels, including service headquarters. DOD reports that oversight of appointment wait times and the availability of care for all clinics under the command of an MTF is a key responsibility of the leadership team at that facility and that ultimately, the MTF Commander is accountable for performance related to delays in care for all medical care, including mental health. In the purchased care system, detailed data about compliance with DOD’s access to care standards are not available; instead, patient satisfaction with the length of time to an appointment, as measured through TRICARE beneficiary surveys, is used as a surrogate measure of access. While the TRICARE regional contractors submit appointment data to DHA, the contractors do not collect and report the same level of detail on access to care as is available in the direct care system, and the contractors do not use the same information systems. Moreover, the overseas contractor does not collect the same data as the U.S. contractors. Use of MHS mental health care has generally increased among active duty servicemembers. From fiscal year 2009 through fiscal year 2013, the percentage of active duty servicemembers who used either outpatient or inpatient mental health care provided through DOD’s MHS increased across all four services. Across all services, utilization began to decline in fiscal year 2014 (see fig. 2). 
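The two direct care compliance measures described above can be made concrete with a short sketch. The dates and the 7-day threshold below are hypothetical illustrations, not actual Composite Health Care System data; the sketch simply computes the average days to be seen and the percentage of appointments meeting a wait time standard from (request date, appointment date) pairs.

```python
from datetime import date

# Illustrative sketch (hypothetical data, not DOD's actual system) of the two
# wait-time metrics: average days to be seen and percent meeting the standard.
# The 7-day threshold corresponds to the routine appointment standard.

appointments = [
    (date(2015, 3, 2), date(2015, 3, 6)),   # 4-day wait
    (date(2015, 3, 2), date(2015, 3, 12)),  # 10-day wait
    (date(2015, 3, 5), date(2015, 3, 9)),   # 4-day wait
]

STANDARD_DAYS = 7  # routine appointment standard

waits = [(appt - req).days for req, appt in appointments]
avg_days_to_be_seen = sum(waits) / len(waits)
pct_meeting_standard = 100 * sum(w <= STANDARD_DAYS for w in waits) / len(waits)

print(avg_days_to_be_seen)   # 6.0
print(pct_meeting_standard)  # about 66.7, below the MHS goal of 90 percent
```

In this toy example the average days to be seen meets the 7-day standard even though a third of the appointments miss it, which is why DOD monitors both measures rather than either one alone.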
DOD makes a variety of mental health care services available both domestically and overseas to active duty servicemembers, ranging from outpatient services such as psychotherapy and telehealth, to inpatient services such as acute psychiatric care, as well as emergency services. Table 2 lists the covered outpatient and inpatient mental health services available to servicemembers through the various TRICARE options. Army and Air Force officials stated that the same mental health care is generally available domestically and overseas. While a TRICARE fact sheet notes that limitations on mental health care services may apply overseas, the Air Force reported that servicemembers are screened prior to overseas tours to minimize the likelihood of needing care that is not available. Army officials added that the availability of specific mental health services to be provided through direct care in a particular MTF or clinic depends on multiple factors, including the MTF’s size and the mental health needs of the beneficiary population it serves. For example, not all MTFs offer inpatient mental health services. In such cases, an active duty servicemember requiring inpatient mental health services would either be referred to purchased care or travel to an MTF that offers such services. In fiscal year 2014, DOD provided most of its outpatient mental health services through the direct care system, while providing most of its inpatient mental health services through the purchased care system. Among all of DOD’s military services, outpatient mental health care is provided at nearly all MTFs and clinics, while inpatient mental health care is less widespread. For example, the Army reported that in fiscal year 2014, all of its 56 MTFs and clinics provided outpatient mental health care, whereas only 13, or 23 percent, of the Army’s MTFs and clinics provided inpatient mental health care.
For the same year, the Air Force reported that all of its 75 MTFs with mental health clinics provided outpatient mental health services, but only 2 MTFs, or approximately 3 percent, provided inpatient mental health care. The Navy, which also provides health care services for the Marine Corps, reported that in fiscal year 2014, 67 MTFs and clinics provided outpatient mental health care, whereas only 4 MTFs provided inpatient mental health care. Consistent with the availability of outpatient mental health services at MTFs and clinics, about 76 percent of outpatient mental health encounters for active duty servicemembers were provided through direct care across all military services in fiscal year 2014 (see table 3). In contrast, inpatient mental health care was provided mostly through purchased care in that year for all the military services except the Navy, which divided its provision of inpatient mental health care equally between direct care and purchased care. DOD and Army officials reported that they have increasingly focused on providing mental health care through the direct care system—citing reasons such as a need for DOD and the services to be aware of active duty servicemembers’ mental well-being to assess both their fitness for duty and whether they pose any risk to themselves or others. To address the mental health needs of servicemembers, the military services have integrated mental health providers in primary care settings, embedded mental health providers within units, and used telehealth. Each military service has established a program to integrate mental health providers into primary care settings to decrease overall health care costs and improve patient access to mental health services. A DOD official stated that these mental health providers, officially termed internal behavioral health consultants, are typically psychologists and social workers who help primary care providers with any mental health concerns that servicemembers have.
As needed, the consultants also help servicemembers adhere to their treatment regimens. According to DOD, internal behavioral health consultants typically see patients one to four times for a 30-minute appointment per episode of care and refer them to specialty mental health care for a more intensive level of services if they show no improvement. The number of behavioral health consultants varies across DOD’s military services. Army officials stated that in fiscal year 2014, the Army had internal behavioral health consultants in 38 of its 56 MTFs and clinics. The officials reported that integrating internal behavioral health consultants into primary care settings has been an effective method to provide behavioral health care. The Air Force reported that as of fiscal year 2014, 71 of its 75 MTFs had internal behavioral health consultants. The Navy reported that as of fiscal year 2014, the service had placed internal behavioral health consultants in 71 clinics throughout the 27 domestic and overseas Naval MTFs, and has plans to eventually have them in 80 clinics. DOD’s military services have also co-located mental health providers with units to provide an easy point of access to mental health care and to help destigmatize these services. For example, the Army uses Embedded Behavioral Health (EBH) teams, which are multidisciplinary teams of approximately 13 mental health providers and support staff who are stationed near servicemembers’ units and barracks. In fiscal year 2014, the Army had 58 functional EBH teams that supported all of its combat brigade teams. Air Force officials reported that the Air Force embeds mental health providers in units that remotely pilot aircraft, Special Operations units, and at one Air Force base. In fiscal year 2014, the Air Force had 23 embedded mental health providers. 
The Navy uses both the Army’s and the Air Force’s approaches, with embedded mental health teams stationed near servicemembers’ units as well as mental health providers co-located with units. The Navy embeds active duty mental health providers in all of its large seagoing platforms, in all Marine Corps infantry regiments, and in all Navy and Marine Corps Special Operations Commands. In fiscal year 2014, the Navy had 19 locations with embedded mental health teams located near units and an additional 11 mental health providers stationed aboard its ships. The military services have also implemented telehealth programs with varying degrees of complexity to leverage the services’ existing resources and increase the availability of mental health care in areas with provider shortages. The Army has a global multidisciplinary telehealth program, consisting of psychiatrists, psychologists, and other mental health providers, who provide mental health care to servicemembers in regionally remote settings as well as in deployed settings. An Army official stated that the Army uses telehealth when an MTF reaches capacity, supplementing that MTF’s delivery of care with telehealth services instead of using purchased care or requiring the servicemember to wait to receive care. The same official noted that the Army is looking to use its telehealth program to support administrative evaluations in addition to providing mental health treatment. The Air Force uses telehealth to provide psychiatry services to smaller isolated bases. Air Force officials reported that their telehealth program consists of 9 staff, including 5 psychiatrists who use video teleconferencing to support MTFs that do not have an on-site psychiatrist and 2 psychologists who primarily assist with completing administrative evaluations. The Navy Medicine telehealth program office has begun telehealth services at several Navy health care facilities and is developing plans for its systematic use.
Navy officials reported that the Navy used telehealth on an ad hoc basis in the past to provide mental health care at installations that did not have access to mental health providers. Our analysis of DOD staffing data for the direct care system from fiscal years 2009 through 2015 and for the purchased care system from December 2010 through July 2015 shows that DOD increased the number of available mental health providers in both delivery systems, thereby increasing DOD’s capacity to meet servicemembers’ mental health care needs. In its direct care system, DOD increased the number of mental health providers by 15 percent—from 4,608 providers to 5,276 providers—from fiscal year 2009 through fiscal year 2015. This increase was in response to the NDAA for Fiscal Year 2010, which required the Secretary of each military service to increase the number of active duty mental health personnel authorized for the service. Of the three military services, the Army and Air Force increased their mental health provider staffing from fiscal year 2009 through fiscal year 2015. The Army’s addition of 467 mental health providers, from 2,721 in fiscal year 2009 to 3,188 in fiscal year 2015, was the largest increase among the military services. In contrast, the Navy decreased its staffing by 51 mental health providers, from 883 in fiscal year 2009 to 832 in fiscal year 2015 (see fig. 3). Despite the general increase in available MHS mental health providers, shortages for certain types of mental health providers, such as psychiatrists and social workers, have persisted (see fig. 4). For example, DOD staffing data show that by the end of fiscal year 2015, all of the military services had a shortage of psychiatrists, with 127 positions authorized but not filled. In addition, both the Navy and the Air Force had more authorized positions for every type of mental health provider, except mental health nurse practitioners, than were filled. (See app.
1 for more information on service-level staffing shortages in fiscal year 2015.) We previously found that the military services believed that a nationwide shortage of mental health professionals, as well as overarching military-specific challenges such as frequent deployments and relocations and competitive compensation, has adversely affected DOD’s ability to recruit and retain mental health providers. In the purchased care system, two of the three domestic TRICARE managed care contractors increased the number of available network mental health providers in Prime Service Areas, which are geographic areas usually within an approximate 40-mile radius of an MTF. Network providers are those civilian providers who have a contractual relationship with the TRICARE managed care contractor to provide care at a negotiated rate. From December 2010 to July 2015, the total number of network mental health providers in the TRICARE North region increased by 7,887 providers (a 52 percent increase), from 15,216 in December 2010 to 23,103 in July 2015. During the same time period, the total number of network mental health providers in the TRICARE West region increased by 39,025 providers (a 190 percent increase), from 20,515 in December 2010 to 59,540 in July 2015. In contrast, from December 2011 to July 2015, the total number of network mental health providers in the TRICARE South region decreased by 2,149 providers (a 17 percent decrease), from 12,505 in December 2011 to 10,356 in July 2015 (see fig. 5). An official from the TRICARE South contractor stated that while the number of network mental health providers in Prime Service Areas showed a decrease, the South region had an overall increase in mental health providers over this time period if providers outside Prime Service Areas are also included. The official added that despite the decrease, the number of providers contracted was more than sufficient to meet the demand in the vast majority of areas throughout the South region.
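The regional provider-count changes cited above are simple percentage changes. As a quick check, the short sketch below recomputes each cited percentage from the underlying counts reported in the text.

```python
def pct_change(old, new):
    """Percentage change from an old count to a new count."""
    return 100 * (new - old) / old

# Network mental health provider counts reported above, by TRICARE region
print(pct_change(15216, 23103))  # North: about 51.8, the reported 52 percent increase
print(pct_change(20515, 59540))  # West: about 190.2, the reported 190 percent increase
print(pct_change(12505, 10356))  # South: about -17.2, the reported 17 percent decrease
```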
Despite the general increase in mental health providers in the purchased care system, DOD data show that from December 2010 to July 2015 provider shortages, particularly for psychiatrists, persisted for certain Prime Service Areas in all three regions (e.g., Fort Riley, KS; Ft Polk, LA; Traverse City, MI). The provider shortages were more prevalent in the TRICARE West Region, with up to 15 percent of Prime Service Areas contracting fewer psychiatrists than targeted and up to 11 percent of Prime Service Areas contracting fewer behavioral health providers than targeted for the network in certain years. In spite of the general decrease in contracted network mental health providers from December 2011 to July 2015 in the TRICARE South Region, the region had the fewest psychiatrist and behavioral health provider shortages, with no more than 6 percent of Prime Service Areas experiencing psychiatrist shortages, and no more than 2 percent of Prime Service Areas experiencing behavioral health provider shortages during this time period. Officials from the TRICARE South contractor told us that these shortages were possibly due to shortages of psychiatrists, particularly child psychiatrists, both at the national and local levels. TRICARE Regional Office officials told us that the contractors were limited in their ability to address these provider shortages because the provider shortages affect the entire health system and are not specific to the TRICARE program. Unlike the mental health care made available to active duty servicemembers, DOD generally does not make mental health care available to inactive reservists at no cost to them. 
While active duty servicemembers have access to a range of mental health care services provided at no cost to the servicemembers through TRICARE Prime, officials from the various Guard and Reserve components of DOD’s military services told us that DOD generally does not make such mental health care available to inactive reservists—with the exception of conducting mental health assessments and referring reservists to community resources. Reservists who purchase TRS have access to the list of covered inpatient and outpatient services available through TRICARE. However, National Guard officials stated that the premiums, co-pays, and deductibles associated with seeking treatment would not be affordable for some reservists, particularly those of lower ranks and for whom the Reserve or Guard is the sole source of employment. DOD reports that as of June 2014, about 25 percent of the reservists eligible to participate in TRS were enrolled. Regarding the assessment and referral services available to inactive reservists, service officials reported that these services are provided through Directors of Psychological Health (DPH), who are typically licensed mental health providers, and who are responsible for developing community resource guides and cultivating community contacts that can provide either free or discounted mental health care to inactive reservists. The Air and Army National Guard DPH programs are different in certain respects. Air National Guard. An Air National Guard official stated that as of July 2015, the Air National Guard had 93 DPHs who were embedded in 89 Air Force wings, with another 8 national staff who supported the DPHs in individual states. The official stated that Air National Guard servicemembers have more in-person, face-to-face interactions with their DPHs by virtue of the DPHs being embedded with the units. Army National Guard. A National Guard Bureau official stated that as of July 2015, the Army National Guard had 157 DPHs. 
An Air National Guard official reported that unlike Air National Guard DPHs, Army National Guard DPHs may have dual status (for example, as a guardsman on the weekend and a clinical psychologist during the week), and those DPHs can therefore be deployed along with their units, leaving the Army National Guard with the challenge of finding a replacement to serve reservists remaining in their state. In addition to the general DPH role to assess and refer reservists to community mental health resources, Air Force Reserve DPHs can also provide some clinical services, according to an Air Force Reserve official. The official told us that the Air Force Reserve DPHs are clinical social workers who are licensed, credentialed, and privileged to work in military hospitals and clinics. The same official stated that while these DPHs provide referrals and are mostly engaged in prevention efforts, the DPHs can provide some clinical services for reservists, which are then documented as such in the reservists’ electronic medical records. An Air Force Reserve official noted that as of August 2015, there were 30 DPHs embedded in wings where the Air Force Reserve saw the greatest need, and the official said that the Air Force Reserve is likely to get additional DPHs in the future. A Navy official stated that the Navy and Marine Corps Reserve Psychological Health Outreach Program teams, in their roles as DPHs, have instituted a Resiliency Check-In program that provides Navy and Marine Corps reservists with check-in screenings with a mental health provider, information on local community mental health resources, and case management as needed following the screening. The official, who is responsible for the psychological health of Navy and Marine Reserve Forces, credited the program with helping destigmatize mental health care. 
According to the official, these 12 regionally embedded Psychological Health Outreach Program teams are composed of 4-5 licensed mental health professionals led by regional DPHs who serve all of the Navy and Marine Corps Reserve sites using various forms of communication and annual site visits. For each Resiliency Check-In event, the teams typically schedule a minimum 15-minute appointment with each reservist to screen for psychological health and other needs. If a reservist is found to be in need of mental health care or to have other needs that can affect the reservist’s psychological health or unit, the teams provide the reservist with information detailing local health care resources; the teams also follow up with the reservist for as long as needed. Data from DHA’s TRICARE Operation Center for April 2014 through August 2015 showed that MHS-wide, for appointments by active duty servicemembers in the direct care system, the mental health access to care standards were generally met for all domestic and overseas appointment types except routine appointments. The data come from Access to Care Mental Health Summary reports, which measure access both in terms of the average percentage of appointments that met the relevant access standard and the average number of days to be seen. The data show that in terms of the percentage of domestic and overseas mental health appointments that met the access standard each month between April 2014 and August 2015, on average 96 percent or more of specialty and wellness appointments met the 28-day standard, exceeding the MHS goal of 90 percent (see table 4). For acute appointments, more than 90 percent of all three services’ appointments met the 1-day standard on average; however, only 79 percent of NCR Medical Directorate acute appointments met the standard. For routine appointments, only the Air Force exceeded the 90 percent goal. On average, less than half of the Army’s routine appointments met the 7-day standard.
In terms of average days to be seen, MHS-wide specialty and wellness appointments met the 28-day standard by a wide margin, averaging about 11 and 10 days to be seen, respectively (see table 5). There was more variability in terms of meeting the 1-day standard for acute appointments. Specifically, the Air Force and the Army met the standard from April 2014 through August 2015, but the Navy and the NCR Medical Directorate did not—averaging 1.67 and 1.79 days to be seen, respectively. For routine appointments, only the Air Force met the 7-day standard in terms of average days to be seen, and both the Army and the NCR Medical Directorate averaged more than twice the desired 7 days to a routine appointment. We also examined routine appointment access in terms of the average days to the third next available appointment, a prospective rather than retrospective measure of access, which DOD officials suggested may be a more accurate representation of appointment availability. Data for the average number of days to the third next available routine appointment showed that the military services also did not meet the 7-day routine appointment access standard from April 2014 through August 2015 using this measure, with an MHS average of about 11 days to the third next available routine appointment. (See table 6.) Although the mental health appointment access data suggest that DOD is not meeting the 7-day standard for routine appointments, officials suggested that these data are misleading, as some appointments were coded incorrectly—negatively impacting the routine appointment access results. For example, DOD, Navy, and NCR Medical Directorate officials attributed the large numbers of days to routine appointments, in part, to a known technical problem in DOD’s Composite Health Care System, which is used to book appointments.
Officials said that this resulted in non-routine mental health appointments—which averaged about 15 days to be seen—being mistakenly booked as routine, driving up the average days to routine appointments. In February 2016, DOD officials reported that a fix to this technical problem in the appointment booking system was underway and that the services had developed and disseminated specific guidance for booking clerks on the issue of incorrect appointment type categorization. Additionally, Army officials said that a particular type of mental health treatment might be negatively impacting the Army’s performance against the access standard for routine appointments; about one-third of routine appointments were for an intensive outpatient program, which has scheduled start dates, such as the first Monday of the month. An Army official noted that while these appointments were booked as routine, the planned start date for the program may not occur until two weeks after booking. Another Army official noted that categorizing these appointments as routine may be a misapplication of the routine appointment category. An Army official said that as of December 2015, Army officials had begun internal discussions about standardizing the coding for these appointments. Nonetheless, a DOD official also said that the routine appointment data suggested that there may not be enough routine appointments available to meet demand, and that to resolve the access performance issue, mental health clinics might need to make more routine appointments available on their schedules. The official suggested that because access data show that other types of mental health appointments, such as specialty appointments, are being scheduled well within the MHS standard timeframe of 28 days, mental health clinic appointment schedules could be revised to allocate more routine appointments.
However, in February 2016 DOD officials reported that until the issue of incorrectly coded appointments in the Composite Health Care System was resolved, they were limited in their ability to determine the extent to which a shortage of mental health appointments exists. In addition to the factors mentioned above, NCR Medical Directorate officials also reported that difficulty recruiting and hiring qualified clinicians had affected their ability to deliver timely care to their patient population, but they were taking steps to improve access. Officials reported that ongoing enhancements to their staffing, referral, and appointing processes, as well as care delivery options such as telehealth, were intended to improve access performance. NCR Medical Directorate officials said that MTF directors and commanders meet at least monthly to review progress in this area. Although DOD has established standards and is monitoring access for four types of mental health appointments, DOD told us that most mental health appointments provided in the MHS’s direct care system fall into a fifth category—follow-up appointments—which generally do not have an official DOD access to care standard. A DOD official said that acute, routine, wellness, and specialty appointment categories are generally to be used only for a patient’s initial referral or assessment for mental health care and that additional follow-up appointments, for counseling, for instance, would fall into this fifth appointment category; these appointments are coded as ‘future’ appointments. Of the more than 2.6 million direct care mental health appointments that were scheduled from April 2014 through August 2015, about 59 percent were follow-up appointments (see fig. 6).
Regarding the lack of an access standard for follow-up appointments, a DOD official said that, unlike the other appointment types, an access standard for follow-up appointments was not established in regulation, and that such appointments generally are not measured against an official access standard. However, data are available from the Composite Health Care System through which DOD could monitor follow-up appointment access. Federal standards for internal control note that control activities need to be established and reviewed to monitor performance measures and indicators, and that these controls could call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. By not establishing, reviewing, and monitoring an official performance standard for follow-up appointments—the most common mental health appointment type—DOD is missing performance information for the majority of the mental health care it provides. Unlike DOD, some health care systems have established access standards and measurement strategies for follow-up mental health care. For example, the Veterans Health Administration’s policy is that follow-up care for established veterans should be provided within 30 days of the clinically indicated date. In addition, a 2012 report from the Department of Veterans Affairs Office of Inspector General noted that the private sector health care organizations it studied measured follow-up appointments by establishing a predetermined average number of visits (e.g., four) within the first 45–60 days of an initial new patient appointment; the organizations also measured the length of time between subsequent visits (e.g., the amount of time until the second, third, and fourth visits).
DOD officials said that DHA is in the process of improving its oversight and monitoring of access to mental health care, although it did not report plans to develop an access standard for follow-up appointments. A DOD official noted that while there is not currently a DHA-led governance structure to conduct monthly assessments of access to care in specialty care (including mental health), there will be one in the future that mirrors the already established primary care monitoring structure. The official noted that a newly formed advisory board had been established to optimize specialty care, evaluate performance, and make recommendations for continuous process improvement, and another official noted that under this advisory board, a Mental Health Working Group had been chartered. That group has developed a mental health strategic plan—slated for implementation by the end of fiscal year 2016— which contains goals and initiatives related to improving access performance, including the standardization of business processes across all three services and the NCR. As part of its effort to standardize business processes, the Mental Health Working Group is proposing a revision to current coding practices, which it hopes will allow for greater surveillance and the ability to intervene and incentivize compliance with access standards. Additionally, the strategic plan contains an initiative related to identifying benchmarks for access, which is scheduled to be completed by the third quarter of fiscal year 2016. Nonetheless, DOD officials did not indicate that developing an official access standard for follow-up appointments would be part of the strategic planning process. Limited data are available regarding access to care in the MHS’s purchased care system. 
As previously mentioned, in lieu of detailed access to care compliance data, patient satisfaction with length of time to appointment, as measured through beneficiary surveys described later in this report, is used as a surrogate measure of access. In addition to beneficiary surveys, TRICARE Regional Office officials told us that they generally rely on beneficiary reports of access concerns, rather than on appointment wait time data. The officials said that all such reports are investigated and researched, and corrective action is taken when possible. A consultant that reviewed access in the MHS’s purchased care system noted that these methods of monitoring compliance with access standards—beneficiary surveys and monitoring of beneficiary complaints—were consistent with the primary methods used by civilian health plans, even though the methods are not consistent with the direct care system’s focus on appointment wait time data. The consultant also found that developing automated systems to better monitor wait times in purchased care was neither practical nor feasible, given the dispersed networks of providers that make up the purchased care system. Nonetheless, in all three domestic TRICARE regions the 28-day mental health appointment access standard for specialty appointments is monitored for a servicemember’s first mental health appointment that is referred to purchased care. The wait time is measured as the time between a referral authorization and the first specialty service date reported on associated claims. While this measure may overstate the time to care because it does not account for factors such as the time a servicemember may take after receiving a referral authorization before calling for an appointment, the measure appears to be the best available proxy for monitoring compliance with this standard, according to the consultant that reviewed access in the MHS’s purchased care system. 
Data provided by each TRICARE Regional Office showed that the three regions did not meet the MHS goal of having 90 percent of appointments meet the access standard. For specialty mental health appointments with behavioral health providers, more than three-fourths of appointments met the access standard, and for psychiatry appointments, about two-thirds of appointments met the access standard (see table 7). However, as noted previously, this measurement may overstate the amount of time servicemembers must wait before receiving care. Various DOD and Air Force surveys have found that some servicemembers experienced problems or have concerns about accessing mental health care. For example, fiscal year 2011 through fiscal year 2014 data from DOD’s Health Care Survey of DOD Beneficiaries—the principal tool with which DHA monitors the opinions and experiences of MHS beneficiaries directly—found that about one in three servicemembers who had a need for treatment or counseling experienced problems accessing mental health care in the MHS. (See app. II, table 13.) Service-specific results were generally similar to the overall results, although fewer active duty Air Force servicemembers experienced access problems compared to the other services, with an estimated 23 to 29 percent of Air Force servicemembers experiencing problems over the four-year period. Similar to the results for active duty servicemembers, about an estimated one-third of reservists experienced problems accessing mental health treatment over the four years. Results from the Air Force’s 2013 Community Assessment survey found that the majority of Air Force servicemembers did not feel that various mental health access barriers related to logistics and appointment scheduling were applicable to them. (See app. II, table 14.) However, it is unknown what percentage of Air Force servicemembers responding to the 2013 Community Assessment actually sought counseling or other mental health care. 
In response to various questions related to potential access barriers, the 2013 survey results estimate that the listed access barriers did not affect the majority of Air Force servicemembers’ ability to seek counseling or other mental health care services. For example, in response to the statement “It would be difficult to schedule an appointment,” 81 percent of active duty Air Force servicemembers reported that this statement “does not describe me at all.” Additional DOD surveys specific to purchased care have also identified some potential problems regarding access to civilian mental health providers. For example, DOD’s nationwide TRICARE Standard Surveys of Civilian Providers have found that less than half of civilian mental health providers were accepting new TRICARE patients. (See app. II, table 15.) As we have previously reported, surveys from 2008 through 2011 estimated that only about 39 percent of civilian mental health providers were accepting any new TRICARE patients. Data from the 2012 and 2013 TRICARE Standard Survey of Civilian Providers provided by DOD showed that this percentage had not improved over time, with an estimated 37 percent of civilian mental health providers accepting new TRICARE patients during the surveys. A TRICARE Regional Office official suggested that the challenge of finding a civilian mental health provider who is accepting new TRICARE patients should be less of a concern for Prime beneficiaries, because they would typically be seeking care from TRICARE network providers. The Regional Office official also said that they would assist any Prime beneficiary who reported challenges in finding a mental health provider, and a DOD official explained that if a mental health specialist is not available, the contractor (domestic or overseas) is contractually responsible for locating a non-network provider for Prime beneficiaries. 
Additionally, DOD’s TRICARE Standard Survey of Beneficiaries, which surveyed beneficiaries not enrolled in TRICARE Prime (that is, nonenrolled beneficiaries), including reservists with TRS, has also found that these beneficiaries experienced problems accessing a civilian mental health care provider. (See app. II, table 16.) As we previously reported, surveys from 2008 through 2011 show that an estimated 28 percent of these nonenrolled beneficiaries experienced problems accessing civilian mental health care providers. The 2012 and 2013 survey data show similar results, with an estimated 30 percent of nonenrolled beneficiaries experiencing problems accessing services provided by a civilian mental health care provider. In addition to surveys, recent research related to access to DOD mental health care has also identified potential problems with access to care for some types of servicemembers. For example, a 2015 RAND Corporation study about access to behavioral health care found that active duty servicemembers classified as living in geographically remote areas made up to 20 percent fewer visits to behavioral health care providers than those living closer to facilities. The RAND study also found that remote servicemembers needing or wanting behavioral health care face challenges similar to those faced by the rural population generally, including a shortage of appropriate service providers, long travel times to facilities, and few travel options. Additionally, the study found that gaps in broadband service in rural and remote areas impede the use of telehealth services. However, despite the high representation of reservists who live in geographically remote areas, RAND’s analyses did not find that remoteness was associated with lower utilization of behavioral health care in that population. Two Army-specific studies also identified some concerns with access to mental health care. 
A 2010 survey of Army mental health care providers and their patients found that while the majority of the providers reported being able to spend sufficient time with patients (92 percent) and schedule encounters to meet patients’ needs (82 percent), the providers also identified services for which access to treatment was more limited and patient subgroups with an unmet need for additional clinical care or services. For example, the providers’ patients with more severe symptoms and diagnostic and clinical complexity reported higher rates of access problems. Additionally, a study of three samples of Army National Guard soldiers at three time points found that while stigma was the most frequently cited barrier to care (34 percent of soldiers overall), 31 percent of the soldiers reported at least one significant barrier to care related to logistics (where to get help, inadequate transport, difficult to schedule, getting time off work, care costs too much money, no providers available, long distances to care). One logistical barrier—mental health treatment costing too much—was the most commonly reported, with 16 percent of soldiers overall noting this barrier. We also learned about mental health access challenges in our interviews with service officials representing reservists and with representatives from an association representing Reserve officers. Both groups identified mental health access challenges experienced by reservists. However, service officials reported that they typically hear about these types of access challenges anecdotally and do not systematically collect information about access challenges faced by reservists. 
For example, while activated reservists or those with line of duty mental health conditions may have a right to DOD health care, Army National Guard and Army Reserve officials and representatives from the Reserve Officers Association reported that reservists’ access may be limited by their distance from an MTF or from other resources available in their area, particularly if they live in a geographically remote area. An Army National Guard official and a Navy and Marine Corps Reserve official noted that in some communities reservists face challenges finding providers that will accept TRICARE or providers that are accepting new patients. National Guard officials and a Navy and Marine Corps Reserve official also noted that the TRS premiums and other costs are another access barrier for some reservists, particularly for lower ranking servicemembers or those who are otherwise unemployed. Additionally, representatives from the Reserve Officers Association and the Army National Guard reported challenges associated with putting reservists on orders to receive care related to a line of duty condition. An Army National Guard official noted, for example, that the minimum time for orders is an 8-hour day or a 4-hour drill period, which means servicemembers would have to be put on active duty for that day, precluding them from doing anything else, including working their civilian jobs. While access problems were identified in surveys, studies, and our interviews, DOD’s current work to establish a governance structure for mental health access oversight, which includes the implementation of the department’s mental health strategic plan, may address some of these concerns. For example, the governance structure may improve accountability when access standards are not being met. 
However, some problems such as finding available mental health providers may remain because, as noted previously, provider shortages affect the entire health system and are not specific to DOD or the TRICARE program. It is too early to determine the extent to which DOD’s ongoing efforts will resolve all of these concerns. Officials from the Army, Navy, and Marine Corps reported that the availability of mental health care varies depending on the deployed environment. They noted that such care is more variable than the services available domestically and that in general, deployed reservists and DOD civilians have access to the same mental health care available to active duty servicemembers in that deployed environment. This is consistent with findings from our prior work. For example, in 2013 we reported that the health care services that are available aboard Navy vessels largely depend on the type and class of vessel. Larger vessels generally offer a wider range of services—including specialized services—than do smaller vessels, due largely to their more robust crew levels and capabilities. Additionally, MHAT studies about the mental health care available in Afghanistan, where a significant number of deployed servicemembers have been located in recent years, have found that the mental health resources available there were robust but unevenly distributed. The Joint Mental Health Advisory Team 8 (J-MHAT 8) study, conducted in 2012, noted that the range of mental health care provided in Afghanistan included emergency psychiatric care and medical evacuations, psychotherapy, medication management, traumatic event management, outreach, education, awareness training, and medical evaluations. However, the study also found that the providers and clinics that deliver these services were unevenly distributed, resulting in a small number of clinics providing the bulk of the services. 
Navy and Marine Corps officials told us that the availability of mental health care in deployed settings varies depending on a number of factors. For example, a Navy official cited factors including the type of deployed setting, the number of deployed personnel, and the assessed needs of a particular unit. A Marine Corps official noted some additional factors, such as the deployment purpose and the expected stress from the deployment. This official stated that Navy and Marine commanders, after consulting with psychological health advisors and considering a variety of factors, generally determine what, if any, mental health resources deploy with particular units. Mental health care is provided to deployed servicemembers and DOD civilians through various means. Army officials told us, for example, that mental health care in deployed settings is provided to servicemembers through behavioral health officers assigned to brigade combat teams, as well as through combat operational stress control teams. An Army official stated that each brigade deploys with two different behavioral health officers—frequently psychologists and social workers—and staff that support these officers with providing care. The official said that the combat operational stress control teams comprise up to 30 individuals that assist with prevention initiatives and provide support for mental health issues for each brigade combat team. The official added that the division would also have a psychiatrist who helps coordinate care. Navy and Air Force officials similarly stated that in certain deployed settings mental health providers may be co-located or embedded with deployed personnel. The Marine Corps’ combat operational stress control program includes teams of Marine leaders, religious ministry personnel, and mental health providers assigned to battalion-sized units that have been trained in identifying, managing, and preventing combat stress issues. 
Clinic settings and telehealth are also used to deliver mental health care in certain deployed settings. For example, the report for J-MHAT 8 described clinic settings in Afghanistan through which servicemembers received care. That study found, for example, that the majority of mental health services provided in Afghanistan were provided at combat stress clinics and behavioral health clinics, which are outpatient clinics that provide mental health care to any walk-in patients. Mental health care in Afghanistan was also provided in restoration clinics—residential treatment facilities designed to maximize restoration and return-to-duty for servicemembers. J-MHAT 8 also found that telehealth was used in Afghanistan, although most providers surveyed reported that they preferred in-person counseling as a method of care delivery for servicemembers. Army officials noted that over the past several years, the Army has increasingly leveraged telehealth to increase access to care, particularly in remote locations. Data on the number of deployed mental health providers in Afghanistan show that the number of providers available to offer mental health care services increased from 2005 through 2010, before decreasing as the United States scaled back its military operations (see fig. 7). Army officials told us that there are generally more behavioral health providers deployed to areas of combat operations, such as Afghanistan, compared to other non-combat missions. Data provided by the military services and DOD regarding the total number of mental health providers deployed to any location since fiscal year 2014 suggest that deployed mental health provider availability has continued to decrease since the last MHAT study in Afghanistan—MHAT 9 in 2013—when there were 129 mental health providers in Afghanistan, consistent with the overall drawdown in deployed forces. 
For fiscal year 2014, the services reported a total of 114 mental health providers in any deployed setting (of these, the Army reported 64, the Air Force reported 29, the Navy reported 21, and the Marine Corps reported none). As of February 2016, DOD reported that there were a total of 36 mental health providers in deployed settings, of which 10 were located in Afghanistan. Army officials confirmed that the number of deployed mental health providers has decreased since 2013 in accordance with the overall force drawdown in Afghanistan. Army and DOD officials also reported some additional factors that have affected the total number of deployed mental health providers in recent years, such as troops no longer performing combat patrols and the behavioral health providers’ non-combat missions, which include providing local support to Allied missions and supporting redeployment operations. According to DOD, data on access to mental health care in deployed settings are generally not available, and DOD’s access to care standards do not apply in these environments. For example, an Army official noted that the Army’s access to data on mental health encounters in deployed settings is fairly limited. He stated that data availability depends on factors such as commanders’ preferences regarding what data to record and internet connectivity at the deployed site. In lieu of data on access, a DOD official noted that DOD has reviewed the staffing ratios in the MHAT reports in order to monitor access in the deployed environment. According to the 2013 MHAT 9 report, MHAT data indicated that the number of mental health providers in Afghanistan was sufficient to meet the mental health needs of deployed servicemembers. The MHAT studies conducted in Afghanistan from 2009 through 2013 showed some improvement in active duty servicemembers’ opinions about access to mental health care. 
As part of the studies, servicemembers were surveyed about various logistical barriers to accessing mental health care, and the responses were separated by those servicemembers who screened positive for mental health problems and those who did not (see table 8). The MHAT studies found that the percentage of servicemembers agreeing with the statements “mental health services aren’t available” and “it is too difficult to get to the location where the mental health specialist is” decreased significantly since 2009. However, as table 8 shows, the MHAT studies from 2009 through 2013 also found that some servicemembers continued to experience barriers accessing mental health services. For example, the MHAT studies consistently found that a higher percentage of Army servicemembers who screened positive for mental health problems experienced barriers to mental health care compared with those who did not screen positive. The studies also found that the percentage of servicemembers who experienced some access barriers remained fairly stable from 2009 through 2013. For example, during this period the percentage of servicemembers reporting that they would experience difficulty getting time off work for treatment remained the highest compared to the other access barrier questions. In addition, the MHAT studies also identified stigma as a strong potential barrier to seeking mental health care, with servicemembers that screened positive for mental health conditions reporting high levels of stigma-related concerns. For example, in the 2013 study, 49 percent of Army servicemembers that screened positive for a mental health condition reported that they agreed or strongly agreed that they would be seen as weak if they were to seek mental health care. 
Although the MHAT studies showed that some servicemembers continued to experience barriers accessing mental health services over time, as noted previously, the number of servicemembers deployed to Afghanistan has declined in recent years. Additionally, DOD reported in February 2016 that it was working to expand its telehealth efforts there and that efforts such as circulating providers throughout the battlefield and organizing providers in teams had improved the utilization and efficiency of deployed mental health providers since the last MHAT report in 2013. Providing our nation’s military servicemembers with timely access to mental health care is a crucial responsibility of DHA and the military services. Recent data show that DOD is generally meeting three of its four appointment wait time access standards in its direct care system—where the majority of outpatient mental health care is delivered. However, recent DOD surveys also show that about a third of servicemembers reported that they experienced problems accessing care—indicating that servicemember perceptions of access and DOD’s access to care standards may not be aligned. Our work also shows that despite federal internal control standards that call for agencies to have sufficient information to monitor agency performance, DOD lacks an important standard for follow-up appointments, which represent about 59 percent of the mental health care provided in the MHS’s direct care system. Without such a standard, DOD does not have a mechanism for holding MTFs or the services accountable for providing timely access to the most common mental health care provided in the direct care system. Nonetheless, DOD has efforts underway to expand the mental health care available to its servicemembers and to improve access to that care. 
For example, DOD’s current work to establish a governance structure for the oversight of mental health access, which includes the department’s mental health strategic plan, could help DOD and the military services identify, monitor, and improve the performance of those military services or MTFs not performing up to standards and help ensure that servicemembers have timely access to necessary mental health care. However, it is too soon to determine what the impact of DOD’s efforts will be on improving access. Additionally, some factors outside of DOD’s control, such as the nationwide shortage of mental health providers, may continue to limit DOD’s ability to address all identified access problems. To enhance oversight of access to mental health care and help ensure that servicemembers have timely access to mental health care, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to establish an access standard for mental health follow-up appointments and regularly monitor data on these appointments. We provided a draft of this report to DOD for comment. DOD provided written comments, which are reproduced in appendix III. DOD also provided technical comments that were incorporated, as appropriate. In its written comments, DOD concurred with our recommendation, but noted that developing a standard for follow-up mental health appointments would be difficult. Nonetheless, the agency reported that it would review appropriate methods to develop follow-up standards. DOD did not provide a time frame for implementing this recommendation. We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This appendix provides results from our analysis of DOD fiscal year 2015 quarterly mental health staffing reports by military service and the National Capital Region (NCR) Medical Directorate. These reports are submitted by the services and the NCR Medical Directorate to the Office of the Assistant Secretary of Defense for Health Affairs human capital office each quarter to provide status updates on mental health care provider staffing levels. Tables 13 through 16 contain results for questions relevant to access to mental health care in DOD’s Military Health System from four recent surveys. The four surveys are: (1) the fiscal year 2011–2014 Health Care Survey of DOD Beneficiaries; (2) the 2013 Air Force Community Assessment survey; (3) the TRICARE Standard Survey of Civilian Providers for 2012 and 2013; and (4) the TRICARE Standard Survey of Beneficiaries for 2012 and 2013. Randall B. Williamson, (202) 512-7114 or williamsonr@gao.gov. In addition to the contact named above, Lori Achman, Assistant Director; Muriel Brown; Krister Friday; Jacquelyn Hamilton; Dharani Ranganathan; Christina Ritchie; and Helen Sauer made key contributions to this report.
DOD reports that between 2005 and 2013, the number of individuals who received mental health care through DOD's MHS grew by 32 percent. MHS mental health care is provided free to active duty servicemembers. Reservists and DOD civilians are eligible for MHS care under certain circumstances. The National Defense Authorization Act for Fiscal Year 2015 contains a provision for GAO to assess the availability and accessibility of mental health care in DOD's MHS for military servicemembers. This report examines, among other things, (1) the mental health care DOD makes available to servicemembers domestically and overseas and (2) the accessibility of mental health care provided to servicemembers domestically and overseas. GAO analyzed recent, available data on MHS mental health utilization, staffing, and appointment access and compared access data to relevant DOD standards. GAO reviewed mental health data from several DOD surveys as well as documents related to MHS mental health care. GAO also interviewed DOD and service officials and representatives from servicemember and provider associations. The Department of Defense's (DOD) Military Health System (MHS) makes a variety of inpatient and outpatient mental health care available to active duty servicemembers and activated National Guard and Reserve servicemembers (reservists) domestically and overseas through its TRICARE health care system. The type of care includes psychological testing and assessment, psychotherapy, medication management, and inpatient psychiatric care. This care is typically available through military treatment facilities and clinics (direct care), and it is supplemented by care provided through networks of civilian providers (purchased care). In fiscal year 2014, DOD provided 76 percent of 2.9 million outpatient mental health encounters through direct care and 69 percent of 0.2 million inpatient mental health bed days through purchased care. 
To deliver mental health care, the military services use a range of strategies including telehealth, embedding mental health providers within units, and integrating mental health providers in primary care. While DOD has increased the number of available mental health providers in both direct and purchased care in recent years, DOD data indicate that the military services still face shortages for certain providers, such as psychiatrists. Unlike the care available for active duty servicemembers and activated reservists, MHS mental health care for inactive reservists is generally limited to referrals to non-DOD community resources or, if eligible, the reservists can purchase coverage for health care, including mental health care, through TRICARE Reserve Select, a premium-based health plan for reservists. DOD data on domestic and overseas direct care from April 2014 through August 2015 show that MHS-wide DOD's access to care standards were generally met for three of four mental health appointment types. However, in the case of routine appointments—initial appointments for a new or exacerbated condition—data show that, except in the Air Force, MHS routine mental health appointments generally did not meet the 7-day access standard. DOD and service officials attributed this to several factors, including some appointments being incorrectly coded, thus negatively impacting the routine appointment access results. They told GAO that DOD was taking steps to address the coding problem and improve oversight of mental health access. Additionally, the data show that about 59 percent of mental health appointments are follow-up appointments, which generally do not have an official DOD access standard. Federal internal control standards call for agencies to have sufficient information to monitor agency performance. 
By not establishing and monitoring a follow-up appointment standard, DOD cannot hold the military services accountable for the majority of mental health care provided in the direct care system. For purchased care, limited access data are available, and DOD instead relies on beneficiary surveys and complaints to monitor access—consistent with methods used by civilian health plans. DOD surveys have identified access problems for some servicemembers. For example, a DOD beneficiary survey estimated that about one-third of active duty servicemembers experienced problems accessing mental health care from 2011 through 2014. Additionally, provider surveys from 2012 and 2013 found that only an estimated 37 percent of civilian mental health providers were accepting any new TRICARE patients. DOD's ongoing efforts to improve oversight of mental health access, including implementing a strategic plan, may help address some of these problems, but it is too early to tell. GAO recommends that DOD establish an access standard for mental health follow-up appointments and regularly monitor data on these appointments. DOD concurred with GAO's recommendation.
Tax expenditures are preferential provisions in the tax code, such as exemptions and exclusions from taxation, deductions, credits, deferral of tax liability, and preferential tax rates, that result in forgone revenue for the federal government. The revenue that the government forgoes is viewed by many analysts as spending channeled through the tax system. However, tax expenditures and their relative contributions toward achieving federal missions and goals are often less visible than spending programs, which are subject to more systematic review. Many tax expenditures—similar to mandatory spending programs—are governed by eligibility rules and formulas that provide benefits to all those who are eligible and wish to participate. Tax expenditures do not compete overtly with other priorities in the annual budget, and spending embedded in the tax code is effectively funded before discretionary spending is considered. Tax expenditures generally are not subject to congressional reauthorization and, therefore, lack the opportunity for regular review of their effectiveness. We have long recommended greater scrutiny of tax expenditures. Some tax expenditures may be ineffective at achieving their social or economic purposes, and information about their performance as well as periodic evaluations can help policymakers make more informed decisions about resource allocation and the most effective or least costly methods to deliver federal support. Performance measurement is the ongoing monitoring and reporting that focuses on whether programs have achieved objectives in terms of the types and levels of activities or outcomes of those activities. Program evaluations typically examine a broader range of information on program performance and its context than is feasible to monitor on an ongoing basis. A “program” may be any activity, project, function, or policy that has an identifiable purpose or set of objectives, including tax expenditures. 
In the context of community development programs, impact evaluations can be a useful tool to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. This form of evaluation is employed when external factors are known to influence the program’s outcomes, in order to isolate the program’s contribution to the achievement of its objectives. Importantly, challenges in performance measurement and evaluation are not unique to tax expenditures, as agencies have encountered difficulties in measuring the performance of spending programs as well. (GPRAMA, Pub. L. No. 111-352, 124 Stat. 3866 (2011), amends the Government Performance and Results Act of 1993, Pub. L. No. 103-62, 107 Stat. 285 (1993).) Such information can help inform choices in setting priorities as government policymakers address the rapidly building fiscal pressures facing our national government. For fiscal year 2010, we identified 23 tax expenditures that fund community development activities. Appendix II lists each tax expenditure with information on its estimated cost, type, and taxpayer group, as well as enactment and expiration dates. Five tax expenditures primarily promote community development in economically distressed areas, including Indian reservations; these programs cost the federal government approximately $1.5 billion in fiscal year 2010. Nine tax expenditures both support community development and address other federal mission areas, such as rehabilitating historic or environmentally contaminated properties for business use as well as constructing a range of transportation facilities, such as airports and docks, and water and hazardous waste systems. These multipurpose tax expenditures cost the federal government approximately $8.7 billion in fiscal year 2010. 
Two large state and local bond tax expenditures also may support community development, although community development activities account for only a portion of the total costs of those tax expenditures. Finally, the federal government has periodically offered temporary tax relief following certain disasters, including six packages of tax provisions focused on specific areas as well as one provision available for any presidentially declared disaster area. Figure 1 illustrates the mix of various tax expenditures that support community development. The federal government has five tax expenditures primarily to promote community development in economically distressed areas, such as low-income communities and Indian reservations. As noted below, all but one of these programs have expired. The Empowerment Zones and Renewal Communities (EZ/RC) programs ($730 million in revenue losses in fiscal year 2010) were established to reduce unemployment and generate growth in economically distressed communities that were designated through a competitive process. Initially, the EZ program offered a mix of grants and tax incentives for community and economic development, but later EZ rounds and the RC program offered primarily tax incentives for business development. While eligibility varied slightly by program and round, the 40 EZ- and 40 RC-designated communities were selected largely on the basis of poverty and unemployment rates, population, and other area statistics based on Decennial Census data. The RC tax provisions expired at the end of 2009, and the EZ tax provisions expired at the end of 2011. The New Markets Tax Credit (NMTC) ($720 million in revenue losses in fiscal year 2010) encourages investment in impoverished, low-income communities that traditionally lack access to capital. 
Whereas the EZ/RC programs target designated communities, the NMTC targets Census tracts where the poverty rate is at least 20 percent or where median family incomes do not exceed 80 percent of such incomes within a state or a metropolitan area. In January 2010, we reported that 39 percent of the Census tracts qualified for the NMTC program and 36 percent of the U.S. population lived in these Census tracts. The NMTC expired at the end of 2011. Two tax expenditures—Tribal Economic Development Bonds and the Indian employment credit—target Indian tribal reservations. Indian tribes are among the most economically distressed groups in the United States, and tribal reservations often lack basic infrastructure commonly found in other American communities, such as water and sewer systems as well as telecommunications lines. Created under the American Recovery and Reinvestment Act of 2009 (the Recovery Act), the temporary bond authority ($10 million in revenue losses in fiscal year 2010) provided tribal governments with greater flexibility to use tax-exempt bonds to finance economic development projects. The $2 billion bond authority was to be allocated by February 2010, but Treasury and IRS have extended deadlines to reallocate unused bond authority. The Indian employment credit expired at the end of 2011. The Recovery Act also created temporary Recovery Zone bonds—including Recovery Zone Economic Development Bonds and Recovery Zone Facility Bonds—allocated among the states and among counties and large municipalities within the states based on employment losses in 2008. These bond authorities ($60 million in outlays in fiscal year 2010) expired at the end of 2010. Four of the five community development tax expenditures targeted to economically distressed areas have a statutory limit, such as a specified number of community designations, volume cap, or allocation amount, as shown in table 1. 
Although the allocation processes varied, these tax expenditures resemble grants in that an agency—either a federal agency or a state or local government—selects the qualifying communities, community development entities (CDEs), or projects to receive the limited allocation available. For the EZ/RC program, communities nominated by their state and local governments had to submit a strategic plan showing how they would meet key EZ program principles or a written “course of action” with commitments to carry out specific legislatively mandated RC activities. In selecting the designated communities, HUD and USDA were required to rank EZ nominees based on the effectiveness of their plans, whereas HUD was required to designate RCs based in part on poverty, unemployment, and, in urban areas, income statistics. For designated EZs and RCs, state and local governments were responsible for allocating certain tax provisions with specified limits, including the RC Commercial Revitalization Deduction and EZ Facility bonds. For the NMTC program, the annual tax credit allocation limit was $3.5 billion for fiscal years 2010 and 2011. The CDFI Fund awards tax credit allocations to winning CDE applicants based on application scoring by peer review panels. The CDEs, in turn, invest in qualified low-income community investments. As of November 1, 2011, the CDFI Fund had allocated $29.5 billion in NMTC authority available from 2001 to 2010 and announced $3.6 billion in 2011 tax credit allocations on February 23, 2012. (For more on the selection process, see GAO, Community Development: Federal Revitalization Programs Are Being Implemented, but Data on the Use of Tax Benefits Are Limited, GAO-04-306 (Washington, D.C.: Mar. 5, 2004).) For the Recovery Zone bond programs, the national volume cap was $10 billion for Recovery Zone Economic Development Bonds and $15 billion for Recovery Zone Facility Bonds. 
State and local governments were responsible for allocating bond issuance authority to specific projects. Tribal Economic Development Bonds had a national volume cap of $2 billion. Tribal governments applied for allocations to issue bonds for specific projects. Other tax expenditures available in economically distressed communities are comparable to entitlement programs, for which spending is determined by statutory rules for eligibility, benefit formulas, and other parameters rather than by Congress appropriating specific dollar amounts each year. Such tax expenditures typically make funds (through reduced taxes) available to all qualified claimants, regardless of how many taxpayers claim the tax expenditures, how much they claim collectively, or how much federal revenue is reduced by these claims. For example, businesses may claim Indian employment tax credits for employing Indian tribal members and their spouses without limit on the numbers or total amounts of claims. Similarly, businesses located in EZs and RCs may claim the EZ/RC Employment Credit and the Work Opportunity Tax Credit for employing eligible residents within an EZ or RC area without an aggregate limit on such tax credits. The rehabilitation tax credits and brownfields expensing likewise operate without allocation limits. The term "brownfield site" means real property, the expansion, redevelopment, or reuse of which may be complicated by the presence or potential presence of a hazardous substance, pollutant, or contaminant. The two rehabilitation tax credits cannot both be claimed for a single rehabilitation project. Eligible expenditures include costs incurred for rehabilitation and reconstruction of certain older buildings; rehabilitation includes renovation, restoration, and reconstruction and does not include expansion or new construction. One of these incentives expired at the end of 2009, and the expensing of environmental remediation costs expired at the end of 2011. Two tax expenditures fund production of affordable rental housing for low-income households—the Low-Income Housing Tax Credit (LIHTC) and tax-exempt rental housing bonds. 
Under the LIHTC, a 9 percent tax credit is available for new construction or substantial rehabilitation projects not otherwise subsidized by the federal government, and a 4 percent tax credit is available for projects receiving other federal subsidies, including rental bond financing. Affordable housing projects must satisfy one of two income-targeting requirements: 40 percent or more of the units must be occupied by households whose incomes are 60 percent or less of the area median gross income, or 20 percent or more of the units must be occupied by households whose incomes are 50 percent or less of the area median gross income. For fiscal year 2010, two grant programs also helped provide gap financing for LIHTC housing development following disruption of the tax credit market in 2008. Federally tax-exempt and tax credit bonds issued by state and local governments also contribute to community development and other federal mission areas by financing infrastructure improvements and other projects. For example, state and local governments may issue private activity bonds to finance airports, docks, and other transportation infrastructure; large business projects tied to the employment of residents in Empowerment Zones; and water or wastewater facilities that enable communities to meet community facilities needs and support development. Qualified Zone Academy Bonds (QZAB)—the authority for which expired at the end of 2011—may be used for renovating school facilities, purchasing equipment, developing course materials, or training personnel at qualified public schools in economically distressed areas, including designated EZs or RCs. Whereas private activity bonds are used to support specific private activities and facilities often intended to generate economic development, state and local governments may also issue tax-exempt public-purpose state and local bonds and Build America Bonds (BAB) to help finance public infrastructure and facilities. 
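The two LIHTC income-targeting tests described above (40 percent of units at or below 60 percent of area median gross income, or 20 percent of units at or below 50 percent) can be sketched as a simple eligibility check. This is an illustrative simplification only: the function name and inputs are hypothetical, and actual determinations involve household-size adjustments, imputed income limits, and other statutory rules not modeled here.

```python
def meets_lihtc_income_test(unit_incomes, area_median_income):
    """Illustrative check of the two LIHTC income-targeting tests.

    unit_incomes: list of household incomes, one per occupied unit.
    area_median_income: area median gross income (AMI).
    Returns True if either the 40/60 test or the 20/50 test is satisfied.
    """
    n = len(unit_incomes)
    at_or_below_60 = sum(1 for i in unit_incomes if i <= 0.60 * area_median_income)
    at_or_below_50 = sum(1 for i in unit_incomes if i <= 0.50 * area_median_income)
    test_40_60 = at_or_below_60 >= 0.40 * n  # 40% of units at <= 60% of AMI
    test_20_50 = at_or_below_50 >= 0.20 * n  # 20% of units at <= 50% of AMI
    return test_40_60 or test_20_50
```

For instance, under this sketch a hypothetical 10-unit project in an area with a $60,000 median income would qualify with four units occupied by households earning $30,000, since four units (40 percent) fall at or below 60 percent of the area median.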
In 2008, we reported that a majority of state and local bonds issued in 2006 were allocated for education or general purposes; for the latter category, it was not clear what activities or facilities were funded by the bonds. Given that community development activities comprise only a portion of governmental bonds, we did not sum the revenue losses for the two general bond provisions, to avoid overstating federal support for community development. As shown in table 2, all of the multipurpose community development tax expenditures involve other entities in addition to IRS in administering the tax benefits. Five multipurpose tax expenditures resemble grants in that state and local governments oversee the allocation process to select qualifying projects to receive the limited allocation available. For the LIHTC, for example, state housing finance agencies (HFA) award 9 percent credits to developers for low-income housing projects based on each state’s qualified allocation plan, which generally establishes a state’s funding priorities and selection criteria. Although the federal government does not set specific limits for general-purpose state and local bonds and BABs, private activity bond financing—including for rental housing and water systems—is generally subject to an annual volume cap for each state, and QZABs and bond financing for certain transportation facilities also have statutory allocation limits. The rehabilitation and brownfields tax expenditures resemble entitlement programs in that these tax incentives have no allocation limits and are available to all eligible claimants. In addition to IRS’s role in administering tax law, other federal and state agencies play a role in certifying that the properties are eligible for tax benefits. 
For the 20 percent rehabilitation tax credit for certified historic structures, NPS, with the assistance of State Historic Preservation Offices, certifies historic structures, approves rehabilitation applications, and confirms that completed rehabilitation projects meet the Secretary of the Interior’s Standards for Rehabilitation. For the brownfields tax expenditures, state environmental agencies certify eligible properties. The federal government has offered various mixes of temporary tax incentives and special rules to stimulate business recovery and provide relief to individuals after certain major disasters. See appendix VI for a detailed list of 45 tax benefits made available for specific disaster areas. Business recovery is a key element of a community’s recovery after a major disaster. To assist New York in recovering from the September 11, 2001, terrorist attacks, Congress passed a 2002 package with seven tax benefits targeted to the Liberty Zone in lower Manhattan. In the aftermath of the 2005 Gulf Coast hurricanes, Congress enacted the Gulf Opportunity Zone Act of 2005 (GO Zone Act), offering 33 tax benefits in part to promote business recovery and provide debt relief for states. A 2007 Kansas disaster relief package provided 13 tax benefits for 24 counties in Kansas affected by storms and tornadoes that began on May 4, 2007. A 2008 Midwest disaster relief package targeted 26 tax benefits for selected counties in 10 states affected by tornadoes, severe storms, and flooding from May 20 through July 31, 2008. Also in 2008, Congress enacted a package offering eight tax benefits available to any individual or business located in any presidentially declared disaster area during calendar years 2008 and 2009. 
The preponderance of the disaster tax incentives offered in the six legislative packages we examined were modifications of existing tax expenditures, including increased allocations for the NMTC, LIHTC, rehabilitation tax credits, and tax-exempt bond financing. Several tax packages have offered accelerated first-year depreciation allowing businesses to more quickly deduct costs of qualified property, as well as partial expensing for qualified disaster cleanup and environmental remediation costs. Other tax incentives available for individuals in disaster areas included increased tax credits for higher education expenses and relief from the additional 10 percent tax on early withdrawals of retirement funds. An eligible disaster area may encompass communities that were economically distressed before the disaster as well as other communities, and taxpayers in the qualified area may be eligible for some tax incentives even if they did not necessarily sustain losses in the disaster. For those disaster tax incentives available to individuals and businesses as long as they meet specified federal requirements, the full cost to the federal government depends on how many taxpayers claim the provisions on their tax returns. For community development, tax expenditures are not necessarily an either/or alternative, and they may be combined to support certain community development activities. The design of each community development tax expenditure we reviewed appears to overlap with that of at least one other tax expenditure, as the following examples illustrate. Five tax expenditures targeted similar geography—economically distressed areas including tribal areas—although the specific areas served varied. Within the EZ- and RC-designated communities, a variety of tax incentives were available to help reduce unemployment and stimulate business activity. 
Seven bond tax expenditures share a common goal of financing infrastructure development. The various bond authorities are not necessarily duplicative in that they allow flexibility in tax-exempt bond financing for similar projects with different ownership characteristics. For example, water and sewer facilities can be financed through public-purpose governmental bonds if a governmental entity is the owner and operator or through private activity bonds if the owner and operator is a private business. Multiple tax expenditures—including the NMTC, several EZ/RC incentives, as well as the rehabilitation and brownfields tax expenditures—can be used to fund commercial buildings. Within this broad area of overlap, the tax expenditures are not necessarily duplicative in that some target certain types of buildings. The various tax expenditures that can be used to fund commercial buildings have geographic or other targets that sometimes coincide and sometimes do not. For example, the 20 percent rehabilitation tax credit targets certified historic structures and the 10 percent rehabilitation credit is available for other older structures, but these eligible structures may or may not fall within the low-income communities eligible for NMTC assistance. Various tax benefits made available for certain disaster areas were largely modifications of existing tax expenditures. The community development tax expenditures we reviewed also may potentially overlap with federal spending programs. As discussed above, our May 2011 report identified overlap among 80 economic development spending programs administered by four agencies—Commerce, HUD, SBA, and USDA. Appendix VII discusses areas of overlap among the economic development spending programs that are similar to the areas of community development tax expenditure overlap discussed above. Disaster tax aid may also potentially overlap with federal financial assistance offered through disaster assistance grants and loans. 
Areas of overlap with multiple tax expenditures funding the same community development project may not represent unnecessary duplication, in part, because some tax expenditures are designed to be used in combination. As an example, the 4 percent LIHTC is designed to be used in combination with rental housing bonds. In another example, the 20 percent historic preservation tax credit may be used in combination with other community development tax expenditures, including the NMTC and LIHTC. Under the Housing and Economic Recovery Act of 2008, state HFAs are allowed to consider historic preservation as a selection factor in their qualified allocation plans to promote redeveloping historic structures as affordable housing. As shown in table 3, federal tax laws and regulations impose limits on how community development tax expenditures can be combined with each other and with spending programs to fund the same individual or project. For example, employers cannot double dip by claiming two employment tax credits for the same wages paid to an individual. Whereas business investors may claim accelerated depreciation for LIHTC and NMTC projects, businesses generally may not claim accelerated depreciation for private facilities financed with tax-preferred bonds. For the rehabilitation tax credits and brownfield tax incentives, taxpayers may not claim costs funded by federal or state grants. Also, rehabilitation costs claimed for the 20 percent credit cannot be counted toward the adjusted basis of a property for the purposes of calculating the amount of other federal tax credits claimed for the same project; as a result, the effective tax savings from using the 20 percent credit with other federal tax credits are less than the sum of the tax savings provided by each of the credits and deductions if they could be used together without this restriction. 
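The basis-adjustment interaction described above can be illustrated with a simplified numeric sketch. The 20 percent credit rate comes from the source; the function name, the 9 percent rate for a hypothetical companion credit, and the dollar figures are all illustrative assumptions, and the computation omits the many real-world rules that govern actual credit calculations.

```python
def combined_credit_savings(total_eligible_basis, rehab_costs, other_credit_rate):
    """Illustrative sketch of the basis-adjustment rule summarized in the text:
    rehabilitation costs claimed for the 20 percent credit cannot also be
    counted in the basis used to compute another federal credit on the project.
    All inputs are hypothetical; this is not an actual tax computation.
    """
    rehab_credit = 0.20 * rehab_costs
    # The other credit is computed on the basis excluding the claimed rehab costs.
    other_credit = other_credit_rate * (total_eligible_basis - rehab_costs)
    combined = rehab_credit + other_credit
    # Naive sum if the restriction did not apply:
    naive_sum = rehab_credit + other_credit_rate * total_eligible_basis
    return combined, naive_sum
```

Under these assumptions, a hypothetical $5 million project that includes $1 million of qualified rehabilitation costs and a 9 percent companion credit would yield combined savings of about $560,000, less than the roughly $650,000 naive sum of the two credits computed independently.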
The information on tax law and regulatory limits listed in table 3 is not exhaustive; additional limits may apply under other federal laws and regulations. An area of potential overlap also exists among the tax expenditures subsidizing community development activities and CRA regulatory requirements for depository institutions in helping to meet the credit needs of the communities in which they operate. Banks earn positive consideration toward their CRA regulatory ratings by investing in projects also receiving certain tax benefits. In 2007, we reported that investors used the NMTC and LIHTC to meet their CRA requirements. At that time, over 40 percent of NMTC investors reported that they used the tax credit to remain compliant with CRA. NMTC investors using the tax credit to meet CRA requirements also viewed it as very or somewhat important in their decision to make the investment. Nearly half of NMTC investors we surveyed in 2007 reported that they made other investments eligible for the LIHTC, and nearly three-quarters of those investors using both tax credits were also required to comply with the CRA. Federal community development financing is fragmented, with multiple federal agencies administering related spending programs as well as with multiple federal, state, and local agencies helping administer certain tax expenditures. As we have previously reported, mission fragmentation and program overlap may sometimes be necessary when the resources and expertise of more than one agency are required to address a complex public need. For example, IRS, NPS, and state historic preservation offices are involved in administering the 20 percent historic preservation tax credit for rehabilitating historic structures. NPS oversees compliance with technical standards for historic preservation, and IRS oversees financial aspects of the tax credit. NPS and IRS have partnered, with IRS providing guidance, including frequently asked questions about the tax credit, on the NPS website. 
At the same time, fragmentation can sometimes result in administrative burdens, duplication of efforts, and inefficient use of resources. Applicants may need to apply for tax expenditures and spending programs at multiple agencies to address the needs of a distressed area or finance a specific project. For example, owners and developers seeking to restore an historic structure for use as affordable rental housing would need to apply separately to NPS for the 20 percent historic rehabilitation credit as well as to the state HFA for a LIHTC allocation. Achieving results for the nation increasingly requires that federal agencies work together to identify ways to deliver results more efficiently and in a way that is consistent with limited budgetary resources. Agencies and programs working collaboratively can often achieve more public value than when they work in isolation. To address the potential for overlap and fragmentation among federal programs, we have previously identified collaborative practices agencies should consider implementing in order to maximize the performance and results of federal programs that share common outcomes. These practices include defining common outcomes; agreeing on roles and responsibilities for collaborative efforts; establishing compatible policies and procedures; and developing mechanisms to monitor, assess, and report on performance results (GAO-11-318SP). To the extent possible, data sharing is a way to reduce collection costs and paperwork burdens imposed on the public. In general, IRS only collects information necessary for tax administration or for other purposes required by law. As a result, IRS does not collect basic information about the numbers of taxpayers using some community development tax expenditures. We have consistently reported that IRS does not have data on the use of various expensing and special depreciation incentives available to encourage investment in EZ/RC communities, tribal reservations, and disaster areas. 
For tax credits, IRS has data on the numbers of taxpayers and aggregate amounts claimed, but the data often do not tie use of the tax credits to specific communities. Location information is critical to identifying the community where an incentive is used and determining the effect of the tax benefit on local economic development. For bonds, IRS collects data on the amount of bonds issued and broad purpose categories for governmental bonds and allowable uses for qualified private activity bonds. As we reported in 2008, while the information collected is useful for presenting summary information, it provides only a broad picture of the facilities and activities for which the bonds are used. Although this information is sufficient for IRS to administer the tax code, it provides little information for use in measuring performance. As a result, information often has not been available to help Congress determine the effectiveness of some tax expenditures or even identify the numbers of taxpayers using some provisions. Table 4 summarizes the types of information, including limitations and potential gaps, IRS collects for different types of community development tax expenditures. Our systematic review of literature for select community development tax expenditures generally found few studies that attempted to assess the effectiveness of programs in promoting certain measures of community development, such as reducing poverty or unemployment rates. We reviewed government studies and academic literature on the following community development tax expenditures: the NMTC, EZ tax program, disaster relief tax provisions, and the rehabilitation tax credits. In reviewing this literature, we focused on studies that attempted to analyze the impact of the tax expenditures on community development through empirical methods. We also summarized our prior observations and recommendations on options to improve tax expenditure design and considerations in authorizing similar community development tax programs. 
For the NMTC, we did not identify any empirical studies issued since our last report in January 2010. For the EZ program, we identified several studies published since our most recent report in March 2010 that attempted to measure the effect of the program on some measure of community development, as described below. We identified one study on the rehabilitation tax credits that attempted to measure one aspect of community development. We did not identify any empirical studies on disaster tax relief provisions. The scarcity of literature on some tax expenditures may reflect the fact that establishing that a community development tax expenditure or spending program has a causal impact on economic growth in a specific community can be challenging. Table 6 below summarizes key methodological issues in attempting to measure the effectiveness of the tax expenditures we selected. As we reported in 2010, making definitive assessments about the extent to which benefits flow to targeted communities as a direct result of NMTC investments presented challenges. For example, the small size of the NMTC projects relative to the total economic activity within an area made it difficult to detect the separate effect of a particular project. Many of the eligible communities may already have significant business activities that could mask NMTC impacts. Limitations associated with available data also made it difficult to determine whether benefits generated in a low-income community outside the scope of a particular project are the direct result of the NMTC program. As discussed above, CDFI Fund is collecting additional data on the use of the NMTC that may provide further insights into its use and impact on communities. For example, CDFI Fund is now collecting data on the amount of equity that CDEs estimate will be left in the businesses at the end of the 7-year period in which tax credits can be claimed. 
Collecting this information may provide CDFI Fund with additional information on the credit’s cost-effectiveness. Our 2007 NMTC report used statistical methods to attempt to measure the credit’s effectiveness, but determined that further analysis would be needed to determine whether the economic costs of shifting investment are justified. Our analysis did find that the credit may be increasing investment in low-income communities, although this finding was not, in and of itself, sufficient to determine that the credit was effective. Increased investment in low-income communities can occur when NMTC investors increase their total funds available for investment or when they shift funds from other uses. A complete evaluation of the program’s effectiveness would require determining the costs of the program, including any behavioral changes by taxpayers that may be introduced by shifted investment funds. Neither our statistical analysis nor the results of a survey we administered allowed us to determine definitively whether shifted investment funds came from higher-income communities or from other low-income community investments. (The related entities test requires that the CDE have no more than a 50 percent ownership stake in a qualified low-income community business.) We have previously suggested that Congress consider offering grants in place of the credit when deciding whether to extend the program, which expired at the end of 2011. If it does so, Congress should require Treasury’s CDFI Fund to gather data to assess whether and to what extent the grant program increases the amount of federal subsidy provided to low-income community businesses compared to the NMTC; how costs for administering the program incurred by the CDFI Fund, CDEs, and investors would change; and whether the grant program otherwise affects the success of efforts to assist low-income communities. 
We did not identify any empirical studies on the effectiveness of the NMTC since our last report, but CDFI Fund has contracted with the Urban Institute for an evaluation of the NMTC that may lead to additional insights into the program’s effectiveness. In 2010, the Urban Institute published a literature review to inform a forthcoming evaluation, including challenges inherent in evaluating economic and community development programs in general. CDFI Fund reports that the Urban Institute is primarily relying on surveys of CDEs and businesses to conduct the evaluation. The Urban Institute conducted a preliminary briefing on the study's results with CDFI Fund in January 2012. After submitting a draft report to CDFI Fund, the Urban Institute will issue a final report in spring 2012. (Martin D. Abravanel, Nancy M. Pindus, and Brett Theodos, Evaluating Community and Economic Development Programs: A Literature Review to Inform Evaluation of the New Markets Tax Credit Program, The Urban Institute, September 2010.) Our prior work has found improvements in certain measures of community development in EZ communities, but data and methodological challenges make it difficult to establish causal links. Our 2006 report found that Round 1 EZs that received a combination of grant and tax benefits did show improvements in poverty and unemployment, but we did not find a definitive connection between these changes and the EZ program. Our 2010 report on the EZ/RC program reviewed seven academic studies of Round 1 projects and found that the evaluations used different methods and reported varying results with regard to poverty and unemployment. For example, one study concluded that the program reduces poverty and unemployment, while another study found that the program did not improve those measures of community development. As with the NMTC, our prior EZ/RC work has demonstrated challenges in measuring the effects of the program. 
For example, data limitations make it difficult to thoroughly evaluate the program’s effectiveness because use of the EZ/RC Employment Credit cannot be tied to specific communities. Demonstrating what would have happened in the absence of the credit is difficult. External factors, such as national and local economic trends, can make it difficult to isolate the effects of the EZ/RC tax incentives. Since our 2010 EZ/RC report, more recent studies comparing employment, housing values, and poverty rates in EZ communities with similarly economically distressed areas have yielded mixed results. Two studies have found lower unemployment in the designated areas where the provisions have been used relative to similar non-EZ areas. Specifically, one study reviewed federal and state enterprise zones and found positive impacts on local labor markets in terms of the unemployment rate and poverty rate. In addition, the researchers found positive, but statistically insignificant, spillover effects to neighboring Census tracts. The second study focused on Round 1 of the EZ program and found that the EZ designation substantially increased employment in zone neighborhoods, particularly for zone residents. Importantly, the researchers examined Round 1 of the program, which relied on a mix of tax benefits and grant funding. In addition, another study found that EZ program results seem to vary among different types of businesses within the designated zones. For example, researchers found that EZ tax incentives increase the share of retail and service sector establishments but decrease the share of transportation, finance, and real estate industries. They noted that the effectiveness of the EZ wage credit may be affected by the types of industries that are located in the designated area.
However, while these studies have found that certain economic outcomes are associated with an area being eligible for EZ incentives, due to data limitations the studies cannot estimate the extent to which these outcomes vary with the amount of incentives actually used in an area. Both JCT and the Congressional Research Service (CRS) conducted literature reviews and reported modest effects and methodological limitations in making any definite assessments on the effectiveness of EZs. JCT reported that studies generally found modest effects overall with relatively high costs. In addition, it is difficult to determine whether the spending or tax incentives were responsible for any increases in economic activity. CRS’s review of academic literature found modest, if any, effects of the program and called their cost-effectiveness into question. According to CRS, one persistent issue in evaluating the potential impact of EZs is the inherent difficulty of identifying the effect of the programs apart from overall economic conditions. With the expiration of the RCs at the end of 2009 and EZs at the end of 2011, we have made observations in prior work that Congress can consider if these or similar programs are authorized in the future. Without adequate data on the use of program grant funds or tax benefits, neither the responsible federal agencies nor we could determine whether the EZ/EC funds had been spent effectively or whether the tax benefits had in fact been used as intended. If Congress authorizes similar programs that rely heavily on tax benefits in the future, it would be prudent for federal agencies responsible for administering the programs to collect information necessary for determining whether the tax benefits are effective in achieving program goals. In 2010, the U.S. Census Bureau began releasing more frequent poverty and employment updates at the Census tract level than it had traditionally provided.
This information could be a useful tool in determining the effects of such programs on poverty and employment in designated Census tracts. Though we identified literature that discussed the use of disaster tax provisions and their design, none of the articles attempted to measure empirically the impact the incentives had on promoting community development. A potential challenge in designing tax relief for disaster areas is that those communities within the zones most affected by the disaster may be slower to respond to the incentives than other areas within the zone. Our prior work on the GO Zone reported that bonds were awarded on a first-come, first-served basis, which led to bond allocations being awarded to projects in less damaged areas of the zone because businesses in those areas were ready to apply for and issue bonds before businesses in more damaged areas could make use of the incentive. Thus, assessing the impact of disaster relief on an entire zone may not reflect how the provisions affected specific areas within the zone. Another key challenge in evaluating disaster relief tax expenditures is the difficulty of establishing a comparison area where a “comparable” disaster has taken place but government programs or tax provisions were not available. Moreover, evaluations of disaster relief tax expenditures may be difficult because IRS collects limited information on the use of temporary disaster aid, as discussed above. While we identified numerous articles focused on historic restoration funded with the federal rehabilitation tax credits and the potential benefits of historic preservation in adapting currently vacant or underused property, we identified only one study that attempted to empirically measure the impact of the tax credit on community development.
The study analyzed rehabilitation investment in the Boston office building market between 1978 and 1991 and found that the percentage of investment spending that would have occurred without the tax credit varied over time from about 60 to 90 percent. Another study we reviewed used economic modeling to quantify some community development outputs associated with the 20 percent rehabilitation tax credit, such as estimated jobs and projected income data. However, the study did not assess whether a rehabilitation project would have occurred in the absence of the credit, nor did it compare community development in a project community with development in similar communities. As we previously reported, a complete evaluation of a credit’s effectiveness also requires determining the costs of the program and an assessment of the program’s economic and social benefits. A challenge in attempting to evaluate how the rehabilitation tax credits affect measures of community development is that the credits have a dual purpose and are not solely intended to promote community development. Evaluators may have difficulty reviewing the program’s effectiveness because they lack specific data on the geographic locations of the projects. In addition, the small size of the rehabilitation tax credit projects relative to the total activity in the area’s economy makes it difficult to isolate the economic effects of the credit. The annual federal commitment to community development is substantial, with revenue losses from community development-related tax expenditures alone totaling many billions of dollars. However, all too often even basic information is not available about who claims tax benefits from community development tax expenditures and which communities benefit from the activities supported by the tax expenditures.
Further, relatively few evaluations of the effectiveness of community development tax expenditures have been done and when they have been done, results have often been mixed about their effects. These issues are familiar and long-standing for tax expenditures generally. We have made recommendations to OMB in 1994 and 2005 to move the Executive Branch forward in obtaining and using information to evaluate tax expenditures’ performance, which can help in comparing their performance to that of related federal efforts. GPRAMA offers a new opportunity to make progress on these issues. For those limited areas where OMB sets long-term, outcome-oriented, crosscutting priority goals for the federal government, a more coordinated and focused effort should ensue to identify, collect, and use the information needed to assess how well the government is achieving the goals and how those efforts can be improved. We look forward to progress in achieving GPRAMA’s vision for a more robust basis for judging how well the government is achieving its priority goals. The Administration’s interim crosscutting policy goals include some that identify tax expenditures among the contributing programs and activities. OMB’s forthcoming guidance should be helpful in further drawing tax expenditures into the GPRAMA crosscutting performance framework. Clearly, community development is but one of many areas where OMB could choose to set priority goals, and the interim goals to date encompass 1 of the 23 tax expenditures we reviewed. In this regard, Congress has a continuing opportunity to express its priorities about the goals that should be selected, including whether community development should be among the next cycle of goals. 
Whether or not OMB selects community development as a priority goal area, Congress also has the opportunity to urge more evaluation and focus Executive Branch efforts on addressing community development performance issues through oversight activities, such as hearings and formal and informal meetings with agency officials. Given the overlap and fragmentation across community development tax and spending programs, coordinated congressional efforts, such as joint hearings, may facilitate crosscutting reviews and ensure Executive Branch efforts are mutually reinforcing. While GPRAMA provides a powerful opportunity to review how tax expenditures contribute to crosscutting goals, progress is likely to be incremental and require sustained focus. Evaluating the impact of community development efforts is inherently difficult and definitive performance conclusions often cannot be drawn. Data limitations are not easy or inexpensive to overcome, and resources to evaluate programs must compete with other priorities even as the federal government copes with significant fiscal challenges. Thus, judicious choices will need to be made as efforts to improve tax expenditure performance information available to policymakers continue. Congress may wish to use GPRAMA’s consultation process to provide guidance on whether community development should be among OMB’s long-term crosscutting priority goals as well as stress the need for evaluations whether or not community development is on the crosscutting priority list. Congress may also wish to focus attention on addressing community development tax expenditure performance issues through its oversight activities. 
We provided a draft of this report for review and comment to the Director of OMB, the Secretary of the Treasury, the Commissioner of Internal Revenue, as well as representatives of three federal agencies helping administer certain community development tax expenditures—the Director of the CDFI Fund, the Secretary of Housing and Urban Development (HUD), and the Secretary of the Interior (Interior). The Deputy General Counsel of OMB, the Director of HUD’s Office of Community Renewal, the GAO Audit Liaison of Interior, and the Director of the CDFI Fund provided general comments. The first three provided email comments and the last provided a comment letter which is reprinted in appendix VIII. Only the HUD comments addressed our matters for congressional consideration directly, stating that the report provided minimal justification for them. Although the Secretary of the Treasury and Commissioner of Internal Revenue did not provide written comments, Treasury’s Office of Tax Analysis and IRS’s Office of Legislative Affairs provided technical changes, which we incorporated where appropriate. While not commenting on our matters for congressional consideration, OMB staff reiterated the view that the Administration has made significant progress in addressing tax expenditures. OMB staff cited assorted Fiscal Year 2013 budget proposals which it estimated would save billions of dollars by eliminating certain spending through the tax code and modifying other tax provisions. Some of the budget proposals relate to tax expenditures covered in this report, and we updated the text to reflect the President’s latest proposals. We also updated our report to reflect the release of new interim crosscutting priority goals and that the Administration has identified some tax expenditures that contribute to these goals, as required under GPRAMA. OMB staff said that this is a significant step forward and will be important for broader GPRAMA implementation over 2012 and 2013. 
We agree that this inclusion of tax expenditures along with other related programs in the GPRAMA goals is an important step toward providing policymakers with the breadth of information needed to understand the full federal effort to accomplish national objectives. Finally, OMB staff expressed concern that we were suggesting that tax expenditures be addressed through a “one size fits all” framework. We do not believe this report or earlier products suggest that assessing the performance of tax expenditures be done in only one way. We have emphasized the need for greater scrutiny of tax expenditures and more transparency over how well they work and how they compare to other related federal programs. In its comments, HUD described the report as substantive and comprehensive in addressing community development tax incentives with accurate information about the EZ/RC tax expenditures and HUD’s role in their administration. However, HUD expressed the view that we had minimal justification for our matters for Congress to consider using the GPRAMA consultation process to express congressional priorities related to community development and to focus attention on community development tax expenditures’ performance through its oversight activities. We disagree. The basic issues we found in this review—the frequent lack of even basic information about tax expenditures’ use and the relative paucity of evaluations of their performance—are among the key issues that could be mitigated through GPRAMA crosscutting goals and Congress’s oversight activities. HUD also said we had skirted the issue of identifying programs with the greatest probability for elimination due to duplication, fragmentation, and overlap. This was not among our review’s objectives, and we believe the type of information we present can assist Congress in understanding what information is available to support such decisions.
As we have previously reported, it is critical that agencies engage Congress in identifying which issues to address and what to measure, and GPRAMA significantly enhances requirements on the consultation process. With the release of the interim crosscutting goals, we believe that Congress has a continuing opportunity to express its priorities regarding community development ahead of the next goal cycle due in February 2014. HUD also noted the expiration of some tax expenditures and sought clarification about their inclusion in the report. Our report includes recently expired tax expenditures and, where applicable, discusses our prior findings and suggestions for Congress to consider if it wishes to extend the tax expenditures that have expired or create similar new ones. HUD also provided technical and editorial comments, which we incorporated as appropriate. In its comments, Interior disagreed with several findings. Interior characterized our report as expressing the view that unwarranted overlap, fragmentation, or duplication existed involving the 20 percent historic rehabilitation credit that Interior’s NPS helps administer. Interior agreed that the tax credit—which has a primary purpose to preserve and rehabilitate historic buildings—has a two-fold mission to also promote community development by revitalizing historic districts and neighborhoods. However, Interior disagreed that the historic rehabilitation tax credit overlaps or duplicates with other community development tax expenditures. Interior stated that only the tax credit has a specific purpose to preserve historic buildings, that the tax credit is not targeted to certain census tracts or low-income areas, and that Congress generally did not exclude historic tax credit users from also using other federal programs.
In addition, Interior said that the administration of the historic rehabilitation tax credit was not fragmented, but instead was an example of joint administration that effectively draws upon the best resources of two federal agencies in a coordinated way to implement the law. Finally, Interior disagreed with our finding that limited information is available about the effectiveness of the 20 percent historic rehabilitation tax credit. Our report does not characterize any overlap, fragmentation, or duplication as “unwarranted.” Rather, we provide a factual description based on standard definitions used in many GAO reports of the relationships between the various tax expenditures that have at least a partial purpose of supporting community development. We make the same point that Interior raises as well—that Congress was aware of and often designed rules to govern the interrelationships among these tax expenditures. Accordingly, our report says these interrelationships do not necessarily represent unnecessary duplication. Based on Interior’s comments, however, we further clarified our text to note that one of the differences between the historic rehabilitation credit and the other community development tax expenditures is that the rehabilitation credit targets certain older structures. Regarding Interior’s comment about fragmentation in the credit’s administration, our report describes the roles of IRS and NPS and says fragmentation may sometimes be necessary when the resources and expertise of more than one agency are required, such as in the case of NPS overseeing technical standards for historic preservation. As we reported, however, fragmentation can result in administrative burdens when an applicant needs to apply at multiple agencies to finance a specific project, such as restoring a historic building as low-income housing. 
Finally, regarding Interior’s comments on the effectiveness of the rehabilitation tax credit, we continue to note that little is known about the effectiveness of the credit as a community development program given that we identified only one empirical analysis of the effect of the tax credit on community development. Interior pointed specifically to reports based on an economic model NPS helped fund. However, as our report states, the modeling reports did not assess what would have happened in the absence of the historic rehabilitation tax credits or compare development in tax credit project communities to similar communities. In its comment letter (reprinted in app. VIII), the CDFI Fund said that it appreciated GAO’s ongoing efforts to improve and strengthen performance measurement and evaluation of community and economic development programs. The CDFI Fund said that it has committed resources to systematically evaluate the impacts of the NMTC program and proposed to develop tools that would have provided standard benchmarking and estimation techniques for measuring outcomes and coordinating reporting for projects with multiple sources of funding. Our literature review for this report drew on a study contracted by the CDFI Fund that provided an overview of the inherent challenges in evaluating community development programs. The literature review will inform a forthcoming independent evaluation of the NMTC to be issued later this spring. The CDFI Fund also provided technical comments, which we incorporated as appropriate. The CDFI Fund said that it continued to have strong reservations about our 2010 option for Congress to consider offering grants in lieu of NMTC tax credits if it extends the NMTC program.
As stated in our 2010 report and reiterated as a cost saving option in our 2011 duplication report, our analysis suggests that converting the NMTC to a grant program would increase the amount of the equity investment that could be placed in low- income businesses and make the federal subsidy more cost-effective. Our 2010 report addressed both concerns that the CDFI Fund reiterated in its comments on this report. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Director of the Office of Management and Budget, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix IX. Our objectives were to (1) identify tax expenditures that promote community development, and areas of potential overlap and interactions among them; (2) assess data and performance measures available and used to assess performance for community development tax expenditures; and (3) determine what previous studies have found about the effectiveness of selected tax expenditures in promoting community development. While both the U.S. Department of the Treasury (Treasury) and the Joint Committee on Taxation (JCT) annually compile a list of tax expenditures and estimates of their cost, the Treasury and JCT lists differ somewhat in terms of what is listed as a tax expenditure and how many specific provisions may be combined in a listed tax expenditure. 
Our count of community development tax expenditures is based on the Treasury and JCT published tax expenditure lists, detailed below. Where a single tax expenditure listing encompasses more than one tax code provision, we separately describe those provisions to provide a more detailed perspective of the mix of tax assistance available for community development. Federal agencies do not have a standard definition of what constitutes community or economic development. To identify community development tax expenditures, we developed a list of community development activities based on various federal sources and compared these activities to the authorized uses of tax expenditures. As a starting point for developing the list of activities, we used the definition of the community and regional development budget function and its three subfunctions—urban community development, rural and regional development, and disaster relief and insurance. Both Treasury and JCT list tax expenditures by budget function. We also used descriptions of spending programs under the community and regional development budget function as detailed in the 2010 Catalog of Federal Domestic Assistance (CFDA). We further reviewed descriptions of allowable uses under the Community Development Block Grant (CDBG)—the largest single spending program in the budget function. Finally, we reviewed the community development definition for the Community Reinvestment Act (CRA) and identified certain tax expenditures that banks can use in meeting CRA community investment tests. We included tax expenditures targeted to certain geographies, such as low-income areas or designated disaster areas, or specific populations, such as Native Americans. Table 7 summarizes the definition of community development for purposes of this report. We compiled a preliminary list of tax expenditures for fiscal year 2010 listed under the community and regional development budget function by Treasury and JCT.
Our universe included expired tax expenditures listed by either Treasury or JCT which had estimated revenue losses or outlays in fiscal year 2010. While the tax expenditure lists published by Treasury and JCT are generally similar, specific tax expenditures reported by each under the community and regional development budget function differed, as shown in table 8. Four tax expenditures were listed by both under the community and regional development budget function. Another four tax expenditures were reported by both Treasury and JCT but appeared under community and regional development function on one list and under a different budget function on the other list. Fourteen tax expenditures were reported under the community and regional development budget function by either Treasury or JCT, including eight tax expenditures supporting disaster relief and recovery. Whereas JCT lists six disaster tax packages as tax expenditures, Treasury officials told us that disaster-related revenue losses were included in Treasury estimates for specific tax expenditures made available in disaster areas. For example, revenue losses from additional allocations of the Low-Income Housing Tax Credit for the GO Zone were incorporated into Treasury’s Low-Income Housing Tax Credit estimate. To avoid double-counting, we dropped two tax expenditures—credit to holders of Gulf and Midwest tax credit bonds, and employee retention credit for employers in certain federal disaster areas—listed separately by Treasury that were included in the JCT disaster package estimates. We used JCT and Internal Revenue Service (IRS) documents to identify specific tax code provisions within the disaster relief tax expenditures on JCT’s list. Appendix VI lists 45 tax provisions and special rules in the six disaster relief tax expenditures included in JCT’s list. We did not sum disaster revenue loss estimates to avoid double counting amounts already included in estimates for specific tax expenditures. 
Using our list of community development activities as criteria, we also identified tax expenditures reported by Treasury under other budget functions that appeared to be at least partially intended to support activities we had identified as community development activities. Table 9 includes six tax expenditures reported by Treasury under other budget functions and our rationale for inclusion. Table 10 shows how we categorized the community development tax expenditures as primarily promoting community development versus supporting community development and other federal mission areas. We shared the preliminary universe of community development tax expenditures with Treasury, IRS, Office of Management and Budget (OMB) and CRS. We also shared the preliminary universe with federal agencies helping administer specific community development tax expenditures, including the Community Development Financial Institutions (CDFI) Fund which administers the New Markets Tax Credit; the Department of Housing and Urban Development (HUD) which helps administer the Empowerment Zones and Renewal Communities programs; and the National Park Service (NPS) which helps administer rehabilitation tax credits. We asked these agencies to review the preliminary universe and confirm that the tax expenditures could be used to promote community development, delete tax expenditures that were listed incorrectly or are duplicative, or add tax programs that we had omitted. Based on feedback from federal agencies, we refined the universe of community development tax expenditures as appropriate. We excluded six tax expenditures reported under the community and regional budget function, as shown in table 11. As discussed above, we excluded two disaster tax expenditures listed by Treasury to avoid double counting disaster aid packages listed by JCT. 
Similarly, we excluded a District of Columbia tax expenditure listed by JCT to avoid duplication with Treasury’s estimate for Empowerment Zones and Renewal Communities. We excluded three tax expenditures listed by Treasury or JCT under the community and regional development budget function that were not specifically linked to community development activities. Our final universe does not include various energy tax expenditures that may be claimed for bank investments used to meet CRA regulatory requirements or tax expenditures for deductible charitable contributions. Although certain charitable contributions may fund organizations or activities that contribute to community development, we excluded charitable contribution tax deductions from the universe based on external feedback that it is not feasible to isolate the community development portion of the large charitable contributions tax expenditures or link the charitable aid to specific communities. See appendix II for our final universe of 23 community development tax expenditures. This count reflects the number of tax expenditures as reported on the Treasury or JCT lists. Whereas appendix II lists the Empowerment Zones and Renewal Communities (EZ/RC) as a single tax expenditure consistent with Treasury’s list, appendix IV details the various tax incentives available in EZs and RCs. We used Treasury revenue loss estimates for each tax expenditure except in cases where only JCT reported a tax expenditure. Where appropriate, we summed revenue loss estimates to approximate the total federal revenue forgone through tax expenditures that support community development. Certain tax expenditures, including tax credit and direct payment bonds, also have associated outlays, and we included those outlays in presenting total costs. While sufficiently reliable as a gauge of general magnitude, the sum of the individual tax expenditure estimates does not take into account interactions between individual provisions.
To identify areas of potential overlap among the tax expenditures, we used the definitions from our March 2011 report on duplication in government programs: Overlap occurs when multiple agencies or programs have similar goals, similar activities or strategies to achieve them, or similar target beneficiaries; Fragmentation refers to circumstances where multiple agencies or offices are involved in serving the same broad area of national need; and Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. Using information from prior GAO products; publications from CRS, IRS, JCT, the Office of the Comptroller of the Currency (OCC), and OMB; as well as documentation from other federal agencies helping administer specific tax expenditures, we compiled publicly available information about each tax expenditure’s design and implementation, including descriptions; specific geographies or populations targeted; volume caps and other allocation limits; and roles of entities within and outside the federal government in administration. Based on the information we collected and the clarifications that the agencies provided, we determined that this descriptive information was sufficiently reliable for the purposes of this engagement to identify potential duplication, overlap, and fragmentation. We reviewed the Internal Revenue Code and IRS regulations to identify allowable interactions or limits on using community development tax expenditures together. Where specified in tax law and regulations, we also identified interactions and limits on using tax expenditures with other federal spending programs.
The review of allowable interactions and limits was not exhaustive—we did not search documentation from all federal agencies carrying out community development programs, and regulations for related spending programs may also document interactions between those programs and the community development tax expenditures. To determine what data and performance measures are available and used to assess community development tax expenditures, we identified the data elements and types of information that IRS and federal agencies collect. We also reviewed tax forms, instructions, and other guidance and interviewed IRS officials to determine the types of information that IRS collects on how the tax expenditures in our universe are used. For certain community development tax expenditures in our universe where other federal agencies help with administration—the New Markets Tax Credit, Empowerment Zone/Renewal Community tax incentives, and the rehabilitation tax credits—we reviewed prior GAO reports, and interviewed and collected information from the CDFI Fund, HUD, and NPS to identify their roles in helping administer the tax expenditures and any measures the agencies use to review tax expenditure performance. We also interviewed officials and reviewed documentation from OMB, Treasury, IRS, HUD, and NPS about efforts to assess performance for community development tax expenditures and any crosscutting reviews of related tax and spending programs. For the purposes of this report, we focused on information collected by federal agencies. State and local entities also collect information on some of the tax expenditures included in our universe. For example, housing finance agencies collect data on low-income housing tax credit projects. Similarly, state and local bond financing authorities may have additional data on specific projects and activities funded with federally subsidized bond financing. 
To determine what previous studies have found about the effectiveness of selected tax expenditures, we conducted a literature review covering the Empowerment Zone/Renewal Community tax programs, the New Markets Tax Credit program, and tax expenditures available for certain disaster areas. We selected these tax expenditures because they account for most of the 2010 revenue loss for the tax expenditures that primarily promote community development. The EZ tax incentives and the NMTC expired after December 31, 2011. For the EZ/RC and NMTC programs, we focused on literature published since our 2010 reports on these programs. We also selected the rehabilitation tax credits; these multipurpose tax expenditures support community development as well as another federal mission area, and they can be used in combination with other community development tax expenditures. We searched databases, such as Proquest, Google Scholar, and Econlit, for studies through May 2011. To target our literature review on effectiveness, we identified studies that attempted to measure the impact of the incentives on certain measures of community development, such as poverty and unemployment rates. We reviewed studies that met the following criteria: studies that include original data analysis, studies based on empirical or peer-reviewed research, and studies not derived from or sponsored by associations representing industry groups and other organizations that may benefit from adjustments to laws and regulations concerning community development tax expenditures. Using these criteria, we identified and reviewed eight studies on the EZ/RC programs published since our most recent report on the topic. 
For NMTC, although we did not identify any new studies meeting our criteria, we included a literature review study contracted by the CDFI Fund that was intended to provide the groundwork for a forthcoming evaluation and that provides an overview of inherent challenges in evaluating community development programs. Additionally, we summarized our prior findings about the selected tax expenditures; these findings are not generalizable to the universe of community development tax expenditures. For the rehabilitation tax credits, we identified one study that used empirical methods to measure one aspect of community development. We also included an academic study prepared with assistance from NPS that highlights some limitations in attempting to evaluate the effectiveness of the rehabilitation tax credits. For disaster relief incentives, we identified peer-reviewed articles that made potentially useful qualitative points, but the articles did not use rigorous or empirical methods to examine effectiveness. See the bibliography for a listing of the studies we reviewed in detail. We conducted this performance audit from January 2011 through February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Budget function(s) Expiration date (if applicable) Empowerment Zones and Renewal Communities (EZ/RC) 8/10/1993 (EZ); 12/21/2000 (RC) Low-Income Housing Tax Credit (LIHTC) 20 percent credit for rehabilitation of historic structures environment (Treasury); commerce and housing (JCT) 10 percent credit for rehabilitation of structures (other than historic) N/A Community and regional development (Treasury); commerce and housing (JCT) Budget function(s) Expiration date (if applicable) 12/31/2009 environment (Treasury); commerce and housing (JCT) N/A Community and regional development (Treasury); Transportation (JCT) Exclusion of interest on bonds for water, sewage, and hazardous waste facilities environment (Treasury); community and regional development (JCT) 6/28/1968 (water and sewage facilities); 10/22/1986 (hazardous waste facilities) Credit for holders of qualified zone academy bonds (QZAB) Exclusion of interest on public purpose state and local bonds Build America Bonds $1,850 General purpose fiscal assistance (Treasury); community and regional development (JCT) Budget function(s) Expiration date (if applicable) N/A: Not applicable. The EZ and RC programs offered packages of tax incentives in specific designated communities. Appendix IV lists seven EZ and six RC tax incentives. JCT indicated a revenue loss of less than $50 million. The exclusion of interest on public-purpose state and local bonds has been in effect, in one form or another, since the enactment of the Revenue Act of 1913, ch. 16, 38 Stat. 114. JCT indicated a revenue loss of less than $50 million in fiscal year 2010. JCT did not quantify revenue losses for this tax expenditure. 
See Appendix VI for tax provisions and special rules available for disaster relief and recovery for specific presidentially declared disaster areas.
Low-Income Housing Tax Credit
Exclusion of interest on rental housing bonds
Rehabilitation of older structures, subtotal
20 percent credit for rehabilitation of historic structures
10 percent credit for rehabilitation of structures (other than historic)
Includes both Recovery Zone Economic Development Bonds and Recovery Zone Facility Bonds. We did not sum total costs of disaster package tax expenditures listed by JCT to avoid double counting estimated revenue losses for Treasury tax expenditures we identified as promoting community development. Total includes $190 million in revenue losses and $10 million in outlays for fiscal year 2010. Empowerment Zones and Renewal Communities (EZ/RC) Businesses in designated Empowerment Zones (EZ) or Renewal Communities (RC) are eligible to claim various tax incentives, listed below. These incentives may help reduce unemployment, generate economic growth, and stimulate community development and business activity. 30 urban EZs, 10 rural EZs, 28 urban RCs, and 12 rural RCs located throughout the United States. These areas consist of Census tracts that are economically depressed and meet statutory or regulatory requirements (based on 1990 Census data) for (1) poverty level, (2) overall unemployment, (3) total population, and (4) maximum required area of EZs or RCs. Additionally, the boundaries of RCs were expanded based on 2000 Census data. The eligibility requirements differed by round, by program, and between urban and rural nominees; for example, round I urban EZs (selected in 1993) were selected using 6 indicators of general distress, including incidence of crime and narcotics use and amount of abandoned housing, while urban and rural ECs (selected in 2000) were selected using 17 indicators, including number of persons on welfare and high school dropout rates. 
Employment credit (EZ/RC) Businesses may claim an annual tax credit of up to $3,000 or $1,500 for each employee living and working for the employer in an EZ or RC area, respectively. Businesses in EZs and RCs, and employees living and working for the employer in EZs or RCs. Businesses may claim a tax credit of up to $2,400 for each new employee age 18 to 39 living in an EZ/RC, or up to $1,200 for a youth summer hire ages 16 or 17 living in an EZ or RC. Businesses in EZs and RCs, and employees living and working for the employer in EZs or RCs aged 18-39, or youth summer hires ages 16 or 17 living in an EZ or RC. New construction and rehabilitation projects in RCs. Businesses may claim an accelerated method of depreciation to recover certain business costs of new or substantially rehabilitated commercial buildings located in an RC; states may allocate up to $12 million annually per RC for the provision. Increased Section 179 deduction (EZ/RC) Businesses may claim an increased deduction of up to the smaller of $35,000 or the cost of eligible property purchases (including equipment and machinery) for businesses in an EZ/RC. Businesses incurring costs for tangible personal property, such as equipment and machinery, for use in EZs or RCs. Description State and local governments can issue tax-exempt bonds to provide loans to qualified businesses to finance construction costs in EZs. State and local government entities can issue up to $60 million for each rural EZ, $130 million for each urban EZ with a population of less than 100,000, and $230 million for each urban EZ with a population greater than or equal to 100,000. These bonds are not subject to state volume caps. Targeted geographies and populations Large business projects tied to the employment of residents in EZs. Rollover of capital gains (EZ) Owners of businesses located in EZs may be able to postpone part or all of the gain from the sale of a qualified EZ asset that they hold for more than 1 year. 
Businesses located in EZs. Increased exclusion of capital gains (EZ) Taxpayers can exclude 60 percent of their gain from the sale of small business stock in a corporation that qualifies as an enterprise zone business. Enterprise zone businesses located in EZs. Exclusion of capital gains (RC) Owners of businesses located in RCs can exclude qualified capital gains from the sale or exchange of a qualified community asset held more than 5 years. Businesses located in RCs. New Markets Tax Credit (NMTC) Investors in certified Community Development Entities (CDE) are eligible to claim a tax credit equal to 39 percent of their investment, claimed over 7 years. CDEs, in turn, invest in qualified low-income community investments such as mixed-use facilities, housing developments, and community facilities, which may contribute to employment in low-income communities. Low-income communities defined as Census tracts (1) in which the poverty rate is at least 20 percent, or (2) outside a metropolitan area in which the median family income does not exceed 80 percent of median statewide family income or within a metropolitan area in which the median family income does not exceed 80 percent of the greater statewide or metropolitan area median family income. Low-income communities also include certain areas not within Census tracts, tracts with low population, and Census tracts in high-migration rural counties. Description State and local governments issuing Recovery Zone Economic Development Bonds (RZEDB) allow investors to claim a tax credit (equal to 45 percent of the interest rate established between the buyer and the issuer of the bond). States and localities also had the option of receiving a direct payment from the U.S. Treasury of equal value to the tax credit. 
Bond proceeds were to be used to fund (1) capital expenditures paid or incurred with respect to property located in the designated recovery zone (e.g., Empowerment Zones or Renewal Communities); (2) expenditures for public infrastructure and construction of public facilities; and (3) expenditures for job training and educational programs. Individuals and corporations can exclude Recovery Zone Facility Bond (RZFB) interest income from their taxable income. Bond proceeds are used by state and local governments to finance projects pertaining to any trade or business, aside from exceptions listed below. More specifically, RZFBs may be issued for any depreciable property that (1) was constructed, reconstructed, renovated, or acquired after the date of designation of a “recovery zone;” (2) the original use of which occurs in the recovery zone; and (3) substantially all of the use of the property is in the active conduct of a “qualified business,” which is defined to include any trade or business except for residential rental facilities or other specifically listed projects under Internal Revenue Code 144(c)(6)(B), including golf courses, massage parlors, and gambling facilities. Targeted geographies and populations RZEDBs and RZFBs target any area designated as a “recovery zone,” including (1) areas having significant poverty, unemployment, rate of home foreclosures, or general distress; (2) areas that are economically distressed by reason of the closure or realignment of a military installation pursuant to the Defense Base Closure and Realignment Act of 1990; or (3) any area for which an Empowerment Zone or Renewal Community was in effect as of February 17, 2009. Indian reservations. Investors in Tribal Economic Development Bonds, a temporary category of tax-exempt bonds, could exclude that interest income from their taxable income. Indian tribal governments were allowed greater flexibility to use the bonds to finance economic development projects, which in turn were to promote development on Indian reservations. 
Previously, Indian tribal governments could only issue tax-exempt bonds for essential government services. Description Businesses on Indian reservations are eligible to claim a tax credit for employing Indian tribal members and their spouses. The credit is for 20 percent of the first $20,000 in wages and health benefits paid to tribal members and spouses. This credit is intended to provide businesses with an incentive to hire certain individuals living on or near an Indian reservation. Targeted geographies and populations Businesses on Indian reservations, and Indian tribal members and spouses. Low-Income Housing Tax Credit (LIHTC) State housing finance agencies (HFA) award the tax credits to owners of qualified rental properties who reserve all or a portion of their units for occupancy for low-income tenants. Once awarded LIHTCs, project owners typically attempt to obtain funding for their projects by attracting third-party investors that contribute equity to the projects. These investors can then claim the tax credits. This arrangement of providing LIHTCs in return for an equity investment is generally referred to as “selling” the tax credits. The credit is claimed over a 10-year period, but a project must comply with LIHTC requirements for 15 years. A 9 percent tax credit—intended to subsidize 70 percent of the qualified basis in present value terms—is available for the costs for new construction or substantial rehabilitation projects not otherwise subsidized by the federal government. An approximately 4 percent tax credit—intended to subsidize about 30 percent of the qualified basis in present value terms—is available for the acquisition costs for existing buildings. The 4 percent credit is also used for housing financed with tax-exempt rental housing bonds. The low-income housing tax credit program is intended to stimulate the production of affordable rental housing nationwide for low-income households. 
Households with income at or below 60 percent of an area’s median gross income (AMGI). Qualified Census tracts and difficult development areas are eligible for additional credits. In a qualified Census tract, 50 percent or more of the households have incomes of less than 60 percent of the area’s median income. In a difficult development area, construction, land, and utility costs are high relative to the area’s median income. Description Building owners and private investors may qualify to claim a 20 percent tax credit for costs to substantially rehabilitate buildings that are on the National Register of Historic Places or are otherwise certified as historic by the National Park Service (NPS). To be eligible for the credit, buildings must be used for offices; rental housing; or commercial, industrial, or agricultural enterprises. Building owners must hold the building for 5 years after completing the rehabilitation or pay back at least a portion of the credit. The credit is intended to attract private investment to the historic cores of cities and towns. The credit is also intended to generate jobs, enhance property values, and augment revenues for state and local governments through increased property, business and income taxes. Targeted geographies and populations Certified historic buildings either listed individually in the National Register of Historic Places, or located in a registered historic district and certified by NPS as contributing to the historic significance of that district. 10 percent credit for rehabilitation of structures (other than historic) Individuals or corporations may claim a 10 percent tax credit for costs to substantially rehabilitate nonhistoric, nonresidential buildings placed into service before 1936. These structures must retain specified proportions of the buildings’ external and internal walls and internal structural framework. 
To be eligible for the credit, buildings must be used for offices or commercial, industrial, or agricultural enterprises. Qualified spending must exceed the greater of $5,000 or the adjusted basis (cost less depreciation taken) of the building spent in any 24-month period. The credit is intended to attract private investment to the historic cores of cities and towns. The credit is also intended to generate jobs, enhance property values, and augment revenues for state and local governments through increased property, business and income taxes. Nonresidential buildings placed into service before 1936; especially those located in older neighborhoods and central cities. Tax-exempt organizations may exclude gains or losses from the unrelated business income tax when they acquire and sell brownfield properties on which there has been an actual or threatened release of certain hazardous substances. This exclusion reduces the total cost of remediating environmentally damaged property and may attract the capital and enterprises needed to rebuild and redevelop polluted sites. Environmentally contaminated sites identified as brownfields held for use in a trade or business on which there has been an actual or threatened release or disposal of certain hazardous substances. The exclusion does not target specific geographies or populations. Description Firms may deduct expenses related to controlling or abating hazardous substances in a qualified brownfield property. This deduction subsidizes environmental cleanup and may help develop and revitalize urban and rural areas depressed from environmental contamination. Targeted geographies and populations Environmentally contaminated sites identified as brownfields held for use in a trade or business on which there has been an actual or threatened release or disposal of certain hazardous substances. The deduction does not target specific geographies or populations. 
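The substantial-rehabilitation spending test for the 10 percent rehabilitation credit described above — qualified spending in a 24-month period must exceed the greater of $5,000 or the building's adjusted basis (cost less depreciation taken) — can be sketched as simple arithmetic. The dollar figures in the example below are hypothetical, not drawn from the report:

```python
# Illustrative sketch of the substantial-rehabilitation test for the
# 10 percent rehabilitation credit. The greater-of-$5,000-or-adjusted-basis
# threshold comes from the description above; the building figures below
# are hypothetical.

def substantial_rehab_threshold(cost, depreciation_taken):
    """Spending in a 24-month period must exceed this amount to qualify."""
    adjusted_basis = cost - depreciation_taken  # cost less depreciation taken
    return max(5_000, adjusted_basis)

def rehab_credit(qualified_spending, cost, depreciation_taken, rate=0.10):
    """Return the 10 percent credit if the spending test is met, else 0."""
    if qualified_spending > substantial_rehab_threshold(cost, depreciation_taken):
        return rate * qualified_spending
    return 0.0

# Hypothetical pre-1936 building: $400,000 cost, $250,000 depreciation taken,
# so the adjusted basis is $150,000. $200,000 of qualified spending passes
# the test and yields a $20,000 credit; $100,000 of spending does not qualify.
print(rehab_credit(200_000, 400_000, 250_000))  # 20000.0
print(rehab_credit(100_000, 400_000, 250_000))  # 0.0
```

Note that for a building with a very small adjusted basis, the $5,000 floor governs instead, so even modest projects must clear that minimum.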
Individuals and corporations can exclude private activity bond interest income from their taxable income. Bond proceeds are used by state and local governments to finance the construction of multifamily residential rental housing units for low- and moderate-income families. Low-income housing construction partly financed with the tax-exempt bonds may be used with the 4 percent low-income housing tax credit. Households with incomes at or below 60 percent of an area’s median gross income (AMGI). Individuals and corporations can exclude private activity bond interest income from their taxable income. Bond proceeds are used by state and local governments to finance the construction of government-owned airports, docks, and wharves; mass commuting facilities such as bus depots and subway stations; and high-speed rail facilities and government-owned sport and convention facilities. Infrastructure such as airports, docks, wharves, mass commuting facilities, and intercity rail facilities. The bond provision does not target specific geographies or populations. Individuals and corporations can exclude private activity bond interest income from their taxable income. Bond proceeds are used by state and local governments to finance the construction of water, sewage, and hazardous waste facilities. Infrastructure such as water treatment plants, sewer systems, and hazardous waste facilities; the bond provision does not target specific geographies or populations. Credit for holders of qualified zone academy bonds (QZAB) Description Banks, insurance companies, and other lending corporations that purchase qualified zone academy bonds are eligible to claim a tax credit equal to the dollar value of their bonds multiplied by a Treasury-set credit rate. Alternatively, issuers of qualified zone academy bonds had the option of receiving a direct payment from the Treasury of equal value to the tax credit. 
School districts with qualified zone academies issue the bonds and use at least 95 percent of the bond proceeds to renovate facilities, provide equipment, develop course materials, or train personnel in such academies. Business or nonprofit partners must also provide at least a 10 percent match of QZAB funds, either in cash or in-kind donations, to qualified zone academies. The bond program helps school districts reduce the burden of financing school renovations and repairs. Targeted geographies and populations Public schools below the college level that (1) are located in an Empowerment Zone, Enterprise Community or Renewal Community, or (2) have at least 35 percent of their student body eligible for free or reduced-cost lunches. Individuals and corporations can exclude governmental bond interest income from their taxable income. State and local governments generally use bond proceeds to build capital facilities such as highways, schools, and government buildings. Capital facilities owned and operated by governmental entities that serve the public interest. The bond provision does not target specific geographies or populations. Individuals and corporations could claim a tax credit equal to 35 percent of the interest rate established between the buyer and the issuer of the bond. State and local governments issuing BABs also had the option of receiving a direct payment from the Treasury of equal value to the tax credit. Bond proceeds were intended to be used for stimulating development of public infrastructure in communities, as well as to aid state and local governments. If issuers choose to receive a direct payment, then they must use bond proceeds for capital expenditures. No specific geographies or populations are targeted. Areas of Lower Manhattan affected by terrorist attacks occurring on September 11, 2001. 
Hurricane Katrina disaster area (consisting of the states of Alabama, Florida, Louisiana, Mississippi), including core disaster areas determined by the President to warrant individual or individual and public assistance from the federal government following Hurricane Katrina in August 2005. Gulf Opportunity Zone (GO Zone) Counties and parishes in Alabama, Florida, Louisiana, Mississippi and Texas that warranted additional, long-term federal assistance following Hurricanes Katrina, Rita and Wilma in 2005 were designated as Katrina, Rita and/or Wilma GO Zones. Individuals and corporations affected by the September 11, 2001, terrorist attacks were eligible for seven tax provisions. These provisions included tax-exempt bonds targeted toward reconstruction and renovation; a special depreciation allowance for certain property that was damaged or destroyed; and a tax credit for businesses to hire and retain employees in the New York Liberty Zone. Individuals and corporations affected by Hurricanes Katrina, Rita, and Wilma, which struck between August and October 2005, were eligible to claim 33 GO Zone tax provisions. These provisions include tax-exempt bond financing, expensing for certain clean-up and demolition costs, and additional allocations of the New Markets Tax Credit for investments that served the GO Zone. Twenty-four counties in Kansas affected by storms and tornadoes that began on May 4, 2007. Description Individuals and corporations affected by severe storms, tornadoes, or flooding in 10 states from May 20 to July 31, 2008, were eligible for a package of 26 tax benefits, including tax-exempt bond financing, increased rehabilitation tax credits for damaged or destroyed structures, and suspensions of limitations on claiming personal casualty losses. 
Qualified small or farming businesses affected by disasters in federally declared disaster areas are eligible to carry back a net operating loss for up to 3 years, instead of the 2 years generally permitted. This provision may allow small and farming businesses in communities declared disaster areas to recoup a portion of their losses following a disaster. Targeted geographies and populations Selected counties in 10 states affected by tornadoes, severe storms and flooding occurring from May 20-July 31, 2008. Individuals and businesses located in any geography declared a disaster area in the United States during tax years 2008 and 2009. Qualified small businesses and farming businesses located in any federally declared disaster area. Qualified small businesses are sole proprietorships or partnerships with average annual gross receipts (reduced by returns and allowances) of $5 million or less during the 3-year period ending with the tax year of the net operating loss. For more information on the bond financing by Indian tribal governments, see GAO, Federal Tax Policy: Information on Selected Capital Facilities Related to the Essential Governmental Function Test, GAO-06-1082 (Washington, D.C.: Sept. 13, 2006) and U.S. Department of the Treasury, Report and Recommendations to Congress regarding Tribal Economic Development Bond provision under Section 7871 of the Internal Revenue Code (Washington, D.C.: Dec. 19, 2011). Volume cap or other allocation limits? Involves administration by a federal agency outside IRS? Involves administration by nonfederal entity? Empowerment Zones and Renewal Communities (EZ/RC) Varied. Five EZ and four RC tax incentives did not have any volume caps or allocation limits. Yes; HUD oversaw EZ programs in urban areas, and the USDA oversaw EZ programs in rural areas. HUD is responsible for outreach efforts and serves as a promoter for EZs and RCs. 
HUD and IRS established a partnership regarding the EZ/RC tax incentives, where both HUD and IRS provide representation at workshops and conferences. Yes; state and local governments nominate communities for EZ and RC designation. Nominated EZ communities had to submit a strategic plan showing how they would meet key program principles, while nominated RCs had to submit a written “course of action” with commitments to carry out specific legislatively mandated activities. Limit of up to an annual total of $12 million per RC. No; IRS has sole federal responsibility for the administration of the commercial revitalization deduction (CRD) program. HUD collected data from local administrators on the deduction’s use for commercial projects in RCs. Yes; state governments allocate CRD authority to eligible businesses engaged in commercial projects within RCs. Limits on issuing EZ facility bond volume were up to $60 million for each rural EZ, up to $130 million for each urban EZ with a population of less than 100,000, and up to $230 million for each urban EZ with a population greater than or equal to 100,000. No; IRS has sole federal responsibility for the administration of the EZ facility bond program. HUD collected information from local administrators of EZs on the use of facility bonds for construction projects in EZs. Yes; state and local governments issue EZ facility bonds to finance construction costs. Tax expenditure New Markets Tax Credit (NMTC) Yes; the maximum amount of annual investment eligible for NMTCs was $3.5 billion each year in calendar years 2010 and 2011. Volume cap or other allocation limits? Involves administration by a federal agency outside IRS? Yes; the Treasury Community Development Financial Institutions (CDFI) Fund certifies organizations as community development entities (CDE); the CDFI Fund also provides allocations of NMTCs to CDEs through a competitive process. 
The CDFI Fund is responsible for monitoring CDEs to ensure that CDEs are compliant with their allocation agreements through the New Markets Compliance Monitoring System and, on a more limited basis, by making site visits to selected CDEs. The CDFI Fund also provides IRS with access to CDFI data for monitoring CDEs’ compliance with NMTC laws and regulations. Involves administration by nonfederal entity? Yes; once a CDE receives an allocation of tax credits, the CDE can offer the tax credits to investors, who in turn acquire stock or a capital interest in the CDE. The investor can gain a potential return for a “qualified equity investment” in the CDE. In return for providing the tax credit to the investor, the CDE receives proceeds from the offer and must invest “substantially all” of such proceeds into qualified low-income community investments. Yes; the Recovery Zone Economic Development Bond (RZEDB) and Recovery Zone Facility Bond (RZFB) programs had national volume caps of $10 billion and $15 billion, respectively. Yes; Treasury determined the amount of RZEDB and RZFB volume cap allocations received by each state and the District of Columbia based on declines in employment levels for each state and the District during 2008 relative to declines in national employment levels during the same period. Yes; each state was responsible for allocating shares of RZEDB and RZFB volume caps to counties and large municipalities based on declines in employment levels for such areas during 2008 relative to declines in employment levels for all counties and municipalities in such states during the same period. State and local governments issued RZEDBs, and had the option of allowing investors to claim a tax credit for the bonds. States and localities also had the option of receiving a direct payment from the Treasury of equal value to the tax credit. Volume cap or other allocation limits? Yes; the bond program had a $2 billion national volume cap. 
Involves administration by a federal agency outside IRS? Yes; Treasury allocated bond capacity to Indian tribal governments in consultation with the Secretary of Interior, and the Department of Interior (Interior) maintains updated lists of Indian tribal entities that are eligible to apply for allocations of bond volume. Interior may also issue letters to Indian tribal entities indicating federal recognition of such entities in order to demonstrate eligibility for the bond program. Involves administration by nonfederal entity? Yes; Indian tribal governments applied for Tribal Economic Development Bonds, issued the bonds, and used proceeds from bond sales to finance economic development projects or nonessential governmental activities. Indian tribal governments had the option of allowing investors to claim a tax credit for the bonds. Indian tribal governments also had the option of receiving a direct payment from the Treasury of equal value to the tax credit. Low-Income Housing Tax Credit (LIHTC) Yes; in 2010, the allocation limit was the greater of $2.10 per capita or $2.43 million for each state, U.S. territory, and the District of Columbia. The per capita amount is subject to cost of living adjustments. No; the IRS has sole federal responsibility for the administration of the LIHTC program. However, the program is closely coordinated with HUD housing programs for the computation of the area median gross income (AMGI) used to determine household eligibility and maximum rents, as well as the definition of income. The IRS also uses HUD’s Uniform Physical Condition Standards to determine whether the low-income housing is suitable for occupancy. HUD also maintains a LIHTC database with information on the project address, number of units and low-income units, number of bedrooms, year the credit was allocated, year the project was placed in service, whether the project was new construction or rehabilitation, type of credit provided, and other sources of project financing. 
Yes; state housing finance agencies (HFA) award LIHTCs to owners of qualified low-income housing projects based on each state's qualified allocation plan, which generally establishes a state's selection criteria for how its LIHTCs will be awarded. Additionally, state HFAs monitor LIHTC properties for compliance with Internal Revenue Code requirements, such as rent ceilings and income limits for tenants, and report noncompliance to the IRS. Involves administration by a federal agency outside IRS? Yes; the Secretary of the Interior sets Standards for Rehabilitation for claiming the tax credit. Within Interior, NPS maintains a National Register of Historic Places; approves applications for rehabilitation projects proposing use of the 20 percent rehabilitation tax credit; and certifies whether completed projects meet the Secretary's standards and are eligible for the tax credit. NPS may inspect a rehabilitated property at any time during the five-year period following certification of rehabilitation for claiming the 20 percent preservation tax credit, and NPS may revoke certification if work was not done according to standards set by the agency. NPS also notifies the IRS of such revocations or dispositions so the tax credit may be recaptured. Involves administration by nonfederal entity? Yes; state historic preservation offices (SHPO) review applications and forward recommendations for historic designation of structures to NPS, provide program information and technical assistance to applicants, and conduct site visits. SHPOs may also inspect a rehabilitated property at any time during a five-year period following completion of a rehabilitation project using the tax credit. 10 percent credit for rehabilitation of structures (other than historic) Yes; NPS determines whether buildings in historic districts do not contribute to such districts and, consequently, are not deemed to be historic structures.
Such decertification is required before owners of such structures can claim the 10 percent tax credit. Yes; SHPOs review decertification applications, forward recommendations to NPS, and provide program information and technical assistance to applicants. Yes; EPA maintains a National Priority List of properties; such listed properties are ineligible for the tax incentive. Yes; state environmental agencies certify brownfield properties on which there has been an actual or threatened release or disposal of certain hazardous substances. Following certification, taxpayers may incur eligible remediation expenditures and claim the tax provision. Involves administration by a federal agency outside IRS? Yes; EPA maintains a National Priority List of properties; such listed properties are ineligible for the tax incentive. Involves administration by nonfederal entity? Yes; state environmental agencies certify brownfield properties on which there has been an actual or threatened release or disposal of certain hazardous substances. Following certification, site owners may claim the tax deduction, including for some expenditures incurred from prior tax years. Yes; the bond provision is subject to the private activity bond annual volume cap for each state. Yes; state and local governments, typically housing finance agencies, may issue bonds and use proceeds from bond sales to finance the construction of multifamily residential rental housing units for low- and moderate-income families. Varied; bonds for the construction of mass commuting facilities, and 25 percent of bond issues for privately owned intercity rail facilities, are included in the private activity bond annual state volume cap (government-owned facilities are exempted). Yes; state and local governments may issue bonds, and use proceeds from bond sales to finance construction of airports, docks, wharves, mass commuting facilities, and intercity rail facilities.
Yes; the bond provisions are subject to the private activity bond annual volume cap for each state. Yes; state and local governments may issue bonds, and then use proceeds from bond sales to finance capital improvements for water, sewer, and hazardous waste facilities. Tax expenditure Credit for holders of qualified zone academy bonds (QZAB) Volume cap or other allocation limits? Yes; the bond provision has national volume caps of $1.4 billion in 2010 and $400 million in 2011. Involves administration by a federal agency outside IRS? Yes; Treasury determines the credit rate of QZABs and allocates shares of QZAB volume to state education agencies on the basis of the states' respective populations of individuals below the poverty line (as defined by OMB). Involves administration by nonfederal entity? Yes; state education agencies determine the share of QZAB volume allocated to qualified zone academies. Local education agencies issue QZABs after applying for and obtaining permission from states. Business or nonprofit partners provide at least a 10 percent match of QZAB funds, either in cash or in-kind donations, to qualified zone academies. Exclusion of interest on public purpose state and local bonds Build America Bonds (BAB) Yes; state and local governments may issue bonds, and then use proceeds from bond sales to finance eligible projects—primarily public infrastructure projects such as highways, schools, and government buildings. Volume cap or other allocation limits? Involves administration by a federal agency outside IRS? Involves administration by nonfederal entity? Varied. Authority to designate up to $8 billion in tax-exempt private activity bonds (New York Liberty bonds) and $9 billion in advance refunding bonds.
Yes; the Governor of the State of New York and the Mayor of New York City were allowed to issue tax-exempt New York Liberty bonds, and use proceeds to finance reconstruction and renovation projects within the New York Liberty Zone. The Governor and Mayor were allowed to issue advance refunding bonds to pay principal, interest, or redemption price on certain prior issues of bonds issued for facilities located in New York City (and certain water facilities located outside of New York City). Katrina Emergency Act Gulf Opportunity Zone (GO Zone) Varied. Multiple provisions within the tax expenditure package have volume caps or other revenue loss limitations. Varied; multiple provisions within the tax expenditure package involved administration by federal agencies besides IRS. Varied; multiple provisions within the tax expenditure package involved administration by state and local governments and other entities. The maximum amount of advance refunding for certain governmental and qualified 501(c)(3) bonds that may have been issued was capped at $4.5 billion in the case of Louisiana, $2.25 billion in the case of Mississippi, and $1.125 billion in the case of Alabama. State and local governments in the GO Zone—Alabama, Louisiana, and Mississippi—issued advance refunding bonds. Gulf Tax Credit Bonds had a volume cap of $200 million for Louisiana, $100 million for Mississippi, and $50 million for Alabama. Yes; Treasury determines the credit rate of Gulf Tax Credit Bonds. State and local governments in the GO Zone—Alabama, Louisiana, and Mississippi—issued Gulf Tax Credit Bonds to help pay principal, interest, and premiums on outstanding state and local government bonds. Volume cap or other allocation limits? The maximum aggregate face amount of GO Zone Bonds that may have been issued in Alabama, Louisiana, or Mississippi was capped at $2,500 multiplied by the population of the respective state within the GO Zone; no other states were eligible for tax-exempt bond financing.
Involves administration by nonfederal entity? State and local governments in the GO Zone—Alabama, Louisiana, and Mississippi—issued bonds, though state governments approved projects for bond financing. Increased credit cap and other modified provisions for use of the Low-Income Housing Tax Credit (LIHTC) A special allocation of the LIHTC was issued for each of three years (2006, 2007, and 2008) to each of the states within the GO Zone. Each year's special allocation was capped at $18.00 multiplied by the population of the respective state in the GO Zone. In addition, the otherwise applicable LIHTC ceiling amount was increased for Florida and Texas by $3,500,000 per state. See above description of the LIHTC regarding the involvement of state housing finance agencies (HFA). An additional allocation of the New Markets Tax Credit (NMTC) in amounts equal to $300 million for 2005 and 2006, and $400 million for 2007, was to be allocated among qualified community development entities (CDE) to make qualified low-income community investments within the Gulf Opportunity Zone. See above description of the NMTC regarding involvement of the Community Development Financial Institutions (CDFI) Fund. See above description of the NMTC regarding the involvement of CDEs. Volume cap or other allocation limits? Varied. Multiple provisions within the tax expenditure package have volume caps or other revenue loss limitations. Involves administration by a federal agency outside IRS? Varied; multiple provisions within the tax expenditure package involved administration by federal agencies besides IRS. Involves administration by nonfederal entity? Varied; multiple provisions within the tax expenditure package involved administration by state and local governments.
The maximum amount of Midwestern Tax Credit Bonds that may have been issued was capped at: (1) $100 million for any state with an aggregate population located in all Midwest disaster areas within the state of at least 2,000,000; (2) $50 million for any state with an aggregate population located in all Midwest disaster areas within the state of at least 1,000,000 but less than 2,000,000; and (3) $0 for any other state. Yes; Treasury determines the credit rate of Midwestern Tax Credit Bonds. State governments in the Midwest disaster area issued Midwestern Tax Credit Bonds to help pay principal, interest, and premiums on outstanding state and local government bonds. The maximum aggregate face amount of Midwestern disaster zone bonds that may have been issued in any state in which a Midwestern disaster area was located was capped at $1,000 multiplied by the population of the respective state within the Midwestern disaster zone; no other states were eligible for tax-exempt bond financing. State and local governments in the Midwest disaster area issued bonds. Tax expenditure Increased credit cap and other modified provisions for use of the Low-Income Housing Tax Credit (LIHTC) Volume cap or other allocation limits? A special allocation of the LIHTC was issued for each of three years (2008, 2009, and 2010) to any state in which a Midwest disaster area was located. Each year's special allocation was capped at $8.00 multiplied by the population of the respective state in a Midwest disaster area. Involves administration by nonfederal entity? See above description of the LIHTC regarding the involvement of state housing finance agencies (HFA). Yes; for the provision allowing expensing of environmental remediation costs from disasters, state environmental agencies certify brownfield properties on which there has been an actual or threatened release or disposal of certain hazardous substances as a result of a federally declared disaster.
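The population-based caps described above are simple rate-times-population computations. The sketch below is illustrative only; the function name is not from the statute, and the population figures are hypothetical, not actual Census counts.

```python
# Illustrative computation of population-based bond volume caps.
# Population figures below are hypothetical, not actual Census counts.

def bond_volume_cap(per_capita_rate, zone_population):
    """Maximum aggregate face amount = statutory per-resident rate x population in the zone."""
    return per_capita_rate * zone_population

# GO Zone bonds: $2,500 multiplied by the state's population within the zone.
assert bond_volume_cap(2_500, 1_000_000) == 2_500_000_000  # $2.5 billion
# Midwestern disaster zone bonds: $1,000 multiplied by the zone population.
assert bond_volume_cap(1_000, 1_000_000) == 1_000_000_000  # $1.0 billion
```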
State and local governments had the authority to issue RZEDBs and RZFBs from February 17, 2009, through December 31, 2010. Tribal governments are authorized to issue tax-exempt bonds only if substantially all of the proceeds are used for essential governmental functions or certain manufacturing facilities. Legislation targeted toward the New York Liberty Zone and the Gulf Opportunity Zones (GO Zone) allowed an additional advance refunding to redeem certain prior tax-exempt bond issuances from state and local governments. The provision allowed state and local governments to refund, or refinance, bonds that are not redeemed within 90 days after the refunding bonds are issued. Residential rental property may be financed with tax-exempt facility bonds issued by state and local governments, if the financed project is a "qualified residential rental project" with required ratios of residents with certain income limitations. Under the provision, the operator of a qualified residential rental project may rely on the representations of prospective tenants displaced by reason of certain disasters to determine whether such individuals satisfy the income limitation for a qualified residential rental project. Description Mortgage revenue bonds are tax-exempt bonds issued by state and local governments to make mortgage loans to qualified mortgagors for the purchase, improvement, or rehabilitation of owner-occupied residences, and are typically required to exclusively finance mortgages for "first-time homebuyers." Qualified mortgage revenue bonds may be issued in targeted disaster areas without a first-time homebuyer financing requirement. Additionally, the permitted amount of qualified home-improvement loans increases from $15,000 to $150,000 for residences in a disaster zone. State and local governments in GO Zones and the Midwest disaster area may have issued tax credit bonds in areas affected by certain disasters.
95 percent of these bonds must be used to (1) pay principal, interest, or premium on outstanding bonds (other than private activity bonds) issued by state and local governments, or (2) make a loan to any political subdivision (e.g., local government) of such state to pay principal, interest, or premium on bonds (other than private activity bonds) issued by such political subdivision. These bonds differed from tax-exempt bonds in that rather than receiving tax-exempt interest payments, bondholders were entitled to a federal tax credit equal to a certain percentage of their investment. Description In certain disaster areas, tax-exempt bonds for qualified private activities may have been issued and were not restricted by aggregate annual state private activity bond limits. These bonds allow state and local governments to finance the construction or rehabilitation of properties following a disaster. Treasury named Series I inflation-indexed savings bonds purchased through financial institutions as "Gulf Coast Recovery Bonds" from March 29 through December 31, 2006, in order to encourage public support for recovery and rebuilding efforts in areas devastated by Hurricanes Katrina, Rita, and Wilma. Proceeds from the sale of the bonds were not specifically designated for hurricane relief and recovery efforts. The provision provided a temporary tax credit of 30 percent to qualified employers for the value of employer-provided lodging to qualified employees affected by certain disasters. The amount taken as a credit was not deductible by the employer. Certain disaster relief tax packages included a credit of 40 percent of the qualified wages (up to a maximum of $6,000 in qualified wages per employee) paid by an eligible employer that conducted business in a disaster zone and whose operations were rendered inoperable by the disaster.
Description For 2005, the Hope Scholarship Credit rate was 100 percent on the first $1,000 of qualified tuition and related expenses, and 50 percent on the next $1,000 of qualified tuition and related expenses. For 2005, the Hope credit was temporarily increased for students attending eligible educational institutions in the GO Zone to 100 percent of the first $2,000 in qualified tuition and related expenses and 50 percent of the next $2,000 of qualified tuition and related expenses, for a maximum credit of $3,000 per student. For 2006, this provision increased the tax credit again to 100 percent of the first $2,200 of qualified tuition and related expenses (instead of $1,100 under standard law in 2006), and 50 percent of the next $2,200 of qualified tuition and related expenses (instead of $1,100) for a maximum credit of $3,300 per student (instead of $1,650). For 2008 and 2009, the Hope scholarship credit was extended to students attending eligible educational institutions in the Midwestern disaster area, based on increased credit rates enacted in 2006. Individual taxpayers are typically allowed to claim a nonrefundable credit, the Lifetime Learning Credit, equal to 20 percent of qualified tuition and related expenses of up to $10,000 (resulting in a total credit of up to $2,000) incurred during the taxable year on behalf of the taxpayer, the taxpayer’s spouse, or any dependents. The Lifetime Learning Credit rate was temporarily increased from 20 percent to 40 percent for students attending institutions in certain disaster areas. Description The provision increased from 20 to 26 percent, and from 10 to 13 percent, respectively, the preservation credits with respect to any certified historic structure or qualified rehabilitated building located in certain disaster areas, provided the qualified rehabilitation expenditures with respect to such buildings or structures were incurred during an established period of time following the disaster. 
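The tiered credit rates described above reduce to straightforward arithmetic. The following sketch is illustrative only (the function names are not from the Internal Revenue Code); it computes the two-tier Hope credit under the standard and GO Zone tier sizes, and the Lifetime Learning Credit at the standard and temporarily increased rates.

```python
# Illustrative arithmetic for the tiered education credits described above.

def hope_credit(expenses, tier_size):
    """Two-tier Hope credit: 100 percent of the first tier of qualified
    expenses plus 50 percent of the next, equal-sized tier."""
    tier1 = min(expenses, tier_size)
    tier2 = min(max(expenses - tier_size, 0), tier_size)
    return tier1 + tier2 // 2

def lifetime_learning_credit(expenses, rate_percent=20, cap=10_000):
    """Lifetime Learning Credit: a flat percentage of qualified expenses up to the cap."""
    return min(expenses, cap) * rate_percent // 100

# 2005 standard law: 100% of first $1,000 + 50% of next $1,000 -> max $1,500.
assert hope_credit(2_500, 1_000) == 1_500
# 2005 GO Zone: tiers doubled to $2,000 -> max $3,000 per student.
assert hope_credit(4_000, 2_000) == 3_000
# Lifetime Learning at the temporary 40 percent disaster-area rate: up to $4,000.
assert lifetime_learning_credit(10_000, rate_percent=40) == 4_000
```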
The LIHTC cap amount increased for affected states within the GO Zones and the Midwestern disaster area. Also, rules concerning implementation of the LIHTC were modified for the GO Zone; in the case of property placed in service from 2006-2008 in a nonmetropolitan area within the GO Zone, LIHTC income targeting rules are applied by using a national nonmetropolitan median gross income standard instead of the area median gross income standard typically applied to low-income housing projects. The provision allowed an additional allocation of NMTCs in an amount equal to $300 million for 2005 and 2006, and $400 million for 2007, to be allocated among qualified community development entities to make qualified low-income community investments within the Katrina GO Zone. Description Individuals whose principal residence was in certain disaster areas or who were otherwise displaced from their homes by disasters may have elected to calculate their Earned Income Tax Credit and Refundable Child Credit for the taxable year when the disaster occurred using their earned income from the prior taxable year. Employers hiring and retaining individuals who worked in certain disaster areas were eligible to claim up to $2,400 in Work Opportunity Tax Credits per employee (or 40 percent of up to the first $6,000 of wages). Employees in other targeted categories for the tax credit (e.g., qualified veterans or families receiving food stamps) are typically required to provide certification from a designated local agency of their inclusion in such groups on or before they begin work, or their employer provides documentation to said agencies no later than 28 days after the employee begins work. However, employees who worked and/or lived in certain disaster areas did not require certification from such agencies for employers to qualify for the tax credit.
Tax provision or special rule Deductions Carryback of net operating losses (NOL) Under present law, a net operating loss (NOL) is, generally, the amount by which a taxpayer's business deductions exceed its gross income. In general, an NOL may be carried back 2 years and carried over 20 years to offset taxable income in such years. NOLs offset taxable income in the order of the taxable years to which the NOL may be carried. This provision provided a special 5-year carryback period for NOLs to the extent of qualified disaster losses in any presidentially declared disaster area during 2008 and 2009. Individuals and corporations affected by certain disasters may have carried back, for a period of 5 years, NOLs to the extent of the aggregate amount of deductions from such disasters, including deductions for qualified casualty losses; certain moving expenses; certain temporary housing expenses; depreciation deductions with respect to qualified property in disaster areas for the taxable year the property was placed into service; and certain repair expenses resulting from applicable disasters. An NOL attributable to a farming business may have been carried back for five years if such loss was attributable to any portion of qualified timber property which was located in the Katrina or Rita GO Zones. Description The provision provided an election for taxpayers who incurred casualty losses attributable to certain disasters with respect to public utility property located in applicable disaster zones. Under the election, such losses may be carried back 5 years immediately preceding the taxable year in which the loss occurred. If the application of this provision resulted in the creation or increase of an NOL for the year in which the casualty loss is taken into account, the NOL may be carried back or carried over as under present law applicable to NOLs for such year.
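The ordering rule above (NOLs offset taxable income in the order of the taxable years to which the loss may be carried, earliest year first) can be sketched as follows. The function name and dollar figures are illustrative only, not drawn from the statute.

```python
# Illustrative sketch of applying an NOL carryback to prior-year taxable income,
# earliest carryback year first, as described above. Figures are hypothetical.

def apply_nol_carryback(nol, taxable_income_by_year):
    """Offset taxable income year by year, earliest first.
    Returns (remaining income per year, unused NOL)."""
    remaining = []
    for income in taxable_income_by_year:
        used = min(nol, income)
        nol -= used
        remaining.append(income - used)
    return remaining, nol

# A $120,000 NOL carried back against incomes of $50k, $40k, $60k (earliest first):
# the first two years are fully offset, and $30,000 of the third year remains taxed.
assert apply_nol_carryback(120_000, [50_000, 40_000, 60_000]) == ([0, 0, 30_000], 0)
```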
The provision provided an election for taxpayers to treat any GO Zone public utility casualty loss caused by Hurricane Katrina as a specified liability loss to which the present-law 10-year carryback period applies. The amount of the casualty loss is reduced by the amount of any gain recognized by the taxpayer from involuntary conversions of public utility property (e.g., physical destruction of such property) located in the GO Zone caused by Hurricane Katrina. Taxpayers who elect to use this provision are not eligible to treat the loss as part of the 5-year net operating loss carryback provided under another provision of the GO Zone Act (see 5-year NOL carryback of public utility casualty losses mentioned above). The provision suspended two limitations on personal casualty or theft losses to the extent those losses arise in certain disaster areas and are attributable to such disasters. First, such losses no longer needed to exceed $100 per casualty or theft; present law at the time required each loss to exceed a $100 threshold. Second, such losses were deductible without regard to whether aggregate net losses exceed 10 percent of a taxpayer's adjusted gross income, which was standard under present law at the time the disasters took place. The provision treats personal casualty or theft losses from the pertinent disaster as a deduction separate from other casualty losses. Description The provision removed one limitation on personal casualty or theft losses to the extent those losses arise in federally declared disaster areas during 2008 and 2009. More specifically, losses were deductible without regard to whether aggregate net losses exceed 10 percent of a taxpayer's adjusted gross income, which was standard under present law at the time the disasters took place. The provision treats personal casualty or theft losses from federally declared disasters as a deduction separate from other casualty losses.
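The two suspended limitations can be expressed numerically. A minimal sketch, assuming a single casualty loss and hypothetical dollar amounts (the function name is illustrative, not from the Internal Revenue Code):

```python
# Illustrative arithmetic for the casualty-loss limitations described above.
# Dollar amounts are hypothetical.

def deductible_casualty_loss(loss, agi, per_casualty_floor=100, agi_floor_percent=10):
    """Present-law treatment: reduce the loss by the per-casualty floor,
    then deduct only the excess over 10 percent of adjusted gross income."""
    after_floor = max(loss - per_casualty_floor, 0)
    agi_floor = agi * agi_floor_percent // 100
    return max(after_floor - agi_floor, 0)

# $8,000 loss, $50,000 AGI under present law: $8,000 - $100 - $5,000 = $2,900.
assert deductible_casualty_loss(8_000, 50_000) == 2_900
# Disaster-area treatment above: both limitations suspended, so the full loss is deductible.
assert deductible_casualty_loss(8_000, 50_000, per_casualty_floor=0, agi_floor_percent=0) == 8_000
```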
However, present law at the time contained a required threshold of $100 for meeting requirements to claim losses, and this provision increased the threshold to $500. These rules are in effect for all federally declared disaster areas in 2008 and 2009 aside from those areas declared "Midwestern disaster areas" from flooding, tornadoes, and storms in 2008. The portion of the provision increasing the limitation per casualty to $500 only applies to 2009. Under present law, a taxpayer's deduction for charitable contributions of inventory generally is limited to the taxpayer's basis (typically cost) in the inventory, or if less, the fair market value of the inventory. Under this provision, a C corporation was eligible to claim an enhanced deduction for qualified book donations. An enhanced deduction is equal to the lesser of (1) basis plus one-half of the item's appreciation (basis plus one-half of fair market value in excess of basis) or (2) two times basis. Description Under present law, a taxpayer's deduction for charitable contributions of inventory generally is limited to the taxpayer's basis (typically cost) in the inventory, or if less, the fair market value of the inventory. Under this provision, any taxpayer, whether or not a C corporation, engaged in a trade or business was eligible to claim an enhanced deduction for donations of food inventory. An enhanced deduction is equal to the lesser of (1) basis plus one-half of the item's appreciation (i.e., basis plus one-half of fair market value in excess of basis) or (2) two times basis. For taxpayers other than C corporations, the total deduction for donations of food inventory in a taxable year generally may not exceed 10 percent of the taxpayer's net income for such taxable year from which contributions of apparently wholesome food are made.
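The enhanced deduction formula stated above (the lesser of basis plus one-half of appreciation, or twice basis) can be checked with a short computation; the dollar figures are hypothetical and the function name is illustrative.

```python
# Illustrative arithmetic for the enhanced inventory-donation deduction described above.
# Basis and fair market value figures are hypothetical.

def enhanced_deduction(basis, fair_market_value):
    """Lesser of (1) basis plus one-half of appreciation or (2) two times basis."""
    appreciation = max(fair_market_value - basis, 0)
    return min(basis + appreciation // 2, 2 * basis)

# $100 basis, $300 fair market value: basis + half appreciation = $200;
# twice basis = $200 -> enhanced deduction of $200.
assert enhanced_deduction(100, 300) == 200
# Modest appreciation: $100 basis, $150 FMV -> $125 (the half-appreciation limb binds).
assert enhanced_deduction(100, 150) == 125
```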
The provision allowed a taxpayer using a vehicle while donating services to charity for the provision of relief related to certain disasters to compute the charitable mileage deduction using a rate equal to 70 percent of the business mileage rate in effect on the date of the contribution, rather than the charitable standard mileage rate generally in effect under law. The provision allowed for qualified contributions up to the amount by which an individual's contribution base (adjusted gross income without regard to any NOL carryback) or corporation's taxable income exceeds the deduction for other charitable contributions. Contributions in excess of this amount are carried over to succeeding taxable years subject to limitations under law. The provision allowed an additional first-year depreciation deduction equal to a percentage of the adjusted basis of qualified property; the percentage varies depending on the disaster area where the property is located, e.g., 30 percent for the New York Liberty Zone, 50 percent for GO Zones, the Kansas Disaster Zone, and other areas in the U.S. declared disaster areas under national disaster relief. A taxpayer was permitted a deduction for 50 percent of qualified disaster clean-up costs, such as removal of debris or demolition of structures, paid or incurred for an established period of time following certain disasters. Under the provision, a taxpayer may have elected to treat any repair of business-related property affected by presidentially declared disasters, including repairs that are paid or incurred by the taxpayer, as a deduction for the taxable year in which paid or incurred. Description Taxpayers may typically elect to deduct (or "expense") certain environmental remediation expenditures that would otherwise be chargeable to a capital account, in the year paid or incurred. The deduction applies for both regular and alternative minimum tax purposes.
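The 70 percent mileage rule above is a one-line computation. The sketch below works in cents to avoid rounding issues; the $0.50 business rate is hypothetical, not an actual IRS standard mileage rate.

```python
# Illustrative arithmetic for the disaster-relief charitable mileage rule described above.
# The business rate used here (50 cents/mile) is hypothetical.

def charitable_mileage_deduction_cents(miles, business_rate_cents):
    """Deduction, in cents, at 70 percent of the business standard mileage rate."""
    return miles * business_rate_cents * 70 // 100

# 1,000 relief miles at a hypothetical 50-cent business rate -> $350 deduction.
assert charitable_mileage_deduction_cents(1_000, 50) == 35_000
```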
The expenditure must be incurred in connection with the abatement or control of hazardous substances at a qualified contaminated site. The provision was extended beyond present law for qualified contaminated sites located in the GO Zone and Midwestern disaster zones, as well as federally declared disaster areas in 2008 and 2009. The length of such extensions depended on the applicable disaster zone. Qualified improvements made on leasehold property in the New York Liberty Zone could have been depreciated over a 5-year period using the straight-line method of depreciation, instead of the 39-year period standard under present law. Qualified leasehold property improvements included improvements to nonresidential real property, such as additional walls and plumbing and electrical improvements made to an interior portion of a building. Description In lieu of depreciation, a taxpayer with a sufficiently small amount of annual investment may elect to expense qualified property placed in service for the taxable year under section 179 of the Internal Revenue Code. Taxpayers in certain disaster areas were eligible to increase the maximum dollar amount of Section 179 expensing for qualified property, which is generally defined as depreciable tangible personal property that is purchased for use in the active conduct of a trade or business. Taxpayers in the New York Liberty Zone could deduct an additional amount up to the lesser of $35,000 or the cost of the qualified Section 179 property put into service during the calendar year. Taxpayers in the GO Zone, Kansas Disaster Zone, or disaster zones covered under "National Disaster Relief" could deduct an additional amount up to the lesser of $100,000 or the cost of the qualified Section 179 property put into service during the calendar year.
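The increased Section 179 amounts described above follow a lesser-of pattern. A minimal sketch, with the zone caps taken from the text and hypothetical property costs:

```python
# Illustrative arithmetic for the additional Section 179 expensing described above.
# Property costs are hypothetical; the caps ($35,000 and $100,000) come from the text.

def additional_section_179(zone_cap, qualified_property_cost):
    """Additional expensing: the lesser of the zone's dollar cap or the cost of
    qualified Section 179 property put into service during the calendar year."""
    return min(zone_cap, qualified_property_cost)

# New York Liberty Zone ($35,000 cap), $50,000 of qualified property -> $35,000.
assert additional_section_179(35_000, 50_000) == 35_000
# GO Zone / Kansas Disaster Zone / national relief ($100,000 cap) -> full $50,000 cost.
assert additional_section_179(100_000, 50_000) == 50_000
```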
The provision doubled, for certain taxpayers, the present-law expensing limit of $10,000 for reforestation expenditures paid or incurred by such taxpayers for certain periods of time with respect to qualified timber property in the Katrina, Rita, and Wilma GO Zones. For example, single taxpayers may have claimed $20,000 instead of $10,000 for eligible reforestation expenditures. Description The Internal Revenue Code allowed an additional first-year depreciation deduction equal to 30 or 50 percent of the adjusted basis of qualified property, including (1) property to which the modified accelerated cost recovery system applies with an applicable recovery period of 20 years or less, (2) water utility property, (3) certain computer software, or (4) qualified leasehold improvement property placed in service by December 31, 2005. Under this provision, the Secretary of the Treasury had authority to further extend the placed-in-service date (beyond December 31, 2005), on a case-by-case basis, for up to 1 year for certain property eligible for the December 31, 2005 placed-in-service date under present law. The authority extended only to property placed in service or manufactured in the Katrina, Rita, or Wilma GO Zones. In addition, the authority extended only to circumstances in which the taxpayer was unable to meet the December 31, 2005 deadline as a result of Hurricanes Katrina, Rita, and/or Wilma. The provision provided an additional exemption of $500 for each displaced individual of a taxpayer affected by certain disasters. The taxpayer may have claimed the additional exemption for no more than four individuals; thus the maximum additional exemption amount was $2,000.
Individuals whose principal residence was located in the Hurricane Katrina core disaster area or certain portions of the Midwestern disaster area on the date that a disaster was declared may generally exclude from gross income any nonbusiness debt, such as a mortgage, that is discharged by an applicable entity on or after the applicable disaster date for an established time period. If the individual's primary residence was located in the Hurricane Katrina disaster area (outside the core disaster area) or other portions of the Midwestern disaster area, the individual must also have had an economic loss because of the disaster. A taxpayer may have elected not to recognize gain with respect to property that was involuntarily converted, or destroyed, if the taxpayer acquired qualified replacement property within an applicable period, which is typically 2 years. The replacement period for property that was involuntarily converted in certain disaster areas is 5 years after the end of the taxable year in which a gain is realized. Substantially all of the use of the replacement property must be within the affected area. Description The provision provided a temporary income exclusion for the value of in-kind lodging provided for a month to a qualified employee (and the employee's spouse or dependents) affected by certain disasters by or on behalf of a qualified employer. The amount of the exclusion for any month for which lodging is furnished could not have exceeded $600. The exclusion did not apply for purposes of Social Security and Medicare taxes or unemployment tax. Under the provision, reimbursement by charitable organizations to a volunteer for the costs of using a passenger automobile in providing donated services to charity for relief of certain disasters was excludable from the gross income of the volunteer. The reimbursement was allowed up to an amount that did not exceed the business standard mileage rate prescribed for business use.
The provision provided an exception to the 10 percent early withdrawal tax in the case of a qualified distribution of up to $100,000 from a qualified retirement plan, such as a 401(k) plan, a 403(b) annuity, or an IRA. Income from a qualified distribution may have been included in income ratably over 3 years, and the amount of a qualified distribution may have been recontributed to an eligible retirement plan within 3 years. Description In general, under the provision, a qualified distribution received from certain retirement plans in order to purchase a home in certain disaster areas may be recontributed to such plans in certain circumstances. The provision applies to an individual who receives a qualified distribution that was to be used to purchase or construct a principal residence in a disaster area, but the residence is not purchased or constructed on account of the disaster. Under this provision, residents whose principal residence was located in designated disaster areas and who suffered economic loss as a result of such disasters may borrow up to $100,000 from their employer plan. In addition to increasing the aggregate plan loan limit from the usual $50,000, the provision also relaxed other requirements relating to plan loans. The provision permits certain retirement plan amendments made pursuant to changes made under Section 1400Q of the Internal Revenue Code, or regulations issued thereunder, to be retroactively effective. In order for this treatment to apply, the plan amendment is required to be made on or before the last day of the first plan year beginning on or after January 1, 2007, or such later date as provided by the Secretary of the Treasury. Governmental plans are given an additional 2 years in which to make required plan amendments. The Secretary of the Treasury was required to provide certain administrative relief to taxpayers affected by certain presidentially declared disasters.
Such relief allows for postponement of actions required by law, such as filing tax returns, paying taxes, or filing a claim for credit or refund of tax, for an applicable period of time following a disaster. The provision authorized the Secretary of the Treasury to make such adjustments in the application of federal tax laws to ensure that taxpayers did not lose any deduction or credit or experience a change of filing status by reason of temporary relocations caused by applicable disasters. Any adjustments made under this provision must ensure that an individual is not taken into account by more than one taxpayer with respect to the same tax benefit. The Katrina Emergency Act package was enacted by the Katrina Emergency Tax Relief Act of 2005 (Pub. L. No. 109-73), and targeted the Hurricane Katrina disaster area (consisting of the states of Alabama, Florida, Louisiana, and Mississippi), including core disaster areas determined by the President to warrant individual or individual and public assistance from the federal government following Hurricane Katrina in August 2005. On enactment, JCT projected total budget effects of $6,109 million for fiscal years 2006 through 2015. The Gulf Opportunity Zone package was enacted by the Gulf Opportunity (GO) Zone Act of 2005 (Pub. L. No. 109-135). Counties and parishes in Alabama, Florida, Louisiana, Mississippi and Texas that warranted additional, long-term federal assistance following Hurricanes Katrina, Rita and Wilma in 2005 were designated as Katrina, Rita and/or Wilma GO Zones. Portions of the Katrina and Rita GO Zones overlapped with counties and parishes eligible for relief under the Katrina Emergency Tax Relief Act.
The Gulf Opportunity Zone tax package also included some nondisaster-related tax provisions: election to treat combat pay as earned income for purposes of the Earned Income Tax Credit; modifications of suspension of interest and penalties where IRS fails to contact taxpayer; authority for undercover operations; disclosure of tax information to facilitate combined employment tax reporting; disclosure of return information regarding terrorist activities; disclosure of return information to carry out contingent repayment of student loans; and various tax technical corrections. On enactment, JCT projected total budget effects of $8,715 million for the disaster provisions for fiscal years 2006 through 2015. The Kansas disaster relief package was enacted by the Food, Conservation, and Energy Act of 2008 (Pub. L. No. 110-246). The Kansas disaster relief package targeted 24 counties in Kansas affected by storms and tornadoes that began on May 4, 2007. On enactment, JCT projected total revenue effects of $63 million for the disaster provisions for fiscal years 2008 through 2018. The Midwest disaster relief package was enacted by the Emergency Economic Stabilization Act of 2008, Energy Improvement and Extension Act of 2008, and Tax Extenders and the Alternative Minimum Tax Relief Act of 2008 (Pub. L. No. 110-343). The Midwest disaster relief package targeted selected counties in 10 states affected by tornadoes, severe storms and flooding occurring from May 20-July 31, 2008. The listed components associated with the Midwest disaster relief package do not include rules outlining IRS reporting requirements for contributions to disaster relief; these rules apply for tax returns due after December 31, 2008. On enactment, JCT projected total revenue effects of $4,576 million for the Midwest disaster provisions for fiscal years 2009 through 2018.
The National disaster relief package was enacted by the Emergency Economic Stabilization Act of 2008, Energy Improvement and Extension Act of 2008, and Tax Extenders and the Alternative Minimum Tax Relief Act of 2008 (Pub. L. No. 110-343). The National disaster relief package targeted individuals and businesses located in any area declared a disaster area in the United States during tax years 2008 and 2009. Certain provisions of the National Disaster Relief Act of 2008 do not apply to the Midwest disaster areas because the Heartland and Hurricane Ike Disaster Relief Act, part of the same legislation that resulted in the National Disaster Relief Act, provides other tax benefits. On enactment, JCT projected total revenue effects of $8,091 million for fiscal years 2009 through 2018. The sum of the provisions counted across the six disaster relief packages exceeds the total number of distinct provisions because some tax provisions and special rules were part of more than one disaster package. In March 2011 and more recently in May 2011, we reported on the potential for duplication among 80 federal economic development programs at four agencies—the Departments of Commerce (Commerce), Housing and Urban Development (HUD), and Agriculture (USDA), and the Small Business Administration (SBA). According to the agencies, funding provided for these 80 programs in fiscal year 2010 amounted to $6.2 billion, of which about $2.9 billion was for economic development efforts, largely in the form of grants, loan guarantees, and direct loans. Some of these 80 programs can fund a variety of activities, including such noneconomic development activities as rehabilitating housing and building community parks. Our work as of May 2011 suggested that the design of each of these 80 economic development programs appears to overlap with that of at least one other spending program in terms of the economic development activity that they are authorized to fund, as shown in table 12.
For example, 35 programs can fund infrastructure, and 27 programs can fund commercial buildings. Some of the 80 economic development programs are targeted to economically distressed areas. In February 2012, we reported our findings to date on overlap and fragmentation among 53 economic development programs that support entrepreneurial efforts. Based on a review of the missions and other related program information for these 53 programs, we determined that these programs overlap based not only on their shared purpose of serving entrepreneurs but also on the type of assistance they offer. Much of the overlap and fragmentation among these 53 programs is concentrated among programs that support economically distressed and disadvantaged businesses. In ongoing work that will be published as a separate report, we plan to examine the extent of potential duplication among the 53 programs. In addition to the contact named above, MaryLynn Sergent, Assistant Director; Elizabeth Curda; Jeffrey DeMarco; Edward Nannenhorn; Melanie Papasian; Mark Ryan; and Sabrina Streagle made key contributions to this report. To determine what is known about the effectiveness of selected community development tax expenditures, we conducted a literature review of studies that addressed the following tax expenditure provisions: (1) the Empowerment Zone/Renewal Community tax programs; (2) the New Markets Tax Credit program; (3) tax expenditures available for certain disaster areas; and (4) rehabilitation tax credits, including the 20 percent tax credit for preservation of historic structures and the 10 percent tax credit for the rehabilitation of structures (other than historic). We searched databases, including ProQuest, Google Scholar, and EconLit, for studies through May 2011. We focused on studies that attempted to measure the impact of the selected tax incentives on certain measures of community development, such as poverty and unemployment rates. Abravanel, Martin D., Nancy M.
Pindus, and Brett Theodos. Evaluating Community and Economic Development Programs: A Literature Review to Inform Evaluation of the New Markets Tax Credit Program. Prepared for the Department of the Treasury Community Development Financial Institutions Fund. The Urban Institute. September 2010. Aprill, Ellen P., and Richard Schmalbeck. “Post-Disaster Tax Legislation: A Series of Unfortunate Events.” Duke Law Journal, vol. 56, no. 1 (2006): 51-100. Bartik, Timothy J. “Bringing Jobs to People: How Federal Policy Can Target Job Creation for Economically Distressed Areas.” Discussion paper prepared for The Hamilton Project (2010). Busso, Matias, Jesse Gregory, and Patrick M. Kline. “Assessing the Incidence and Efficiency of a Prominent Place Based Policy.” National Bureau of Economic Research Working Paper no. 16096 (2010). Chernick, Howard, and Andrew F. Haughwout. “Tax Policy and the Fiscal Cost of Disasters: NY and 9/11.” National Tax Journal, vol. 59, no. 3 (2006): 561-577. Congressional Research Service. Empowerment Zones, Enterprise Communities, and Renewal Communities: Comparative Overview and Analysis. Washington, D.C.: 2011. Gotham, Kevin F., and Miriam Greenberg. “From 9/11 to 8/29: Post-Disaster Recovery and Rebuilding in New York and New Orleans.” Social Forces, vol. 87, no. 2 (2008): 1039-1062. Ham, John C., Charles Swenson, Ayse Imrohoroglu, and Heonjae Song. “Government Programs Can Improve Local Labor Markets: Evidence from State Enterprise Zones, Federal Empowerment Zones and Federal Enterprise Community.” Journal of Public Economics, vol. 95, no. 7-8 (August 2011): 779-797. Hanson, Andrew. “Utilization of Employment Tax Credits: An Analysis of the Empowerment Zone Wage Tax Credit.” Public Budgeting & Finance, vol. 31, no. 1 (2011): 23-36. Hanson, Andrew, and Shawn Rohlin. “The Effect of Location-Based Tax Incentives on Establishment Location and Employment across Industry Sectors.” Public Finance Review, vol. 39, no. 2 (2011): 195-225.
Hebert, Scott, Avis Vidal, Greg Mills, Franklin James, and Debbie Gruenstein. Interim Assessment of the Empowerment Zones and Enterprise Communities (EZ/EC) Program: A Progress Report. A report prepared for the U.S. Department of Housing and Urban Development. November 2001. Jennings, James. “The Empowerment Zone in Boston, Massachusetts, 2000-2009.” Review of Black Political Economy, vol. 38, no. 1 (2011): 63-81. Joint Committee on Taxation. Incentives for Distressed Communities: Empowerment Zones and Renewal Communities (JCX-38-09), October 5, 2009. Kolko, Jed, and David Neumark. “Do Some Enterprise Zones Create Jobs?” Journal of Policy Analysis and Management, vol. 29, no. 1 (2010): 5-38. Listokin, David, Michael L. Lahr, Charles Heydt, and David Stanek. Second Annual Report on the Economic Impact of the Federal Historic Tax Credit. A report prepared for the Historic Tax Credit Coalition. May 2011. Rich, Michael J., and Robert P. Stoker. “Rethinking Empowerment: Evidence from Local Empowerment Zone Programs.” Urban Affairs Review, vol. 45, no. 6 (2010): 775-796. Richardson, James A. “Katrina/Rita: The Ultimate Test for Tax Policy.” National Tax Journal, vol. 59, no. 3 (September 2006): 551-560. Schilling, James D., Kerry D. Vandell, Ruslan Koesman, and Zhenguo Lin. “How Tax Credits Have Affected the Rehabilitation of the Boston Office Market.” Journal of Real Estate Research, vol. 28, no. 4 (2006): 321-348. Stead, Meredith M. “Implementing Disaster Relief Through Tax Expenditures: An Assessment of the Katrina Emergency Tax Relief Measures.” New York University Law Review, vol. 81, no. 6 (2006): 2158-2191. Tolan, Patrick E., Jr. “The Flurry of Tax Law Changes Following the 2005 Hurricanes: A Strategy for More Predictable and Equitable Tax Treatment of Victims.” Brooklyn Law Review, vol. 72, no. 3 (2007): 799-870. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap, and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP.
Washington, D.C.: February 28, 2012. Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012. Managing for Results: Opportunities for Congress to Address Government Performance Issues. GAO-12-215R. Washington, D.C.: December 9, 2011. Economic Development: Efficiency and Effectiveness of Fragmented Programs Are Unclear. GAO-11-872T. Washington, D.C.: July 27, 2011. Efficiency and Effectiveness of Fragmented Economic Development Programs Are Unclear. GAO-11-477R. Washington, D.C.: May 19, 2011. Managing for Results: GPRA Modernization Act Implementation Provides Important Opportunities to Address Government Challenges. GAO-11-617T. Washington, D.C.: May 10, 2011. Performance Measurement and Evaluation: Definitions and Relationships. GAO-11-646SP. Washington, D.C.: May 2, 2011. Indian Issues: Observations on Some Unique Factors that May Affect Economic Activity on Tribal Lands. GAO-11-543T. Washington, D.C.: April 7, 2011. Government Performance: GPRA Modernization Act Provides Opportunities to Help Address Fiscal, Performance, and Management Challenges. GAO-11-466T. Washington, D.C.: March 16, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Recovery Act: Opportunities to Improve Management and Strengthen Accountability over States’ and Localities’ Uses of Funds. GAO-10-999. Washington, D.C.: September 20, 2010. Community Development Block Grants: Entitlement Communities’ and States’ Methods of Distributing Funds Reflect Program Flexibility. GAO-10-1011. Washington, D.C.: September 15, 2010. Revitalization Programs: Empowerment Zones, Enterprise Communities, and Renewal Communities. GAO-10-464R. Washington, D.C.: March 12, 2010.
New Markets Tax Credit: The Credit Helps Fund a Variety of Projects in Low-Income Communities, but Could Be Simplified. GAO-10-334. Washington, D.C.: January 29, 2010. Disaster Recovery: Past Experiences Offer Insights for Recovering from Hurricanes Ike and Gustav and Other Recent Natural Disasters. GAO-08-1120. Washington, D.C.: September 26, 2008. Gulf Opportunity Zone: States Are Allocating Federal Tax Incentives to Finance Low-Income Housing and a Wide Range of Private Facilities. GAO-08-913. Washington, D.C.: July 16, 2008. Tax Expenditures: Available Data Are Insufficient to Determine the Use and Impact of Indian Reservation Depreciation. GAO-08-731. Washington, D.C.: June 26, 2008. Tax Policy: Tax-Exempt Status of Certain Bonds Merits Reconsideration, and Apparent Noncompliance with Issuance Cost Limitations Should Be Addressed. GAO-08-364. Washington, D.C.: February 15, 2008. HUD and Treasury Programs: More Information on Leverage Measures’ Accuracy and Linkage to Program Goals is Needed in Assessing Performance. GAO-08-136. Washington, D.C.: January 18, 2008. 21st Century Challenges: How Performance Budgeting Can Help. GAO-07-1194T. Washington, D.C.: September 20, 2007. Leveraging Federal Funds for Housing, Community, and Economic Development. GAO-07-768R. Washington, D.C.: May 25, 2007. Tax Policy: New Markets Tax Credit Appears to Increase Investment by Investors in Low-Income Communities, but Opportunities Exist to Better Monitor Compliance. GAO-07-296. Washington, D.C.: January 31, 2007. Empowerment Zone and Enterprise Community Program: Improvements Occurred in Communities, but the Effect of the Program is Unclear. GAO-06-727. Washington, D.C.: September 22, 2006. Federal Tax Policy: Information on Selected Capital Facilities Related to the Essential Governmental Function Test. GAO-06-1082. Washington, D.C.: September 13, 2006. Rural Economic Development: More Assurance Is Needed That Grant Funding Information Is Accurately Reported. GAO-06-294. 
Washington, D.C.: February 24, 2006. Telecommunications: Challenges to Assessing and Improving Telecommunications for Native Americans on Tribal Lands. GAO-06-189. Washington, D.C.: January 11, 2006. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. Government Performance and Accountability: Tax Expenditures Represent a Substantial Federal Commitment and Need to Be Reexamined. GAO-05-690. Washington, D.C.: September 23, 2005. A Glossary of Terms Used in the Federal Budget Process. GAO-05-734SP. Washington, D.C.: September 2005. Community Development: Federal Revitalization Programs Are Being Implemented, but Data on the Use of Tax Benefits Are Limited. GAO-04-306. Washington, D.C.: March 5, 2004. New Markets Tax Credit Program: Progress Made in Implementation, but Further Actions Needed to Monitor Compliance. GAO-04-326. Washington, D.C.: January 30, 2004. September 11: Overview of Federal Disaster Assistance to the New York City Area. GAO-04-72. Washington, D.C.: October 31, 2003. Tax Administration: Information Is Not Available to Determine Whether $5 Billion in Liberty Zone Tax Benefits Will Be Realized. GAO-03-1102. Washington, D.C.: September 30, 2003. Economic Development: Multiple Federal Programs Fund Similar Economic Development Activities. GAO/RCED/GGD-00-220. Washington, D.C.: September 29, 2000. Tax Policy: Tax Expenditures Deserve More Scrutiny. GAO/GGD/AIMD-94-122. Washington, D.C.: June 3, 1994.
Tax expenditures—exclusions, credits, deductions, deferrals, and preferential tax rates—are one tool the government uses to promote community development. Multiple tax expenditures contribute to community development. GAO (1) identified community development tax expenditures and potential overlap and interactions among them; (2) assessed the data and performance measures available and used to assess their performance; and (3) determined what previous studies have found about selected tax expenditures’ performance. GAO identified community development activities using criteria based on various federal sources and compared them with authorized uses of tax expenditures. GAO reviewed agency documents and interviewed officials from the Internal Revenue Service (IRS) and five other agencies. GAO also reviewed empirical studies for selected tax expenditures, including the New Markets Tax Credit and the Empowerment Zone program, which expired in 2011. GAO identified 23 community development tax expenditures available in fiscal year 2010. For example, five ($1.5 billion) targeted economically distressed areas, and nine ($8.7 billion) supported specific activities such as rehabilitating structures for business use. The design of each community development tax expenditure appears to overlap with that of at least one other tax expenditure in terms of the areas or activities funded. Federal tax laws and regulations permit use of multiple tax expenditures or tax expenditures with other federal spending programs, but often with limits. For instance, employers cannot claim more than one employment tax credit for the same wages paid to an individual. Besides IRS, administering many community development tax expenditures involves other federal agencies as well as state and local governments. For example, the National Park Service oversees preservation standards for the 20 percent historic rehabilitation tax credit.
Fragmented administration and program overlap can result in administrative burden, such as applications to multiple federal agencies to fund the needs of a distressed area or finance a specific project. Limited data and measures are available to assess community development tax expenditures’ performance. IRS only collects information needed to administer the tax code or otherwise required by law, and IRS data often do not identify the specific communities assisted. Other federal agencies helping administer community development tax expenditures also collect limited information on projects and associated outcomes. GAO has long recommended that the Executive Branch improve its ability to assess tax expenditures, but little progress has been made in developing an evaluation framework. Generally, neither these agencies nor the Department of the Treasury or the Office of Management and Budget (OMB) has assessed or plans to assess community development tax expenditures individually or as part of a crosscutting review. The Government Performance and Results Act Modernization Act of 2010 (GPRAMA) calls for a more coordinated approach to focusing on results and improving performance. OMB is to select a limited number of long-term, outcome-oriented crosscutting priority goals and assess whether the relevant federal agencies and activities—including tax expenditures—are contributing to these goals. These assessments could help identify data needed to assess tax expenditures and generate evaluations of tax expenditures’ effect on community development. Through the related GPRAMA consultations that agencies are to have with Congress, Congress has a continuing opportunity to say whether it believes community development should be among the limited number of governmentwide goals. While community development was not on the interim priority list, Congress also can urge more evaluation and focus attention on community development performance issues through oversight activities.
In part due to data and methodological limitations, previous studies have not produced definitive results about the effectiveness of the New Markets Tax Credit, Empowerment Zone tax incentives, historic rehabilitation tax credits, and tax aid for certain disaster areas. A key methodological challenge is demonstrating a causal relationship between community development efforts and economic growth in a specific community. As a result, policymakers have limited information about the tax expenditures reviewed, including those that expired after 2011, and ways to increase effectiveness. Congress may wish to provide OMB guidance on whether community development should be among OMB’s long-term crosscutting priority goals, stress the need for evaluations, and focus attention on addressing community development tax expenditure performance issues through its oversight activities. Two agencies questioned the matters for congressional consideration or findings. GAO believes its analysis and matters remain valid as discussed in the report.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the president’s budget, provide a direct link between an agency’s longer-term goals and mission and its day-to-day activities. Annual performance reports are to report subsequently on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. HHS has a broad and challenging mission that touches the lives of Americans from every economic stratum: enhancing the health and well-being of all Americans by providing for effective health and human services, and fostering strong, sustained advances in the sciences underlying medicine, public health, and social services. With a budget of $376 billion and a direct workforce of 59,000, HHS administers some 300 health and social programs, including Medicare, Medicaid, the State Children’s Health Insurance Program, Temporary Assistance for Needy Families, and food and drug safety.
HHS’ programs often require operating components to coordinate with partners such as state, local, and tribal governments; grantees; and contractors. For example, HCFA shares responsibility with states for administering Medicaid—a program that provides health care to certain low-income persons. HCFA also monitors the approximately 50 Medicare contractors that pay claims for the program’s elderly and disabled beneficiaries and that establish local medical coverage policies. SAMHSA administers a grant program to states for treatment and prevention services for persons at risk of or actually abusing alcohol or other drugs. Finally, the Administration for Children and Families partners with states to provide support to needy children and transition their parents to work. This section discusses our analysis of HHS’ performance in achieving its selected key outcomes and the strategies it has in place—including human capital and information technology—for accomplishing these outcomes. We also provide information drawn from our prior work about the credibility of the agency’s performance information. While HCFA’s performance report and plan indicate that it is making some progress toward achieving its Medicare program integrity outcome, progress is difficult to measure because of continual goal changes that are sometimes hard to track or that are made with insufficient explanation. Of the five fiscal year 2000 program integrity goals it discussed, HCFA reported that three were met, a fourth unmet goal was revised to reflect a new focus, and performance data for the fifth will not be available until mid-2001. HCFA plans to discontinue three of these goals. Although the federal share of Medicaid is projected to be $124 billion in fiscal year 2001, HCFA had no program integrity goal for Medicaid for fiscal year 2000. HCFA has since added a developmental goal concerning Medicaid payment accuracy.
One of HCFA’s key Medicare program integrity goals is to pay claims properly the first time. Therefore, HCFA has set the performance goal of reducing improper payments as a priority for Medicare. The central measure of progress for this goal is the rate of improper fee-for-service payments, which is now estimated by the HHS OIG. HCFA will assume responsibility for measuring this error rate in fiscal year 2002. HCFA reported meeting its fiscal year 2000 error rate target of 7 percent with a rate of 6.8 percent. HCFA reported that it did not meet its fiscal year 2000 goal to perform medical reviews of 100 million claims, and it is difficult to determine whether its revised goal is being continued. In its narrative, HCFA explained that it revised its goal to focus on improving the accuracy and appropriateness of medical reviews rather than simply to increase the number of reviews conducted. But later, in a chart describing changes in GPRA goals, HCFA noted that this goal was subsumed in a fiscal year 2001 goal to improve the effectiveness of program integrity activities through successful implementation of this and nine other initiatives contained in the Comprehensive Plan for Program Integrity. The current performance report and plan only cursorily mention the Comprehensive Plan initiatives but indicate that the goal will be reached in fiscal year 2001 and therefore will not be continued. HCFA discontinued two of its fiscal year 2000 performance goals for which it reported making progress. Although data will not be available until mid-2001 on its discontinued goal to decrease the improper payment rate for home health services, HCFA reported “expected achievement” of its 10-percent target as justification for dropping the goal. Nevertheless, this area remains on the HHS OIG’s list of major management challenges. 
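The error rate comparison above, such as the fiscal year 2000 result of 6.8 percent against a 7 percent target, is a simple ratio check; a hedged sketch (the names are illustrative and do not reflect the OIG's actual estimation methodology):

```python
def meets_error_rate_target(improper_dollars: float, total_paid: float,
                            target_rate: float) -> bool:
    """An improper-payment rate at or below the target meets the goal."""
    return (improper_dollars / total_paid) <= target_rate

# Fiscal year 2000: a 6.8 percent rate against a 7 percent target.
print(meets_error_rate_target(6.8, 100.0, 0.07))  # True
```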
HCFA also discontinued, with little explanation, the goal of increasing the ratio of dollars recovered through the audit process to dollars spent on auditing activities. It reported it dropped the goal because of data source concerns (which it did not discuss) but also said it is examining other ways to measure progress on this issue. HCFA nevertheless reported that it exceeded its fiscal year 2000 target for this goal. In commenting on a draft of this report, HHS noted that HCFA has discontinued certain goals because they are ultimately part of the overall error rate measure and do not reflect the accomplishments of HCFA’s overall program integrity efforts. We have previously reported on two general weaknesses that hinder HCFA’s efforts to ensure proper payments of Medicare claims: outmoded information systems and weak financial management procedures. Without effective systems, HCFA is not well positioned for sound financial or programmatic management. HCFA has taken steps to modernize its systems and strengthen its financial management, but many challenges remain. For example, HCFA’s fiscal year 2000 performance report notes progress made in addressing weaknesses related to its financial information, such as improvements in controls over Medicare contractor data. However, HCFA is still not in compliance with the Federal Financial Management Improvement Act (FFMIA) and continues to have material weaknesses related to reliability and documentation of its financial information. HCFA acknowledges that its ability to fully address underlying financial weaknesses remains impaired because it lacks a fully integrated financial management system. Despite repeated instances of noncompliance and the need for an integrated general ledger system to address major financial management weaknesses, HCFA’s performance report does not include specific goals and targets for achieving compliance with FFMIA, a situation we also noted in prior performance plan reviews.
While HCFA’s Chief Financial Officer Comprehensive Plan for Financial Management includes goals for developing an integrated general ledger system, this document and the related costs and resources for implementing the system are not referred to in HCFA’s performance report or plan. HCFA’s strategies for achieving many goals related to minimizing fraud, waste, and error appear to be clear and reasonable. One important HCFA strategy is to establish new goals and revise existing goals that will enhance program integrity efforts. Recognizing limitations in the usefulness of the national Medicare error rate as a management tool, HCFA’s strategy is to develop a subnational error rate. Thus, it established a fiscal year 2001 goal of developing a separate error rate for each Medicare claims contractor and of implementing a provider compliance rate. It is also developing a method for estimating a fraud rate among providers within its contractors’ service areas. Finally, HCFA introduced a fiscal year 2002 goal intended to improve the provider enrollment process by ensuring that only qualified and legitimate providers are permitted to participate in Medicare. Because many of the baselines and measures for these new and revised goals are under development, HCFA’s intended performance regarding them is unclear. For example, HCFA’s fiscal year 2002 plan contains a developmental goal to improve its oversight of Medicare fee-for-service contractors. Its fiscal year 2002 target is to build on progress achieved in fiscal years 1999, 2000, and 2001. Similarly, HCFA’s fiscal year 2001 and 2002 plans include a developmental goal to help states conduct Medicaid payment accuracy studies in order to measure and ultimately reduce Medicaid payment error rates. The fiscal year 2001 target is to establish the feasibility of conducting pilot projects within states and, for fiscal year 2002, to assess the pilots initiated by two states.
With respect to one fiscal year 2001 goal, HCFA notes human capital and information technology limitations but does not discuss strategies for addressing them. Thus, HCFA reports that because of limited resources and funding, it only audits a small percentage of providers regarding credit balance recoveries and that it lacks the database needed to track provider activity in this area. In prior reviews of this key outcome, we noted that HCFA did not adequately address the need for coordination with other organizations. While HCFA includes a brief coordination section in the individual goal narratives, it does not consistently provide details about planned coordination strategies. For example, one coordination strategy reads: “We will continue to work with our partners in conducting our everyday business of ensuring Medicare claims are paid properly.” HCFA’s performance report and plan indicate that it continues to make progress toward its outcome of ensuring that nursing home residents receive high-quality care, but its focus on just three goals under this outcome is surprisingly narrow, given the broad range of its approximately 30 initiatives to improve the quality of care in America’s nursing homes. The lack of recognition of the Nursing Home Oversight Improvement Program initiatives is even more notable because the Senate Committee on Aging requested that HCFA establish benchmarks and track progress in implementing each of these initiatives. In commenting on a draft of this report, HCFA noted that its performance goals are not intended to be a comprehensive list of its performance measures. On the basis of interim data, HCFA reported that the prevalence of restraints used in nursing homes decreased during fiscal year 2000. This decrease represents the second consecutive year in which the goal of reducing the use of restraints was surpassed. Final data were expected after the publication of HCFA’s performance report. 
Regarding its second goal, HCFA reported, for the first time, the prevalence of nursing home residents suffering from pressure sores (bedsores) and established future-year performance targets for reducing their prevalence. HCFA reported making progress toward its third goal of modifying the survey and certification budgeting process to develop national standard measures and costs. Once developed, these standards can be used to more effectively price each state’s survey workload and to develop workload expectations for each state. However, when we compared HCFA’s current and prior-year plans for implementing this new budget methodology, we determined that the modification will likely take HCFA longer to implement than it planned. For instance, although its earlier plan indicated that its price-based methodology would be complete in fiscal year 2001, its current-year plan shows that future-year targets for this goal are yet to be determined. Nevertheless, in fiscal year 2001, HCFA said it will allocate budget increases to states with unit survey hours that do not exceed 15 percent above the combined national average for nursing home surveys. HCFA also eventually plans to use the standards for setting state performance measures to assess the quality of nursing home surveys performed by each state. As we noted in last year’s report, the critical step of assessing states’ performance could begin sooner if HCFA used existing data. For instance, one of HCFA’s regional offices has analyzed data for several years to help evaluate the performance of state survey agencies in its region in areas such as whether states vary the timing of surveys to ensure that nursing homes are unable to predict the date of their next survey. In a report issued in September 2000, we highlighted HCFA’s commitment to begin using data currently available to compile periodic reports on state performance and to supplement these reports with on-site work to assess state performance. 
Data inconsistencies we and the HHS OIG identified raise questions about the accuracy of HCFA’s information on the prevalence of restraint use and pressure sores. However, HCFA did not note any concerns about the reliability of the On-Line Survey and Certification Reporting (OSCAR) System database, nor did it discuss the concerns about minimum data set accuracy raised by the HHS OIG. Our prior reports on nursing home quality have noted the considerable variation across states in the reporting of nursing home deficiencies in OSCAR—a situation that suggests some states may be better than others at identifying problems. The HHS OIG recently found several problems related to the use of the minimum data set, including differences between information on residents contained in the data set and data maintained in the residents’ medical records. We also noted last year that HCFA recognized the need to be cautious with its use of data in the minimum data set until it assesses the data set’s accuracy and completeness. HCFA intends to award a contract this year to begin minimum data set validation work in 2002. In commenting on a draft of this report, HCFA said it found our discussion of this proposed validation contract inconsistent with our finding that it had not discussed concerns about minimum data set accuracy in its GPRA report. We believe that HCFA’s GPRA report should have acknowledged the proposed validation contract since it is directly relevant to a discussion of the reliability of data used to measure progress in achieving goals under the nursing home quality outcome. HCFA also expressed concern about the reliability of the HHS OIG’s findings on minimum data set accuracy. The fact that HCFA has a proposed validation contract suggests that it, too, has concerns about minimum data set accuracy. Despite its narrow focus on only three goals, HCFA’s strategies to achieve them are generally clear and reasonable. 
For example, to decrease the prevalence of pressure sores, HCFA is working to improve surveyors’ ability to assess residents’ conditions by conducting educational seminars for surveyors and adding a new investigative protocol to help surveyors detect pressure sores during a survey. It is also strengthening enforcement activities against homes that fail to prevent avoidable pressure sores. However, HCFA’s discussion of its strategy to ensure that nursing home residents are not unnecessarily restrained is incomplete. It notes that it relies on the state survey and certification process but does not discuss the role of outside groups, which also have sponsored a large number of provider and consumer education projects to demonstrate ways to reduce restraint use. To improve the overall management of the survey and certification process, HCFA’s strategy has been to conduct studies to identify significant differences in survey time and resource utilization among state survey teams. HCFA plans to research these variations, determine which have the strongest relationship to cost and performance, establish standard measures of cost and workload, and develop future survey and certification budgets on the basis of standard prices. HCFA’s new budgeting approach will address the importance of human capital by ensuring that states have an appropriate number of qualified surveyors. Disparities in staffing might have been a contributing factor to deficiencies in state oversight activities. During our 2000 review of HCFA’s implementation of the Clinton Administration’s nursing home initiatives, we noted that a number of states had hired additional surveyors to promote more timely complaint investigations as well as to ensure that nursing homes are inspected an average of every 12 months. Furthermore, although HCFA did not address this in its plan, it has taken steps to improve its information technology systems to enhance oversight of nursing home quality of care. 
For instance, HCFA is in the process of redesigning its OSCAR database to make it easier to generate analytical reports. Similar to last year, the Administration for Children and Families (ACF) reported that it lacked fiscal year 2000 performance data for 18 of the 26 measures associated with programs whose performance is critical in reaching this key outcome. As a result, we were unable to fully assess ACF’s progress. ACF largely attributes missing performance data to the time lag in receiving and validating data reports from its program partners, including states and localities. Specifically, no fiscal year 2000 performance data were reported for the Temporary Assistance for Needy Families (TANF), Child Support Enforcement, Child Care, and Refugee Resettlement programs. The limited performance data that were available in ACF’s report and plan indicate that its progress has been mixed. ACF reported that it achieved its target for four of the eight measures that had fiscal year 2000 performance data, including two measures related to the Developmental Disabilities Employment and Housing programs and measures related to increasing nondiscriminatory access to and participation in HHS programs. Target levels that ACF reported not meeting in fiscal year 2000 include two measures associated with increasing the number of HHS grantees and providers found to be in compliance with Title VI in limited English proficiency reviews and investigations. For measures without fiscal year 2000 performance data, fiscal year 1999 performance data, which are now available, showed that 7 of 16 measures met or exceeded their targets and 2 measures came very close to meeting their targets. ACF may not be positioned to meet some future target levels, which appear to be set beyond what it can reasonably expect to achieve. Some measures, for example, have shown a recent decline, and ACF may continue to miss its targets for these measures in the future. 
These measures include (1) the earning gains rate and the employment retention rate under the TANF program; (2) the number of refugees becoming employed, the number of refugee cash assistance cases closed because the recipient became employed, and the number of 90-day job retentions under the Refugee Assistance program; and (3) the cost-effectiveness ratio of the process to collect medical and financial support under the Child Support Enforcement Program. Other measures, while showing recent improvement, may not meet their targets in the future, including the proportion of states that meet the TANF two-parent work participation rate of 90 percent and the number of children served by Child Care and Development Fund subsidies. In commenting on a draft of this report, ACF suggested that we favored a downward revision of the above targets. This was not the case. We recognize that ACF officials have encouraged programs, such as TANF, to intentionally set ambitious targets for some goals. Our comments were only meant to alert the Congress to the fact that certain goals may not be achieved in the future, information that ACF should have provided to assist congressional decisionmaking. For the examples cited, GAO relied on the multi-year data presented in ACF’s fiscal year 2002 performance plan, not on a single year’s performance as suggested by ACF. Few ACF strategies for achieving this outcome (1) were directly linked to specific performance that fell below fiscal year 2000 or 1999 target levels or (2) were aimed at overcoming ACF management challenges identified by us or the HHS OIG. Because it administers most of its programs in conjunction with states and/or other entities, ACF involves its partners in the establishment of performance measures to help ensure their achievement. Other ACF strategies include, for example, providing technical assistance, disseminating the results of program evaluations and other research, and using rewards and penalties to improve performance. 
Finally, the fiscal year 2002 plan indicates that ACF will continue its ongoing evaluation of various aspects of welfare reform; in particular, ACF plans to evaluate performance measures related to increasing parental responsibility and increasing affordable child care. ACF’s fiscal year 2002 plan offers no concrete strategy to overcome the time lag in receiving and validating performance data from program partners, and it generally does not report on the results of data validation efforts. ACF’s report acknowledges that such time lags make it difficult to provide a comprehensive summary of fiscal year 2000 performance until later in fiscal year 2001. ACF indicated it would develop a plan with HHS and the Office of Management and Budget in fiscal year 2001 for reducing the delay in the availability of state administrative data, where appropriate. Until this plan is developed and implemented, however, the lack of timely data for measures pertaining to helping individuals and families become self-sufficient will continue to impede assessments of ACF’s performance. In commenting on a draft of this report, ACF cited grant-reporting timeframes as a constraint in the timely availability of performance data. The fact remains, however, that ACF offered no concrete strategy to overcome the reporting time lags. By indicating that it will work with the Office of Management and Budget to reduce delays in the availability of administrative data, ACF underscores the need for more timely information. We do point out in our conclusions, however, that the issue of data lags may become less critical as trends emerge from data over longer time periods. ACF broadly discusses its human capital and information technology strategies in its fiscal year 2000 report and 2002 plan. 
ACF reported that it did not achieve an increase in the manager-to-staff ratio—its one human capital performance measure in fiscal year 2000—because of limits on hiring new staff and on reducing the number of managers already on board. However, ACF did meet its one performance measure related to information technology in fiscal year 2000 by replacing an outmoded “audit resolution tracking process” with an updated, integrated system. The fiscal year 2002 plan says little about how ACF intends to use human capital and information technologies to achieve this key outcome. In commenting on a draft of this report, ACF noted the use of human capital strategies such as training employees in marketing, negotiating, and consulting; using and improving automated technology, databases, and electronic communications; and implementing team-based work procedures. In the report, however, ACF does not tie such strategies to specific TANF-related measures with targets that might be set too high. Nor does it indicate how these strategies will help overcome problems, such as the 26 percent shortfall found in fiscal year 1999 in states that meet the TANF two-parent families work participation rate. Similarly, we believe that ACF’s reference to its information technology investments presents only a broad discussion of the role of information technology. We noted in January 2001 that sweeping changes brought about by welfare reform make better information systems and data collection necessary to improve program management and to help HHS measure its state partners’ performance in this area. In particular, we highlighted the importance of addressing the need for states to have access to information across states on individuals’ receipt of welfare to enforce the 5-year TANF time limit. 
Because adequate automated systems are critical to the success of welfare reform, we recommended that HHS work with other federal agencies, including the Departments of Agriculture and Labor, to address issues surrounding state automated data systems. ACF reported that it continues to work on correcting performance information and strengthening partnerships with states and grantees and that it gives high priority to creating mature data collection strategies. ACF also noted that it is working with other HHS components to assess unmet data needs and is committed to increasing its investment in data collection and information systems. The fiscal year 2000 report does not, however, offer targeted strategies for improving states’ automated systems, including the capacity to support enforcement of the 5-year TANF time limit. In commenting on a draft of this report, ACF pointed out that it (1) reported to Congress in 1997 that additional program authority and resources would be required to implement a tracking system to enforce TANF time limits; (2) developed a system that potentially will allow states to track the 5-year limit; and (3) is working with states to identify their automated system needs. The ACF report, however, did not contain adequate information on these strategies that would allow us to comment on the extent to which they address our past concerns. The performance reports and plans of HHS components indicate that they have made mixed progress toward achieving the 15 infectious disease prevention goals associated with this outcome and, in some cases, that data to measure progress are unavailable. The goals, many of which have multiple targets, include reductions in HIV, AIDS, other sexually transmitted diseases, and vaccine-preventable diseases. 
The five HHS components responsible for implementing infectious disease prevention goals are the Centers for Disease Control and Prevention (CDC), HCFA, the Health Resources and Services Administration (HRSA), the Indian Health Service (IHS), and the National Institutes of Health (NIH). Three of these agencies have goals to reduce vaccine-preventable diseases. Provisional data indicate that, for most targeted diseases, CDC met its goal of achieving a 90-percent vaccination rate for 2-year-olds. It provided a reasonable explanation of why the target for the diphtheria, tetanus, and pertussis vaccine was missed by a few percentage points. IHS also came close to meeting its children’s immunization completion rate. HCFA’s goal to increase the rate of fully immunized Medicaid 2-year-olds is state-specific, and measurement methods are still being developed. CDC, HCFA, and IHS generally did not yet have data to assess their progress in increasing pneumococcal pneumonia and influenza vaccination rates among the elderly, but interim progress data were cited. A data lag impedes the measurement of progress toward reducing the incidence of HIV and AIDS. Trend data indicate that CDC and HRSA are making progress in reducing perinatal transmission of HIV. Relying on process descriptions, NIH reports progress toward achieving its goal of developing an AIDS vaccine by 2007. CDC reported mixed progress toward its goals of reducing sexually transmitted diseases. In general, fiscal year 2000 data were not available at the time performance reports were published, but fiscal year 1999 data indicated more progress in different target populations toward reducing some sexually transmitted diseases (congenital syphilis) than others (chlamydia). Data lags are common for many prevention goals, and it may be unrealistic to expect HHS to include complete data at the same time it issues its annual performance report and plan. 
As HHS continues to report its results, we will in turn receive more accumulated trend data to portray its progress. Data verification and validation remain important issues. The HHS agencies with infectious disease prevention goals tend to provide general information on the credibility of their performance measures and of their methodological approaches. For example, HRSA notes how the electronic submission of data, starting in fiscal year 2000, will address the reliability and validity concerns we raised previously. All of these agencies discuss measurement in the context of specific goals, but they do not always discuss why particular goals may be poorly measured. However, CDC’s report broadly discusses the measurement issues relevant to particular prevention goals, such as an account of HIV surveillance efforts. Similarly, HCFA describes the surveys it uses for assessing vaccination rates, including their limitations, and IHS explains the criteria it used to select its prevention indicators. While the components’ strategies for achieving some goals are clear and reasonable, they do not always include information about how they plan to attain unmet goals, and some strategies are not directly tied to goal attainment. With respect to specific goals or groups of goals, CDC often includes an informative discussion of its performance strategies. For example, it summarizes how it plans to eradicate syphilis in the United States. Furthermore, CDC states some of its goals in terms of the strategy to attain them, such as using “screening” and “treatment” in the goal descriptions for sexually transmitted diseases. HCFA includes a detailed discussion of strategies to foster higher immunization rates among seniors, including sponsoring outreach projects in health care venues and implementing routine procedures for providing certain immunizations without direct physician involvement. 
Its discussion of the goal of increasing the percentage of 2-year-olds who are fully immunized focuses primarily on outreach and increasing enrollment as ways to effect the increases. The IHS report explains why it did not achieve certain goals but does not always articulate strategies for overcoming problems that impede progress. IHS noted, for example, that complex immunization schedules and incomplete tracking due to multiple sources of health care were a problem in meeting its goals. IHS’s report does discuss strategies for meeting its goals for childhood immunizations, but for adult vaccination levels it focuses on establishing the appropriate baseline and adjusting the targets rather than on identifying ways of vaccinating more people. Similarly, when CDC does not meet a goal, it does not always discuss specific strategies for attaining that goal. NIH’s strategies also are general rather than goal specific. Thus, its report highlights a number of broad strategies related to its overall mission, such as providing scientific leadership, facilitating the development of health-related products, and collaborating and coordinating with others. When the issues of human capital, information technology, the contributions of others, and program evaluations were included in the GPRA reports and plans of HHS components, their importance in helping to achieve goals was only discussed in general terms. Thus, while both HCFA and HRSA discuss human resources, they do not talk about them in the context of particular infectious disease prevention goals. Furthermore, IHS simply notes that human resource development is an essential component of its performance planning and management and provides some details about its activities in this area. 
Similarly, CDC, HCFA, and HRSA acknowledge generally the importance of information technology as it relates to their missions and goals. In contrast, IHS has specific measures addressing the development of improved automated data capabilities that are designed, in part, to improve performance measurement and GPRA compliance. While HHS components discuss the contributions of others by referring to “partnerships and coordination with other organizations,” IHS specifically notes its efforts to address HIV and vaccine-preventable infectious diseases through an agreement with CDC. Finally, regarding the use of program evaluations prepared by each component or others, the discussions usually are not related to specific infectious disease prevention goals. SAMHSA’s performance plan and report indicate that it has made some progress in achieving this outcome. While it continues to have problems collecting data for about half of its 80 goals, SAMHSA reported that it met or exceeded its target for nearly 90 percent of the goals for which it had data. Delays in reporting performance data were attributed to time lags in data collection, analysis, and reporting by states and the relatively large number of targets being measured. SAMHSA plans to have final data for most performance goals later in 2001. SAMHSA reported that it met many of the substance abuse prevention and treatment goals for which data were available. For example, SAMHSA indicated that it exceeded its target of increasing to 19 the number of states that voluntarily report critical outcome performance measures in Substance Abuse Prevention and Treatment Block Grant applications, as 24 states voluntarily reported at least partial outcome data. It also indicated that the number of states that incorporate needs assessment data increased from 26 in fiscal year 1999 to 34 in fiscal year 2000, meeting its fiscal year 2000 target. 
The incorporation of needs assessment data is critical for prevention planning, resource allocation, and selection of appropriate prevention strategies. Finally, SAMHSA reported that the percentage of states that use funds in each of six prevention strategy areas, which track progress in addressing the substance abuse prevention needs of populations, met the fiscal year 2000 target of 90 percent. SAMHSA gave a credible explanation for not meeting another goal related to the Substance Abuse Prevention and Treatment Block Grant program—continuing dialogue over the appropriateness of the targets—and indicated that the type and form of performance reporting will be decided by fiscal year 2002. SAMHSA’s performance report and plan indicate that it was far less successful in reporting important state-level performance data on the effectiveness of substance abuse treatment services for fiscal year 2000. States were to voluntarily report the percentage of substance abuse treatment clients who had reduced substance abuse and criminal involvement, had a permanent place to live, and were employed. However, fewer states than SAMHSA anticipated reported this information, and some states used different data collection methods to report information, raising questions about the reliability of the data. Consequently, SAMHSA dropped these goals and will develop new ones jointly with the states. Although development of goals will continue, client-related outcome data cannot be collected until SAMHSA complies with statutory requirements under the Children’s Health Act of 2000. The law requires SAMHSA to develop a plan, due by fiscal year 2002, that gives states flexibility in reporting outcome data based on a common set of performance goals, while preserving accountability. SAMHSA anticipates that the new goals will be approved in fiscal year 2003 and that collection and outcome measurement reporting will begin in fiscal year 2004. 
SAMHSA’s performance report does not provide assurance that all information contained in it is credible. Several performance measurements lack discussions of the specific procedures used to verify and validate data in the systems. For example, the description of data sources and validity of data supporting the measurement on treating adult marijuana users notes that the performance data were collected with standard instruments administered to clients by trained interviewers. Another measurement to develop and apply statistical models associated with client retention and outcomes under the Wrap-Around Services program asserts that project records documenting progress of statistical work are expected to be reliable. However, neither performance measurement discusses how and by whom the validity assessments are performed, the strengths and weaknesses of the data, or the external factors that may affect data reliability. In addition, SAMHSA did not report strategies for achieving several planned goals. For example, it cites measurable targets and time frames for achieving goals related to reducing the size of the drug treatment gap; increasing employment and education, and lowering illegal activity for graduates of treatment programs; and reversing the trend in marijuana use among youth. However, it omits details about how its prevention and treatment programs will attain these goals. Furthermore, SAMHSA describes the role of human capital management and information technology strategies but does not tie these activities to specific goals. For example, SAMHSA expects to complete a workforce plan in August 2001 that includes recommendations on ensuring that staffing levels are sufficient to manage program growth, maintain a well-trained workforce, and provide a high-quality work life. It also plans to develop benchmarks for best practices in government and nongovernment human capital management processes and incorporate them into its workforce plan. 
The performance plan also notes that SAMHSA has reorganized numerous functions and programs to streamline operations and conserve program management and other resources. SAMHSA also has invested in information technologies to enhance professional resources. Several communications and data management system improvements recently completed or under way include the redesign or conversion of SAMHSA’s Web site, intranet, and grants management system. Finally, SAMHSA’s report describes coordination with its partners and stakeholders, including the states, CDC, the Department of Veterans Affairs, NIH’s National Institute on Drug Abuse, and the Office of National Drug Control Policy, to determine priorities and help formulate certain goals. FDA’s performance report and plan indicate that it has made significant progress toward achieving this outcome. While performance data were unavailable for nearly 60 percent of its fiscal year 1999 goals, FDA reported results for 17 out of 19 goals in its fiscal year 2000 performance report. FDA reported that it met or exceeded 14 goals, did not meet 3 goals, and lacked outcome data for 2 goals. FDA reported making progress in meeting its goals for both the Human Drug and the Medical Device programs. For the Human Drug program, FDA noted that it had met several goals by streamlining its adverse drug event reporting system, providing the public with improved labeling information on over-the-counter drugs, and initiating collaborations with the scientific community on assessing product quality and manufacturing processes through the Product Quality Research Institute. This research institute is a first-ever partnership between the Human Drug program and industry scientists to conduct research in various aspects of the pharmaceutical development process. The objective is to streamline the drug development and approval process for industry and FDA while ensuring high product quality. 
The Human Drug program reported initiating seven working groups to address key drug regulatory issues, which surpassed its goal of beginning research on at least three projects identified by the Product Quality Research Institute. FDA included updated fiscal year 1999 data in its performance report, which showed that the Human Drug program exceeded most of its goals with respect to reviewing drug applications. Final performance data are not yet available for multiple targets under a goal on reviewing standard new drug submissions and generic drug applications. FDA expects to have these data by early 2002. According to FDA, late reporting of outcomes generally occurs because of time lags for reporting final data for premarket review goals. Regarding the Medical Device program, FDA reported that it exceeded targets for several goals on premarket device approval applications and surpassed a target on inspecting domestic medical device manufacturing establishments (at least 90 percent conformance with FDA requirements). Equally important was that at least 97 percent of mammography facilities met inspection standards, a target met in fiscal year 2000 and the previous fiscal year. The high percentage of facilities meeting standards is expected to enhance the quality of images, leading to more accurate interpretation by physicians and, ultimately, improved early detection of breast cancer. FDA’s report does not always instill confidence that its performance information is credible. For example, for the Human Drug program, it did not discuss the steps taken to verify and validate procedures for tracking the number of pediatric drug studies FDA requested under the Food and Drug Administration Modernization Act of 1997 (FDAMA) or inspections of drug establishments, including medical gas re-packers. 
Similarly, the Medical Device program did not discuss procedures used to verify and validate data in its medical device adverse event reporting system, which, as we reported in our last assessment, has experienced serious data management challenges related to the quality of reporting, processing, and reviewing reports. The report also did not describe procedures that were used to ensure data integrity for other databases, such as the Center for Devices and Radiological Health Field Data systems and the Field Accomplishments Tracking System. FDA’s strategies for achieving this outcome are clear and reasonable. When FDA did not meet a goal, it generally explained why and discussed strategies for improving future performance, including human capital strategies. For example, the Medical Device program did not achieve its goal of inspecting 22 percent of manufacturers of class II and III domestic medical devices in fiscal year 2000. According to FDA, the growth of the device industry, the complexity of devices, and dwindling resources have resulted in lower inspection coverage and higher violation rates. Initially FDA addressed this shortfall by focusing enforcement actions on high-risk devices. However, FDA now believes that resource limitations have put inspection coverage below critical mass, so it is requesting an appropriated funding increase for domestic inspections in fiscal year 2002. Inspection of foreign medical device manufacturers is also reportedly very low, and FDA is proposing a strategy to address the problem. While FDA managed to meet its goal of inspecting 9 percent of foreign manufacturers of class II and III medical devices, it expects the foreign workload to increase and inspection coverage to decline. The Mutual Recognition Agreement is one of the major initiatives introduced to assist in reducing FDA’s workload. 
However, FDA says it cannot maintain foreign inspections or successfully implement the agreement with current resources because it expects European Union assessment bodies will require extensive training. As a result, for fiscal year 2002, FDA is requesting budget authority for foreign inspections to cover the cost of training associated with the Mutual Recognition Agreement. While FDA did not explain why the Medical Device program fell significantly short of its target on developing the Medical Device Surveillance Network system, it did propose a strategy to achieve its target. FDA plans to use fiscal year 2001 funding to increase user facility participation to target levels and extend the program to other types of facilities, such as ambulatory care surgical centers. For the selected key outcomes, this section describes major improvements or remaining weaknesses in HHS’ (1) fiscal year 2000 performance reports compared with its fiscal year 1999 reports, and (2) fiscal year 2002 performance plans compared with its fiscal year 2001 plans. It also discusses the degree to which HHS’ fiscal year 2000 reports and fiscal year 2002 plans address concerns and recommendations by the Congress, us, the HHS OIG, and others. For fiscal years 2001 and 2002, HCFA issued a single document integrating the appropriate performance report with the current year’s revised performance plan and the next year’s plan. With respect to the fraud outcome, neither HCFA’s fiscal year 1999 report nor its fiscal year 2000 annual performance report provided a comprehensive list of the relevant year’s performance goals, targets, and actual performance, making it difficult to fully track goals and measure progress. For example, earlier we discussed the difficulty in tracking HCFA’s goal on medical review. HCFA also acknowledged in both reports that timeliness of data is a challenge in its analysis of performance data. 
For example, data are incomplete for the goal of reducing the percentage of improper payments made under the Medicare fee-for-service program in the fiscal year 1999 report and, as mentioned earlier, for the goal of reducing the improper payment rate for home health services in the fiscal year 2000 report. HCFA has changed some of its performance goals and measures each year, which makes it difficult to track its progress in reducing fraud, waste, and error in Medicare and Medicaid. In both the fiscal year 2001 and 2002 annual performance plans, goals are dropped, revised, subsumed into other goals, and added. Two key weaknesses we identified in prior-year HCFA performance plans are that goals were not consistently measurable and that the strategies and resources needed to achieve these goals were not adequately addressed. These problems continue. In some instances, HCFA is still developing the baselines and appropriate measures. In others, HCFA states generally that the accomplishment of a goal is the target and does not explain in sufficient detail what its strategies are to ensure goal accomplishment. An improvement of the fiscal year 2002 plan over the prior plan is that the goal narratives, which are included, are generally more concise and in many cases include illustrative charts that indicate targets and previous performance. Both performance plans reflect HCFA’s efforts to strengthen coordination with other organizations and to enhance data verification and validations. In some areas of performance, however, sufficient detail is not consistently provided on coordination strategies—a problem we also noted with the prior year’s performance plan. Regarding data issues, HCFA cites and describes data sources for each goal and includes some of the particular data concerns or limitations. 
HCFA’s fiscal year 1999 and fiscal year 2000 annual performance reports clearly and consistently identify the results of its goals, targets, and actual performance with respect to nursing home services. The introduction of graphics in the fiscal year 2000 report was a positive step. While HCFA’s reports have a general discussion of data sources, they do not address known concerns about the validity of data used to measure progress. HCFA’s current plan addressed a concern we raised about the prior plan— the lack of measurable targets for two of the three goals. Thus, it established a baseline and targets for one goal and a fiscal year 2001 target for the other goal. However, as discussed in our June 30, 2000 report and emphasized earlier in this report, we question whether the goals in HCFA’s 2001 performance plan sufficiently address its overall performance in implementing about 30 nursing home quality-of-care initiatives that HCFA has had under way since 1998 under the Nursing Home Oversight Improvement Program. We noted in last year’s report that HCFA’s 2001 performance plan did not provide information on measuring its performance on the 30 initiatives. HCFA’s fiscal year 2002 performance plan is likewise silent on measuring such performance. There is little difference between ACF’s fiscal year 1999 and fiscal year 2000 performance reports. Both reports make effective use of tables to list performance goals, measures, and fiscal year target levels. Changes were made to the measures themselves, which we characterize below. While there is little substantive difference between ACF’s fiscal year 2001 and fiscal year 2002 performance plans in terms of strategies, the most recent plan added an executive summary, which provides a helpful overview of the document. Moreover, in some instances, the strategies in the 2002 plan for improving performance and program coordination are more fully developed. 
For example, the 2002 plan contains more projects for helping states produce desired TANF outcomes and strategies to better utilize human capital and information technology. The plan also discusses technical assistance to, and partnerships between, ACF’s Housing for the Developmentally Disabled program and the state Developmental Disabilities Councils. A strength of the fiscal year 2000 performance report is the inclusion of an updated performance data chart that was not available for the fiscal year 1999 performance report. In commenting on a draft of this report, ACF cited the inclusion of workplans that provide detailed strategies to achieve its targets in the fiscal year 2000 performance report. While not necessarily referred to as a priority workplan, the fiscal year 1999 performance report lists many of the same strategies in identical language. The number and wording of performance measures between the two ACF plans are similar. However, where target levels in the 2002 plan differed, they were generally set at higher levels. In many cases, the targets represented modest increases. Elsewhere, differences represented a significant change over a 1-year period. For example, the Child Care program’s goal of increasing the number of children served by Child Care and Development Fund subsidies rose from 2.1 million in fiscal year 2001 to 2.6 million in fiscal year 2002. The HHS Office for Civil Rights’ fiscal year 2002 performance plan, however, collapsed several objectives and measures into a single objective with fewer measures. Some of the new targets established, however, only provide an indirect indication of compliance and can actually mask the extent to which compliance is, or is not, achieved. In commenting on a draft of this report, OCR noted that it would continue to report tabular information that specifically identifies each of the outputs that make up the new composite measure. 
We remain concerned that the tabular information will be too general to directly assess compliance. ACF’s fiscal year 2002 plan continues the refocused human capital strategy it began in prior years. In light of its shrinking workforce and increasing workload, ACF refocused its human capital measure (manager-to-staff ratio) in fiscal year 2001 toward the development of a highly skilled, strongly motivated, and diversified staff. The single measure for this reorientation is “each ACF staff member participates in at least one Distance Learning or other training opportunity directly related to increasing his/her job skills.” However, the extent to which this measure captures ACF’s progress toward meeting its human capital goals remains to be determined. The fiscal year 2002 plan contains an information technology measure related to ACF’s continued implementation of an electronic grant-making system. The measure is to develop and implement a system that will allow ACF to capture and validate grant information submitted by grantees using the Web. The plan does not specify particular targets, such as a high percentage of applications validated by the system, reduced time to process an application, or grant awards made earlier in the year. ACF’s fiscal year 2002 plan does not fully respond to concerns we raised in our HHS GPRA review last year or those identified by the HHS OIG. We reported last year that ACF did not indicate how it planned to address the data-reporting lag. Although ACF included a somewhat fuller discussion of this matter in the fiscal year 2002 plan, we continue to believe that more specific actions and timelines are warranted. In addition, as discussed in appendix I, ACF makes little mention of how it intends to respond to several OIG recommendations and suggestions related to child support enforcement. 
In commenting on a draft of this report, ACF said that neither Office of Management and Budget nor HHS guidance directed it to respond to concerns expressed by the HHS OIG or GAO. However, our discussions with HHS officials responsible for coordinating the Department’s comments on our report suggest that HHS does take our analysis of its GPRA reports into account and attempts to correct shortcomings we have identified. For each HHS component reviewed with respect to infectious disease prevention, the fiscal year 1999 and fiscal year 2000 performance reports are similar. The agencies employed the same general format to summarize goals, targets, and actual performance and to refer readers to an additional source of information (typically the budget justifications). This summary is generally accompanied by informative narrative that expands on the goal and related performance. For each of the relevant HHS components, the fiscal year 2001 and fiscal year 2002 performance plans are similar in content and organization. However, in both plans, the strategies and resources used to achieve goals were not always adequately addressed. Some components made revisions to or increased the number of their infectious disease prevention goals, and each provided a general discussion of plan changes. When goals or targets were revised, they generally provided rationales for these changes. None of the changes substantially strengthened or weakened the product. CDC, however, improved its fiscal year 2002 performance plan by making extensive revisions that more effectively communicated and linked its goals, measures, and targets with the strategies for achieving them. CDC also addressed most of the data quality concerns expressed by us and the Congress. As noted earlier, HRSA indicated that the electronic submission of data addresses reliability and validity concerns we had raised previously. 
Despite these specific data-quality improvements, the components do not always discuss why particular goals may be poorly measured. SAMHSA’s fiscal year 2000 performance report demonstrates little progress in overcoming a major weakness we noted in its previous report. As in last year’s report, it continues to rely on states to validate the information they reported in block grant applications for their goal related to the 20 percent Substance Abuse Prevention and Treatment Block Grant Prevention Set Aside program. While the current report notes that states must certify the accuracy of block grant data, SAMHSA does not describe states’ procedures for this or how SAMHSA project officers verify the states’ certifications. Another continuing limitation is SAMHSA’s failure to discuss the findings and recommendations of evaluations or how results were used to assess performance. Both we and the HHS OIG have recommended that SAMHSA perform such evaluations. In its fiscal year 2002 plan, SAMHSA continued its practice of highlighting changes and improvements over its prior-year plan. Thus, SAMHSA has adopted a more comprehensive approach to performance management by reporting on performance goals for all significant programs. Two key performance goals were added to its 2002 plan to increase SAMHSA’s ability to assess Substance Abuse and Treatment Prevention Block Grant customer satisfaction. SAMHSA is also working on initiatives to enhance the performance reporting process. These initiatives include establishing a requirement for states to report performance data in SAMHSA grant funding applications, and developing analysis plans for GPRA assessments to better manage programs and measure their effectiveness. However, the 2002 plan does not discuss SAMHSA’s efforts to verify the quality of the performance data reported by states—an observation that we made about the prior-year plan. 
We did find that when goals were added or modified for clarity, SAMHSA described the reasons and the results to be achieved from the change. In addition, when goals were dropped or modified, the 2002 plan stated that either the goal had been completed or revisions had been made to better focus the goal on outcomes. FDA’s performance reports have been consistently well organized, clear, and concise. However, several goals in both the fiscal year 1999 and fiscal year 2000 performance reports lack adequate descriptions of the benefits to public safety and health attained by FDA’s performance. For example, both the Human Drug and the Medical Device programs established goals of ensuring that inspections of domestic medical drug and device manufacturing facilities resulted in timely correction of serious deficiencies in accordance with FDA requirements. However, neither program in either report elaborated on the expected benefits beyond reporting attainment of the statistical goal. In contrast, FDA’s description of the mammography facility performance goal explained that inspections were expected to enhance the quality of images, leading to the more accurate and timely detection of breast cancer. In its most recent plan, FDA has continued to improve its presentation. FDA made strong use of graphics interspersed with narrative to present its strategies and also included a helpful program overview. It also discussed its strategies for accomplishing goals and the consequences of not achieving them—overcoming a weakness we noted in the fiscal year 2001 plan. FDA’s fiscal year 2002 performance plan added new goals and slightly modified or reiterated others. 
New goals included increased inspections of medical device studies, which resulted from a heightened concern about clinical abuses; stepped up foreign inspections and expanded import coverage of all medical products to improve the safety of imported products; and enhanced surveillance of FDA-regulated products to prevent deaths and injuries related to the use of medical products. However, these new goals did not include baselines or concrete targets against which to measure progress. As noted earlier, FDA did not always address concerns we raised last year about the validity of performance information. We have identified two governmentwide high-risk areas: strategic human capital management and information security. Regarding human capital, HHS does not have departmental performance goals related to this high-risk area. Although it is engaged in workforce planning, HHS only briefly outlines this effort. Several HHS components, however, have such goals and measures in their plans, and some cite progress. Similarly, HHS has no departmental goals related to information security, but HCFA has established an aggressive program to address problems in this area. In addition, we identified five other major management challenges facing HHS. The performance reports and plans of HHS components included goals and measures directly related to four of these challenges. We found that HCFA is making some progress in addressing fraud in Medicare and Medicaid and that, while its goals are very narrow, it continues to make progress toward improving nursing home quality. With regard to the outcome of promoting self-sufficiency among the poor, we could not fully assess ACF’s progress because most goals lacked the necessary fiscal year 2000 performance data. For those goals with data, results were mixed. Only FDA’s outcome of ensuring prompt access to safe and effective medical drugs and devices demonstrated significant progress. 
For the fifth challenge—ensuring a well-designed and administered Medicare program—HCFA has a workforce planning goal to reduce the gap between the current and the targeted levels of skills and is using outside assistance to develop a comprehensive database documenting its employee positions, skills, and functions. On its own, HCFA cannot address other aspects of the human capital challenges we identified. In summary, we found that the HHS reports discussed making at least some progress for all seven major management challenges (including the two high-risk areas). Of the seven major management challenges identified by GAO, HHS’ performance plans had (1) goals and measures directly related to six of the challenges, and (2) goals and measures that indirectly related to one of the challenges. It is difficult to fully assess HHS’ progress in fiscal year 2000 toward achieving the outcomes we reviewed because lags in reporting performance data are common for many of its components such as ACF, CDC, SAMHSA, and FDA. In some cases, the delays are associated with the need to obtain performance data from states and local organizations. Some HHS components are working to improve the timeliness of data submitted by others and, in some instances, have reported trend data to show that progress is being made. For example, both ACF and CDC supplied fiscal year 1999 performance data in their current performance reports—data that were not available until this year. It is likely that ACF’s and CDC’s fiscal year 2001 performance reports will include fiscal year 2000 performance data that were not available this year. While it may not always be realistic to expect the availability of complete data at the same time annual performance reports and plans are issued, trends will become apparent as the number of performance reports grows with each passing year. 
The six HHS outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Committee on Governmental Affairs as important mission areas and do not reflect the outcomes for all of HHS’ programs or activities. Given the outcomes selected by the Committee and the management challenges we examined, our review focused on about 150 goals discussed in the reports and plans of 10 components—Administration on Aging, ACF, CDC, FDA, HCFA, HRSA, IHS, NIH, OCR, and SAMHSA. We also reviewed the overall HHS summary, which highlights the reports of its operating components. As agreed, our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from the Office of Management and Budget (Circular A-11, Part 2) for developing performance plans and reports, and previous reports and evaluations by us and others. We also relied on our knowledge of HHS’ operations and programs, our identification of best practices concerning performance planning and reporting, and our observations on HHS’ other GPRA-related efforts. We discussed our review with HHS officials, including the HHS OIG. We identified the major management challenges confronting HHS, including the governmentwide high-risk areas of strategic human capital management and information security, in our January 2001 performance and accountability series and high-risk update. The HHS OIG identified major management challenges confronting HHS in a December 2000 letter to the Congress. We did not independently verify the information contained in the performance reports and plans, although we did draw from our other work in assessing the validity, reliability, and timeliness of HHS’ performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. 
In commenting on a draft of this report, HHS said it found our report “fair, thorough, and comprehensive.” We have addressed specific comments that HHS suggested would increase the report’s accuracy as well as other technical comments in the corresponding sections of the report. HHS’ comments are included as appendix II. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Secretary of Health and Human Services; and the Director, Office of Management and Budget. Copies will also be made available on request. If you or your staff have any questions, please call me at (312) 220-7600. Key contributors to this report were John Brennan, Bonnie Brown, Kim Brooks, Brett Fallavollita, Darryl Joyce, Don Keller, Clarita Mrena, Walter Ochinko, and William Thompson. The following table identifies the major management challenges confronting HHS, which include the governmentwide high-risk areas of strategic human capital management and information security. The first column lists the challenges that we and/or the HHS OIG have identified. The second column discusses what progress, as identified in its fiscal year 2000 performance reports, HHS components have made in resolving the challenges. The third column discusses the extent to which the fiscal year 2002 performance plans of the HHS components include performance goals and measures to address the challenges that we and the HHS OIG identified. We found that the performance reports of HHS’ components discussed the progress in resolving some challenges, but did not discuss progress in resolving the following: abuses in Medicaid payment systems; Medicare equipment and supplies; Medicare payments for mental health services; Medicare prescription drugs; oversight of prospective payment systems; and child support enforcement. 
Of HHS’ 19 major management challenges, its performance plans had (1) goals and measures that were directly related to 10 of the challenges; (2) goals and measures that were indirectly applicable to 2 of the challenges; and (3) no goals, measures, or strategies to address 7 of the challenges.
This report reviews the Department of Health and Human Services' (HHS) fiscal year 2000 performance report and fiscal year 2002 performance plan required by the Government Performance and Results Act of 1993 to assess HHS' progress in achieving selected key outcomes that are important to its mission. It is difficult to fully assess HHS' progress in fiscal year 2000 toward achieving the outcomes GAO reviewed because lags in reporting performance data are common for many of its components, such as the Administration for Children and Families (ACF), the Centers for Disease Control and Prevention (CDC), the Substance Abuse and Mental Health Services Administration, and the Food and Drug Administration. In some cases, the delays are associated with the need to obtain performance data from states and local organizations. Some HHS components are working to improve the timeliness of data submitted by others and, in some instances, have reported trend data to show that progress is being made. For example, both ACF and CDC supplied fiscal year 1999 performance data in their current performance reports--data that were not available until this year. It is likely that ACF's and CDC's fiscal year 2001 performance reports will include fiscal year 2000 performance data that were not available this year. While it may not always be realistic to expect the availability of complete data at the same time annual performance reports and plans are issued, trends will become apparent as the number of performance reports grows with each passing year.
Budget-scorekeeping rules were developed by the executive and legislative branches in connection with the Budget Enforcement Act of 1990. These rules are to be used by the scorekeepers to assure compliance with budget laws. Their purpose is to ensure that the scorekeepers measure the effects of legislation consistent with scorekeeping conventions and specific legal requirements. The rules are reviewed annually and revised as necessary to achieve that purpose. Leases may be of two general types—operating and capital. The Office of Management and Budget (OMB) identifies six criteria that a lease must meet in order to be considered an operating lease rather than a capital lease. Ownership of the asset remains with the lessor during the term of the lease and is not transferred to the government at or shortly after the end of the lease term. The lease does not contain a bargain-price purchase option. The lease term does not exceed 75 percent of the estimated economic life of the asset. The asset is a general purpose asset rather than being for a special purpose of the government and is not built to unique specifications of the government lessee. There is a private sector market for the asset. The present value of the minimum lease payments over the life of the lease does not exceed 90 percent of the FMV of the asset at the beginning of the lease term. If a lease does not meet all six criteria above, it must be treated as a capital lease for budget-scoring purposes. For a capital lease, the net present value of the total cost of the lease is scored as budget authority in the year budget authority is first made available for the lease. For GSA operating leases, only the budget authority needed to cover the annual payment is required to be scored. As we previously reported, in general, capital facilities should be funded up front at the time the federal government enters into the commitment. 
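As a rough illustration of how these rules combine, the six criteria can be sketched as a simple all-or-nothing check, where failing any one criterion makes the lease a capital lease. The function and parameter names below are hypothetical; this is an illustrative sketch, not OMB's or GSA's actual scoring methodology:

```python
# Hypothetical sketch of the six operating-lease criteria in OMB Circular A-11,
# Appendix B. Names and example figures are invented for illustration only.

def classify_lease(ownership_stays_with_lessor, has_bargain_purchase_option,
                   term_years, economic_life_years, is_general_purpose,
                   has_private_market, pv_min_payments, fair_market_value):
    """Return 'operating' only if all six criteria are met; otherwise 'capital'."""
    criteria = [
        ownership_stays_with_lessor,                  # 1. no transfer of ownership
        not has_bargain_purchase_option,              # 2. no bargain-price purchase option
        term_years <= 0.75 * economic_life_years,     # 3. term <= 75% of economic life
        is_general_purpose,                           # 4. general purpose asset
        has_private_market,                           # 5. private sector market exists
        pv_min_payments <= 0.90 * fair_market_value,  # 6. PV of payments <= 90% of FMV
    ]
    return "operating" if all(criteria) else "capital"

# A 20-year lease of a building with a 30-year economic life passes the 75%
# test (20 <= 22.5) but can still fail the 90% FMV test:
print(classify_lease(True, False, 20, 30, True, True,
                     pv_min_payments=9_500_000, fair_market_value=10_000_000))
# prints "capital" because $9.5 million exceeds 90 percent of a $10 million FMV
```

Under this sketch, a single failed criterion is enough to trigger capital-lease treatment, which is why the scoring consequence (scoring the net present value of the total lease cost up front) can be so significant.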
In June 1991, GSA wrote to OMB generally describing the policies and procedures it would follow to ensure the proper implementation of the new budget scoring rules. These rules were incorporated in OMB Circular A-11. Appendix B of the circular contains the scoring rules for lease-purchases and leases of capital assets. In March 1992, GSA wrote to OMB saying that after reviewing its nonprospectus inventory, as well as OMB policies and procedures, GSA concluded that nonprospectus leases should be considered operating leases for scoring purposes without the necessity of a case-by-case determination. In this letter, GSA stated that there was no practical way to implement a policy of determining whether each nonprospectus lease met the criteria for being considered an operating lease without severely damaging its ability to meet client-agency needs. GSA considered this view consistent with OMB’s intent, as well as an operational necessity. In April 1992, GSA issued guidance on lease scoring in which it stated that all nonprospectus leases are to be considered operating leases unless the lease is a lease-purchase, the lease contains a nominal or bargain purchase price, or the lease is on government-owned land. All nonprospectus leases that met one of these exceptions were to be scored as capital leases by the regions. All prospectus-level leases were to be scored at GSA’s central office. In October 1998, GSA announced it was no longer following the policy of considering most nonprospectus leases as operating leases. Since then, according to GSA officials, GSA has required regional offices to apply the appropriate criteria to all prospectus and nonprospectus leases and that copies of the resulting scoring be retained in the regionally maintained lease file. GSA headquarters is to review the scoring of all prospectus-level leases. 
Two of the six scoring criteria used to determine an operating lease concern the term of a lease: that the lease term not exceed 75 percent of the estimated economic life of the asset and that the present value of the minimum lease payments over the life of the lease not exceed 90 percent of the FMV of the asset at the beginning of the lease term. According to GSA officials, GSA’s leases generally meet the first of these two criteria. If GSA rents new space, it meets this criterion because it only has 20-year leasing authority and tax law specifies that a new building’s economic life is longer than 30 years (75 percent of 30 years is 22.5 years, which exceeds GSA’s 20-year limit). If GSA rents older space, it generally requires the space to be upgraded, which extends the building’s estimated economic life, thereby meeting this criterion. Thus, the remaining criterion that could affect the lease term is that the present value of the minimum lease payments over the life of the lease not exceed 90 percent of the FMV of the asset at the beginning of the lease term. For example, if the present value of the minimum lease payments for a 20-year lease exceeds 90 percent of FMV, shortening the term reduces the present value of the minimum lease payments while the FMV of the asset remains the same. Lowering the ratio of the present value of the minimum lease payments to FMV in this way allows a lease to meet this scoring criterion. However, if a lease does not meet any one of the other four scoring criteria, the lease would be a capital lease no matter what the term. Six of GSA’s 11 regions identified 12 projects or leases for which the scoring process had affected the term of the lease. In one other region, according to a GSA official, GSA thought that the terms of about eight other leases had been affected in the last 2 years, but the officials could identify neither those leases nor the impact of budget scoring on the lease term. 
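The effect of shortening a lease term on the 90-percent test can be illustrated with invented numbers. The rent, discount rate, and FMV below are assumptions for illustration, not figures from any actual GSA lease:

```python
# Hypothetical illustration of the 90-percent test: shortening the lease term
# lowers the present value of the minimum lease payments while the asset's
# FMV is unchanged. All figures below are assumed, not actual GSA data.

def pv_of_payments(annual_payment, term_years, discount_rate):
    """Present value of a level annual payment stream (ordinary annuity)."""
    return annual_payment * (1 - (1 + discount_rate) ** -term_years) / discount_rate

fmv = 10_000_000       # assumed fair market value at the start of the lease
annual_rent = 850_000  # assumed minimum annual lease payment
rate = 0.06            # assumed discount rate

for term in (20, 15):
    pv = pv_of_payments(annual_rent, term, rate)
    status = "operating" if pv <= 0.90 * fmv else "capital"
    print(f"{term}-year term: PV of payments = ${pv:,.0f} -> {status}")
```

With these assumed figures, the 20-year term fails the 90-percent test (its present value exceeds $9 million) while the 15-year term passes it, mirroring the kind of term reduction GSA described for the Department of Transportation lease.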
GSA officials from the other four regions said they could not identify any projects affected by budget scoring. Only 2 of the identified 12 projects—a lease for the Immigration and Naturalization Service and a lease for the Secret Service—were among the 39 prospectus-level projects reviewed, and none came from the 102 lease files we reviewed. According to GSA officials, other factors, such as the agency or the market, determined the term of these other leases. Table 1 lists the leases or lease projects that we or GSA identified as being affected by scoring. According to GSA officials, during the planning for the Department of Transportation lease, it was realized that due to the rental rates in the District of Columbia, a 20-year lease would probably not satisfy the 90-percent scoring criterion. To address this issue, GSA reduced the lease term to 15 years, estimating that the present value of the minimum lease payments for a 15-year lease would not exceed 90 percent of the FMV. According to officials, the SSA lease was originally submitted as a new construction project but was not approved. GSA then decided to pursue it as a 20-year build-to-suit lease, but upon review it was determined that the lease would be a capital lease because it did not satisfy the 90-percent scoring criterion. At OMB’s direction, the lease was awarded as a 10-year lease because OMB thought that SSA space needs might be reduced in the future because of automation. Four factors limited the identification of leases affected by budget scoring, according to GSA officials. First, GSA did not begin determining whether each nonprospectus lease met the scoring criteria for being considered an operating lease until about October 1998. GSA issued guidance in 1992 that stated there was no practical way to implement a policy of determining whether each nonprospectus lease met the criteria for being considered an operating lease without severely damaging its ability to meet client-agency needs.
Nonprospectus leases were to be considered operating leases unless the lease was a lease-purchase, the lease contained a nominal or bargain purchase price, or the lease was on government-owned land. Thus, it is unknown if nonprospectus leases would have been affected by scoring between 1992 and 1998. Second, prospectus-level leases were scored in headquarters until September 1998, and scoring records were not kept in the lease files that are maintained by GSA’s regional offices. Third, GSA headquarters does not maintain documentation on whether the scoring process affected the lease term. According to a headquarters official, although GSA kept copies of scoring for prospectus leases, the records do not show whether the term was directly affected by scoring. Fourth, according to GSA officials, budget-scoring rules affect an unknown number of leases because if staff believe a project will be affected by budget-scoring rules, they reduce the term to avoid the potential scoring conflict. However, they do not formally score the lease and do not use the scoring rules as a tool to identify the best term. As of October 1998, GSA’s regional offices were to score and document the scoring of both prospectus and nonprospectus leases, according to officials. However, the officials said that the files will contain only the final scoring sheets and not preliminary runs that might identify situations where a lease term was adjusted in order for the lease to score as an operating lease. We could not determine the actual monetary impact of reducing the lease term. However, we found two leases for which GSA requested alternative lease costs: one comparing 10-year and 20-year costs and one comparing 15-year and 20-year costs. In addition, GSA provided a consultant’s report showing the difference between 10- and 20-year lease costs for another project, and the SEC lease included 15- and 20-year lease costs. GSA did not identify these two lease terms as being affected by budget scoring. However, the SEC lease term was affected by scoring.
According to GSA officials, GSA does not generally seek comparisons of short- and long-term lease costs in the solicitation process. Also, GSA officials stated that the use of a 20-year lease is only appropriate in certain situations, such as if the agency has a long-term need and the federal presence is large enough in the market to backfill the space with other federal employees if the needs of the requesting agency change over time. Also, they pointed out that in most cases it would be less costly to construct a federal facility to meet a long-term need than it is to lease. We previously reported that construction was usually the least costly approach for meeting long-term space needs. Further, GSA pointed out that other factors, such as market, location, and the agency’s desires, affect the selection of the lease term. While reviewing files, we identified two leases for which GSA had solicited offers for both 10- and 20-year and 15- and 20-year leases. The first lease was for a 20-year requirement structured as either a 10-year lease with a 10-year option or a 20-year lease. The 20-year lease term was 3.24 percent less expensive per NUSF than the 10-year lease that was awarded. This lease was awarded as a 10-year lease with a 10-year option because the agency’s long-range plans were unknown. Eight final offers showed that the 20-year lease ranged from 0 to 12.9 percent less expensive per NUSF than the 10-year lease. However, for two other final offers the 20-year lease ranged from 0.06 percent to 1.19 percent more expensive per NUSF than the 10-year lease. The second lease was for a 20-year requirement structured as either a 20-year lease with cancellation rights at 15 years or a straight 20-year lease. The 20-year lease term was 5.56 percent less expensive per NUSF than the 20-year lease with cancellation rights at 15 years for the offer selected for award. The contract was awarded as a 20-year lease with cancellation rights at 15 years.
It is not clear from the file why this option was chosen. Four final offers showed that the 20-year lease ranged from 5.56 percent to 7.75 percent less expensive per NUSF than the 20-year lease with cancellation rights at 15 years. However, for two other final offers the 20-year lease was 5.99 percent and 7.97 percent more expensive per NUSF than the 20-year lease with cancellation rights after 15 years. Furthermore, a consultant’s report on locating an FBI building in Texas showed that a 20-year lease was 32 percent less expensive per square foot than a 10-year lease. The consultant pointed out that the cost difference might be due to the specialized nature of the FBI building. The SEC lease project had offers ranging from 10 to 20 years. For the successful offer, the 20-year lease costs and the 15-year lease costs were the same per RSF. Three other final offers showed that the 20-year lease costs ranged from 1.3 percent to 4.1 percent less expensive per RSF than the 15-year lease costs. One final offer included 10-year lease costs. This offer showed that 20-year lease costs were 8.8 percent less expensive per RSF than 10-year lease costs. Further, 15-year lease costs were 7.4 percent less expensive per RSF than 10-year lease costs. The cost differences between lease terms identified in these three examples are not projectable to other leases because other factors, such as market conditions (whether rental rates are high or low), affect the cost of a lease. In testimony before the Subcommittee on Public Buildings and Economic Development, House Committee on Transportation and Infrastructure, on May 15, 1997, a private industry real estate official testified that a 20-year lease term could have annual rental rates as much as 33 percent less expensive than a 10-year lease and 13 percent less expensive than a 15-year lease. Also, he testified that a 15-year lease term can be as much as 23 percent less expensive than a 10-year lease.
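The percentage comparisons above are simple rate differences per square foot. As an illustration only, using hypothetical rental rates that are not drawn from the actual offers, the per-square-foot saving of a longer term can be computed as:

```python
def percent_less_expensive(short_term_rate, long_term_rate):
    # Percentage by which the long-term rate is less expensive per square
    # foot than the short-term rate (negative if it is more expensive).
    return (short_term_rate - long_term_rate) / short_term_rate * 100

# Hypothetical: $30.00/sq ft for a 10-year term vs. $29.03 for 20 years.
print(round(percent_less_expensive(30.00, 29.03), 2))  # 3.23
```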
He further stated that renewal options in a lease are more advantageous than having to renegotiate a new lease for the same location. While GSA officials agree that a long-term lease generally has a lower cost than a short-term lease, they could not quantify the difference between a 20-year or 15-year lease and a 10-year lease. Also, they stated that it is generally less costly to construct a federal facility to meet a long-term need—20 years or more—than it is to lease. Furthermore, they pointed out that other factors, such as the desires of the agency and the market, must be considered along with cost. For nine GSA lease acquisitions, we previously reported that construction would have been less costly in eight of the nine cases, with the cost differences for construction ranging from a negative $0.2 million to a positive $48.1 million. For 11 cities throughout the country, we reported that to build a hypothetical 100,000 square foot office building versus obtaining a 20-year lease, the estimated cost savings for construction versus leasing ranged from $0.3 million in St. Louis, MO, to $14 million in Washington, D.C. Also, we previously reported that the budget scoring rules favor leasing and that one option for scorekeeping that could be considered would be to recognize that many operating leases are used for long-term needs and should be treated on the same basis as purchases. This would entail scoring up front the present value of lease payments covering the same time period used to analyze ownership options. Applying the principle of up-front full recognition of long-term costs to all options for satisfying long-term space needs—purchases, lease-purchases, or operating leases—is more likely to result in selection of the most cost-effective alternative than the current scoring rules would.
According to GSA officials, while scoring does affect the term of some leases, the term of most leases is determined by various factors other than budget scoring, such as the type of space (existing or build-to-suit), the lease term desired by the agency, rental market conditions, and the location of the structure. The importance of each variable may be different for each lease. GSA officials said that for existing space, lease terms do not usually exceed 10 years. This has been a standard practice for some time. If the requirement is for build-to-suit space, then the term of the lease may have to be longer than 10 years to accommodate the lessor’s ability to finance the building. It is these build-to-suit leases that are most likely to be affected by scoring because the lessor must have a longer term lease to get financing for a new structure. The lease term to which the agency is willing to commit is another important factor. GSA officials stated that some agencies told GSA that they only have authority to commit to a maximum of a 10-year lease. Other agencies only want leases of 10 years or less because of changes occurring within the agency, such as downsizing or consolidation. One example, according to GSA officials, is the Internal Revenue Service, which because of downsizing does not want to sign a lease longer than 10 years. Rental market conditions also affect a lease’s term. GSA does not want to commit to a long-term lease when market rent is considered high. Conversely, if market rent is low, GSA will consider a longer term lease, according to officials. An example is a lease for the Customs Service in Seattle, WA, for which GSA did not want a long-term lease because current rental rates were high. Location becomes an important factor because GSA is required to take space back from an agency with only 120 days’ notice.
So in areas with a limited federal presence, GSA does not want to commit to leases where the space cannot be easily backfilled with other federal agency employees, according to GSA officials. For example, in small towns, GSA would not want to commit to a lease term longer than an agency wanted when that agency is the only federal tenant in the location, because GSA would not be able to find another federal tenant for the space. Although efforts to address budget-scoring rules did result in shorter term leases in some cases, we could not determine the total number of leases whose terms were actually affected by budget scoring because of GSA’s documentation process for scoring leases. Further, while a shorter term lease can be more costly than a longer term lease, we could not determine the actual overall monetary impact of shorter lease terms because GSA does not generally seek comparisons of short- and long-term lease costs in the solicitation process. In addition to having some effect on the lease term, our previous work has shown that budget scoring can affect the government’s decision whether to construct or lease a facility. Also, we have previously reported that the budget-scoring rules have the effect of favoring leasing and that one option for scorekeeping that could be considered would be to recognize that many operating leases are used for long-term needs and should be treated on the same basis as purchases or construction. Because of the overall effect budget scoring appears to be having on the acquisition of real property, we plan to address the effects of budget scoring on real property acquisition as part of a governmentwide review of real property management we recently initiated.
To identify leases affected by scoring, we reviewed OMB’s guidance on scoring leases (Circular A-11, Appendix B, and Circular A-94), interviewed GSA officials in headquarters and all 11 regions, and reviewed 102 active lease files with terms from 10 to 19 years and 100,000 RSF or more in GSA regions 3, 7, 8, and 11, which were the 4 regions with the most leases meeting those criteria. We dropped 8 lease files from our original selection of 110 files because the files could not be located during our visit or had been moved to other locations prior to our visiting the region. We did not verify the accuracy of the data used to select the lease files. To determine the monetary impact of scoring on the lease term, we reviewed congressional testimony, previous GAO reports, 102 GSA lease files, and 8 final offers for the SEC lease; and we interviewed officials in GSA headquarters, all 11 GSA regions, and SEC. To identify other factors influencing lease terms, we reviewed 102 active GSA lease files and interviewed GSA headquarters and regional officials. We conducted our review at GSA and SEC between October 2000 and July 2001 in accordance with generally accepted government auditing standards. We obtained comments on a draft of the report from GSA and SEC. On August 10 and 14, 2001, we received written comments from the Associate Executive Director, SEC, and the Commissioner of GSA’s Public Buildings Service (PBS), respectively. The SEC official provided clarifying information, which has been included in the report. The PBS Commissioner basically agreed with us that budget scoring is affecting the lease term and provided additional comments, which he believes support this position. The first comment stated that seasoned leasing specialists said that the use of 20-year leases had declined since the Congress passed the Budget Enforcement Act of 1990.
While this may be true, GSA did not have documentation on the impact of budget scoring on the lease term, other than for the cases cited. Also, it is possible that other factors, such as market conditions, contributed to the decline in the use of 20-year leases. Second, GSA stated that the National Capital Region sets the term of all the above-prospectus leases it submits as part of its capital plan at 10 years, except in certain cases, to avoid budget-scoring problems. For the fiscal year 2000 and 2001 prospectus-level leases that we reviewed, this is accurate. However, prior to fiscal year 2000, both the Patent and Trademark Office and the Department of Transportation leases were submitted for longer terms, 20 and 15 years, respectively. Third, GSA said that while options to renew a lease were advantageous, it did not generally seek them for leases with 10-year terms because options are scored as part of the 90-percent scoring criterion and could result in a capital lease. While GSA is correct that OMB guidance requires options to be considered in scoring leases, there is an exception to this rule. According to OMB’s guidance, agencies do not have to include an option in budget scoring if exercising the option would require additional legislative action. Lastly, GSA raised the issue of short-term leases resulting in increased rental costs in some cases because they lead to shorter amortization periods and higher mortgage payments for lessors who use federal leases as collateral for financing. While the report shows that in certain cases shorter term leases are more expensive than long-term leases, we did not examine whether this increased cost was driven by shorter amortization periods and higher mortgage payments. GSA also made some technical comments, which we have reflected in the report where appropriate. We have included GSA’s written comments in appendix I.
We are sending copies of this report to the Chairmen and Ranking Minority Members of congressional committees with jurisdiction over GSA and SEC. We are also sending copies to the Administrator, GSA, and the Chairman, SEC. Copies will also be made available to others upon request. Key contributors to this report were Ronald L. King and Thomas G. Keightley. If you have any questions, please contact me or Ron King on (202) 512-2834.
This report responds to a concern that budget-scoring restrictions were forcing the General Services Administration (GSA) to rely on shorter term leases that increase the costs to the Federal Buildings Fund because their per-square-foot costs are greater than those of longer term leases. Budget-scorekeeping rules are used by the scorekeepers to ensure compliance with budget laws and to ensure that legislation is consistent with scorekeeping conventions and specific legal requirements. The rules are reviewed annually and revised as necessary to achieve those purposes. The way in which budget-scoring rules were implemented affected the lease or lease project term of at least 13 of the 39 federal agency leases GAO reviewed. Because GSA officials do not generally seek comparisons of long-term versus short-term leases in the solicitation process, GAO could not determine the overall monetary impact of budget scoring on the lease term. However, GAO identified three isolated cases that had comparisons of long-term versus short-term leases in the solicitation process, and, in each case, the price per net usable square foot was lower with the longer term lease. GSA officials said that while budget scoring affects the term of some leases, the term of most leases is determined by various factors, either individually or in combination, such as rental market conditions, location, and the term desired by the agency.
To achieve its primary debt management objective of financing the federal government’s borrowing needs at the lowest cost over time, Treasury issues debt through a regular and predictable schedule of auctions across a wide range of securities. Most of the securities that are issued to the public are marketable, meaning that once the government issues them they can be resold by whoever owns them. Marketable debt consists of bills, notes, bonds, Treasury Inflation-Protected Securities (TIPS), and, since January 2014, FRNs (see figure 1). Currently, Treasury issues bills with maturities ranging from a few days to 52 weeks; notes with maturities of 2, 3, 5, 7, and 10 years; bonds that mature in 30 years; TIPS with maturities of 5, 10, and 30 years; and FRNs that mature in 2 years. We analyzed the potential cost to Treasury of issuing 2-year FRNs and found they are likely to have interest costs lower than 2-year fixed-rate notes and not substantially different from 13-week bills. As a result, FRNs will likely result in savings over the long run, helping Treasury achieve its goal of borrowing at the lowest cost over time. Our simulations found that interest costs and savings varied depending on the security to which the FRN is compared, how the FRN is treated by investors, and the interest rate environment. We found that the cost of 2-year FRNs was generally less than that of fixed-rate 2-year notes, but that it could be either more or less than the cost of 13-week bills depending on assumptions regarding investor treatment of the FRN. In addition, in all cases and in all environments, savings tended to be greater—or added costs lower—under a model that sets the FRN spread based on its weekly reset than under an alternative model where the FRN spread is influenced by its final maturity of 2 years. We also found that, while issuing 2-year FRNs generally results in cost savings, they may be more costly than other alternatives in certain rate environments, such as rising rate environments.
Prior to issuance of the first FRN, Treasury conducted its own analysis of the potential cost of FRNs. Treasury’s analysis found that from 1982 to 2010, issuance of 2-year FRNs would have led to cost savings compared to fixed-rate notes. Treasury’s analysis, however, (1) compared the cost of 2-year FRNs only to 2-year notes and not to other alternatives, and (2) assumed a fixed spread of 15 basis points (or 0.15 percentage points). (Floating rate note (FRN) index rate: the rate to which the interest rate of an FRN is indexed. Treasury’s 2-year FRN is indexed to the rate from Treasury’s most recent 13-week bill auction.) To estimate the potential cost of FRNs to Treasury, we compared the cost of hypothetical 2-year FRNs both to the cost of 2-year fixed-rate notes and to series of rolling 13-week bills, using historical auction data from January 1980 to March 2014 (see figure 5 below). We made these comparisons using two models, each with different assumptions about the spread over the index rate that Treasury would pay. We also compared the cost of FRNs in various interest rate environments. Although it is uncertain what Treasury would issue in the absence of FRNs, Treasury has indicated that, at least initially, the FRNs would be a substitute for Treasury bill issuance. Both in interviews and in our survey of large holders of Treasury securities, market participants also indicated that they see the FRNs as a substitute for bills. However, Treasury has also indicated that it intends to reduce the share of debt funded by bills in order to increase its WAM. Without the 2-year FRN, Treasury might have increased the WAM by the same amount by instead increasing its issuance of 2-year fixed-rate notes, making them an appropriate benchmark with which to compare the costs of the FRNs. Our analysis used two models for how the FRN spread—the spread between the index rate and the interest rate for the FRN—may vary over time.
The FRN spread is set at auction and is expected to vary in response to changes in the level and volatility of interest rates. Because there is uncertainty about how market participants will price the FRN relative to other products, we considered two different models of the response of spreads to changes in different interest rates: a “maturity-based” model, where the spread estimate is influenced by the 2-year term of the FRN, and a “reset-based” model, where the spread estimate is derived from the weekly reset term, which determines the nature of the interest rate risk faced by investors in FRNs. These two models are designed to approximate the range of potential spreads at which the 2-year FRN would have been expected to be auctioned in historical interest rate environments. For more details on our models for FRN cost, including other models we considered, see appendix I. Because interest rate environments vary substantially over time, we also compared how the cost of FRNs may vary based on changes in the level and volatility of interest rates. Although these views are not generalizable, market participants and experts we interviewed expect the demand for FRNs to vary based on the interest rate environment. In addition, 58 of 62 respondents to our survey indicated that FRNs would be more attractive when interest rates are expected to rise; 49 of 62 indicated that FRNs would be less attractive when interest rates are expected to fall. We found that compared to 2-year fixed-rate notes, FRNs are likely to result in interest savings to Treasury regardless of how the FRN is treated by market participants; however, compared to 13-week bills, they could result in either savings or additional costs (see figure 6). Compared to 2-year fixed-rate notes, 2-year FRNs historically would have saved between $8.1 million in annual interest costs per billion in issuance under our maturity-based model and $13.6 million under our reset-based model.
Compared to 13-week bills, the FRN would have resulted in annual savings of $2.4 million per billion of issuance under our reset-based model but additional annual costs of $3.1 million per billion of issuance under our maturity-based model. In addition to examining estimates of the relative savings and costs from issuing 2-year FRNs, we also analyzed the share of cases in our simulations where FRNs save or add to interest costs across different interest rate environments (see figure 7). We found that compared to 2-year fixed-rate notes, the 2-year FRN would have resulted in savings in 82 percent of cases under our reset-based model and in 72 percent of cases under our maturity-based model. Compared to 13-week bills, 2-year FRNs would have resulted in savings in 85 percent of cases under our reset-based model but added to costs in 81 percent of cases under our maturity-based model. We also found that the interest savings or added costs from 2-year FRNs varied with the interest rate environment regardless of how the FRN is treated or whether it is being compared to 2-year fixed-rate notes or 13-week bills. Relative to 2-year fixed-rate notes, FRNs tended to be more costly in rising rate environments compared to other environments. Compared to 13-week bills, FRNs in rising rate environments tended to be more costly (in the case of our maturity-based model) or to produce less savings (in the case of our reset-based model). The extra cost or reduced savings in rising rate environments, however, tended to be less than the savings in steady and falling rate environments.
As shown in figures 6 and 7 above, under our maturity-based model in rising rate environments, 2-year FRNs were less costly than 2-year fixed-rate notes in only 24 percent of cases and, on average, increased Treasury interest costs by 0.48 percentage points, resulting in $4.8 million in annual interest costs per billion in issuance; and in falling rate environments, 2-year FRNs were less costly than 2-year fixed-rate notes in all cases and, on average, reduced interest costs by 2.07 percentage points, resulting in $20.7 million in annual interest savings per billion in issuance. We also analyzed the potential costs and savings from FRNs in environments with different levels of rate volatility and found that, at all levels of volatility, there was little variation between our two models. In periods of low, moderate, and high volatility, 2-year FRNs tended to produce savings compared to 2-year fixed-rate notes, but compared to 13-week bills, could produce either costs or savings, depending on which model is used. In periods of extreme (i.e., higher than “high”) volatility, FRNs produced savings under both models. For more information on the results of this analysis, see appendix I. Factors other than interest rates may affect demand for FRNs, and Treasury could realize additional savings from FRNs due to these elements of technical demand. Both of the models we used to estimate the cost of FRNs assume the FRN spread is based solely on the relative value of FRNs compared to other Treasury securities. However, both our interviews with market participants and our survey responses indicate that demand for FRNs is also likely to be affected by technical factors, such as investment guidelines or regulatory requirements to hold certain types of investments. For example, Treasury officials and market participants told us that Treasury structured the FRNs in a way that makes them especially attractive to money market investors. 
To meet investment guidelines and regulatory requirements, these funds tend to hold mostly short-term securities like Treasury bills and, because their interest rates reset frequently, FRNs. This creates demand for Treasury FRNs that is less sensitive to the relative value of the FRN. This generally would lower Treasury’s costs since some investors would be willing to accept a lower interest rate at auction. (Technical demand: demand driven by factors such as investment guidelines or regulatory requirements; it is less sensitive to the relative value of the security.) Our survey results confirm that technical factors affect the attractiveness of FRNs for at least some investors. Of the 62 survey respondents, 27 said that FRNs’ consistency with client or fund investment guidelines makes them attractive to a great or very great extent. Results of our survey also show that 2-year FRNs are more attractive because they conform to regulatory requirements for certain sectors. Six of the seven money market mutual fund managers that responded to our survey indicated that conformance with limits on their holdings makes the FRNs attractive to a great or very great extent. Similarly, five of the nine retail and commercial banks that responded to our survey indicated that conformance with new capital requirements made the FRNs attractive to a great or very great extent. Treasury’s costs could be increased if Treasury FRNs have a higher liquidity premium than other Treasury securities. Debt issuers, including Treasury, generally have to pay a liquidity premium on less liquid products—products that cannot be easily bought and sold in large volumes without meaningfully affecting the price—to compensate investors for the possibility that they might not be able to sell the security as readily as a more liquid product. A liquidity premium on FRNs that is greater than the premium on other Treasury securities could increase costs compared to our estimates.
Although Treasury securities are generally considered very liquid and have very low liquidity premiums, market participants we interviewed said that FRNs might be less liquid than bills—Treasury’s most liquid product—but more liquid than TIPS—its least liquid product—because (1) investors are more likely to buy and hold rather than to trade FRNs, and (2) FRNs are expected to have a smaller relative market size. Several market participants said that liquidity is likely to be lower initially and to improve as Treasury issues more FRNs. The results of Treasury’s first three FRN auctions were within the range estimated by our models. At the first FRN auction in January 2014, FRNs were auctioned with an FRN spread of 0.045 percentage points. At the February and March 2014 auctions, FRNs were auctioned with discount margins of 0.064 and 0.069 percentage points, respectively. The actual auction results appear linked to the spreads predicted by our reset-based model. In each of the three auctions, the actual auction results equaled the spread predicted by our reset-based model plus a small and consistent premium. One element of the design of the Treasury 2-year FRN is that it is what the market refers to as a “mismatched floater.” The difference (i.e., the mismatch) between the term of its index rate (13 weeks) and the length of its reset period (stated as daily, but effectively weekly) may introduce the risk of price instability on the reset date that is not typical of most floating rate securities. This is particularly the case if market participants treat the FRNs more like series of rolling 1-week bills. This might affect demand for the product in certain interest rate environments and, if so, could raise Treasury’s borrowing costs. In a Treasury FRN auction, bids are made in terms of a desired discount margin.
The highest accepted discount margin in the initial auction for a given FRN (which we refer to as the FRN spread) becomes the spread for that FRN, and bidders pay the full value of the FRN. At subsequent reopening auctions of the FRN, the spread is fixed based on the results of the initial auction. Bidders at the auction still bid on a discount margin basis and may pay more, less, or the same as the full value, depending on whether the discount margin is less, more, or the same as at the initial auction. A typical floating rate note indexed to a 3-month rate—the most common index structure for non-Treasury floating rate notes—would typically reset every 3 months. Absent a change in the credit risk of an issuer, the value of a typical floating rate security returns to par—the value at maturity—at each reset. This leads to a higher level of price stability in floating rate securities compared to fixed-rate securities of the same maturity. This price stability is highly desirable to some investors. Yield Curve Risk: The risk that interest rates at different maturity points—for example, the rates for a 1-week bill and a 13-week bill—will change relative to one another. The Treasury 2-year FRN is different from a typical floating rate security in that it will reset every week to a 13-week rate. This mismatch introduces a tradeoff between yield curve risk and interest rate risk. Unlike a typical FRN, the price of the Treasury 2-year FRN will not reliably return precisely to par at each reset date before its 2-year maturity. This is because investors factor in changes between the 1-week bill rate and the 13-week bill rate. However, the price of the Treasury 2-year FRN should return close to par weekly, which is more frequent than if it had a 13-week reset. Treasury officials told us they believe that the frequent resets provide increased price stability for the FRN.
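The reset-to-par behavior described above can be illustrated with a minimal pricing sketch. This is a simplified, hypothetical constant-index model, not Treasury's actual price formula: when a floater's discount margin equals its spread it values at par, and a higher discount margin pushes the price below par.

```python
def floater_price(par, index, spread, dm, periods_per_year, years):
    """Price a plain floater under the assumption the index rate stays constant.

    Coupons of (index + spread) are discounted at (index + dm), where
    dm is the discount margin demanded by the buyer. All names and
    numbers here are illustrative, not Treasury conventions.
    """
    n = int(periods_per_year * years)
    coupon = par * (index + spread) / periods_per_year
    r = (index + dm) / periods_per_year      # per-period discount rate
    pv = sum(coupon / (1 + r) ** t for t in range(1, n + 1))
    return pv + par / (1 + r) ** n

# dm == spread: the floater prices at par
print(round(floater_price(100, 0.02, 0.005, 0.005, 4, 2), 6))   # 100.0
# dm > spread: the buyer pays less than par
print(floater_price(100, 0.02, 0.005, 0.010, 4, 2) < 100)       # True
```

Under this simplification, repricing at each reset pulls the value back to par; the mismatch in the Treasury 2-year FRN is precisely what weakens that pull.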
They said that they expect investors to price the 2-year FRN in a way that reflects the expectation that the yield curve risk for Treasury’s 2-year FRN is likely to be small relative to its reduced interest rate risk. However, if the difference between the 1-week rate and the 13-week rate changes substantially over the 2-year term, either in fact or in expectations, then the yield curve risk that the investor faces would be more substantial. It is possible that in higher and changing interest rate environments, the tradeoff between yield curve risk and interest rate risk may not be favorable to investors. This could be reflected in the spread, as investors bid for FRNs at auction in a way that compensates them for this additional risk, which could raise Treasury’s borrowing costs. Money Market Fund: A money market fund is a type of investment fund that is required by law to invest in low-risk securities. These funds have relatively low risks compared to other mutual funds and pay dividends that generally reflect short-term interest rates. Money market funds typically invest in government securities (including Treasury bills and notes), certificates of deposit, commercial paper of companies, or other highly liquid and low-risk securities. The mismatch between the index rate maturity and the frequency of the interest rate reset could have adverse effects on the costs of FRNs to Treasury. Treasury officials told us they discussed the design of the 2-year FRN both internally and with market participants and structured the 2-year FRN in this way for two reasons. First, as both those who commented on Treasury’s proposal and Treasury have noted, the 13-week bill market is a large, liquid, and transparent market. Second, Treasury designed the 2-year FRN to meet high demand for short-term securities, and both Treasury officials and the market participants we spoke with cited the 2-year FRN’s frequent reset as a reason for greater demand from money market funds.
These funds face constraints on the average maturity of their holdings, which the weekly reset of the Treasury 2-year FRN helps address. This additional demand would likely result in lower costs and helps establish the new product for Treasury, which may outweigh the potential cost of the mismatch. Results of our survey show that overall, the FRN’s index rate and the frequency of its interest rate reset chosen by Treasury—as well as the difference between the two—made the FRN more attractive to investors (see table 1). Although Treasury officials told us they discussed the potential benefits and risks of the mismatch, Treasury had not analyzed how the mismatch could affect pricing. After we briefed Treasury officials on the issue in April 2014, Treasury began taking steps to study the mismatch to more fully understand its potential pricing risks. While its practice of regular and predictable issuance means Treasury issues all products in all environments, it is important that the risks of different securities are considered when making decisions about the mix of securities to issue. Treasury did analyze and consider how other design elements would affect pricing of the 2-year FRN and incorporated the results of that analysis into its final design. For example, Treasury analyzed how setting a minimum spread for the FRN would affect pricing. This analysis led Treasury officials to conclude that a minimum spread would unnecessarily complicate pricing, and it was excluded from the final structure of the FRN. Weighted Average Maturity (WAM): The WAM of outstanding marketable Treasury securities is calculated by averaging the remaining maturity of all outstanding marketable Treasury securities, weighted by the dollar value of the securities. Issuing 2-year FRNs allows Treasury to increase the maturity profile of the debt portfolio while meeting high demand for high-quality, short-term securities.
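The WAM calculation described in the sidebar is straightforward to sketch; the portfolio below is purely illustrative, not actual Treasury data:

```python
def weighted_average_maturity(securities):
    """WAM in months: remaining maturities weighted by dollar value.

    securities: iterable of (remaining_maturity_months, outstanding_dollars).
    """
    total = sum(amount for _, amount in securities)
    return sum(months * amount for months, amount in securities) / total

# Illustrative portfolio (amounts in $ billions): bills, 2-year notes, 10-year notes
portfolio = [(3, 500), (24, 300), (120, 200)]
print(weighted_average_maturity(portfolio))  # 32.7
```

Note how the measure treats a 2-year FRN exactly like a 2-year fixed-rate note, which is why the report argues WAM alone does not capture the FRN's weekly interest rate reset.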
Treasury could extend the average maturity of the portfolio by replacing issuance of shorter-term notes and bills with longer-term fixed-rate notes and bonds, rather than issue FRNs. In deciding what to issue, however, Treasury is confronted with making prudent decisions about investor demand by product. If Treasury issues the wrong mix of products, its overall cost of funding would increase, as investors would express their preferences in prices bid at auction. Interest Rate Risk: For a borrower, such as Treasury, interest rate risk is the risk of having to refinance its debt at less favorable interest rates and, for floating rate debt, of interest rates rising during the life of the security. Rollover Risk: The risk associated with coming back to the market to refinance the debt. In times of federal budget deficits, maturing federal debt must be rolled over into new issuance. Treasury tracks the WAM of outstanding marketable securities and publicly releases WAM data quarterly. Treasury debt managers do not have a WAM target, but over the past 30 years they have generally kept the WAM between 50 and 70 months (see figure 8). As of February 28, 2014, the WAM of Treasury’s outstanding marketable debt was 67 months, well above the historical average of 58.6 months. As of January 2014, Treasury continued to increase the WAM in a way that Treasury officials stated is consistent with their long-term objectives of financing the government at the lowest cost over time and ensuring regular and predictable management of the debt portfolio. Marketable Debt: Marketable securities can be resold by whoever owns them. In addition to marketable securities, Treasury issues nonmarketable securities that cannot be resold, such as U.S. savings bonds and special securities for state and local governments. FRNs provide Treasury with additional flexibility in its debt issuance by adding a new type of security to Treasury’s debt portfolio and by increasing overall demand for Treasury securities.
If a new security brings incremental demand for Treasury securities, Treasury can grow its debt portfolio without increasing the amount it must finance through existing securities by as much as it otherwise would. Our interviews and survey results found that although market participants will likely primarily purchase Treasury FRNs as a substitute for other Treasury securities (especially bills), market participants will also purchase Treasury FRNs as a substitute for other investment options, including FRNs from other issuers and repurchase agreements (see figure 9). Bid-to-Cover Ratio: In a Treasury auction, the bid-to-cover ratio is the dollar value of all bids received in the auction, divided by the dollar value of the securities auctioned. The bid-to-cover ratio declined over the first three FRN auctions; nevertheless, the rates quoted in the when-issued market were very close to the auction results, an indicator that the auctions came very close to market expectations. This suggests that the price discovery mechanism of the market was functioning well for FRNs and that the market embraces and understands the security, which in turn indicates strong current and continuing demand that helps Treasury borrow at lower cost over time. When-Issued Market: When-issued trades are contracts for the purchase and sale of a new security before the security has been auctioned. When-issued trades settle on the issue date of the new security, when the security is first available for delivery. Our survey results suggest demand for Treasury FRNs is likely to grow. Eighteen out of 61 survey respondents participated in the first Treasury FRN auction, but more said they plan to purchase Treasury FRNs this year. About half of all respondents (32 of 62) said their organizations definitely or probably will purchase Treasury FRNs in 2014. Survey respondents anticipate that money market mutual funds, corporate treasuries, and foreign central banks are likely to have the most demand for 2-year FRNs.
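The bid-to-cover ratio defined in the sidebar is a simple quotient; a minimal sketch with made-up auction figures:

```python
def bid_to_cover(total_bids, amount_auctioned):
    """Dollar value of all bids divided by dollar value auctioned.

    A ratio above 1 means the auction was oversubscribed; higher
    ratios are commonly read as a sign of stronger auction demand.
    """
    return total_bids / amount_auctioned

# Hypothetical auction: $45 billion of bids for $15 billion of securities offered
print(bid_to_cover(45e9, 15e9))  # 3.0
```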
Survey respondents noted a number of reasons why Treasury FRNs are an attractive investment option, including the interest rate risk protection they provide the purchaser, their price stability, their use as a cash management tool, their consistency with investment guidelines and regulatory requirements, and the liquidity of the securities. The successful launch of a new type of security relies both on the readiness of investors and on Treasury’s own operational readiness. Overall, market participants felt prepared for the introduction of a new security. According to almost all of the market participants we surveyed, Treasury provided sufficient information regarding its plans to issue FRNs (53 out of 62 respondents noted that Treasury provided sufficient information and the remaining 9 noted that they had “no opinion or no basis to judge.”) In addition, of the 48 survey respondents that said that they would need to make systems changes to purchase FRNs, 36 said that Treasury or the Federal Reserve had provided adequate assistance or information to make the necessary changes. Some respondents noted that as of March 2014, they had not yet completed systems changes that will be needed to purchase FRNs. Demand for FRNs may increase as additional investors complete systems changes. Although issuance of FRNs brings incremental demand for Treasury securities and demand in the initial auctions was high and is likely to grow, one design feature of the 2-year FRN may constrain Treasury’s flexibility in the issuance of 13-week bills. Treasury officials and market participants both told us that because the FRN is indexed to the 13-week Treasury bill rate, Treasury will have to be more judicious in adjusting the size or timing of Treasury auctions of 13-week bills. As some comments on the proposed rule noted, there is some risk in indexing a floating rate note to a product from the same issuer. 
However, given that the 13-week bill is one of Treasury’s largest and most liquid markets, its selection as the index rate minimizes this risk. As our prior work has found, communication with investors is essential as Treasury faces the need to finance historically large deficits expected in the medium and long term. Overall, survey respondents said that Treasury provides sufficient information to investors on its debt management plans. Forty-three out of the 62 survey respondents said communication from Treasury occurred to a great or very great extent; no respondents said communication occurred to little or no extent (one had no basis to judge). In addition, most survey respondents said that they were able to provide sufficient input to Treasury, but respondents from some sectors reported lower levels of opportunity to provide input. The 26 respondents who reported opportunities existed to some or little to no extent included 10 state or local government retirement fund managers, 4 money market mutual fund managers, and 3 life insurance providers (see figure 10 below). To manage risks associated with borrowing, Treasury monitors market activity and, if necessary, responds with appropriate changes in debt issuance based on analysis and consultation with market participants. Treasury offers a number of ways for market participants to give input, such as providing comments on regulations solicited through the Federal Register and through the email box on the Treasury website. The Treasury Borrowing Advisory Committee (TBAC) is composed of senior representatives from investment funds and banks and holds quarterly meetings to provide insights to Treasury on the overall strength of the U.S. economy and recommendations on debt management issues. In addition, the Federal Reserve Bank of New York (FRBNY) administers the network of primary dealers that also provide market information and analysis to Treasury.
However, Treasury’s Office of Debt Management does not meet regularly with all sectors, such as state and local government retirement fund managers. Survey respondent suggestions for improving communication with Treasury included administering surveys, holding regular meetings or calls with investors outside of the TBAC, polling investors on new product ideas, and providing a mechanism for submitting annual recommendations to Treasury from large investors. Without targeted outreach to all major sectors of investors in Treasury securities, Treasury could miss important insights to improve its debt management plans. Responses from our survey of market participants indicate an interest in FRNs of both shorter- and medium-term maturities, but respondents expressed more limited interest in 7- and 10-year FRNs than in shorter-term FRNs (see figure 11). Survey respondents expressed the most interest in the introduction of a 1-year FRN. Interest in the 1-year FRN varied by sector, with mutual funds (including money market funds) expressing substantial interest in this maturity, while retail and commercial banks had little interest. Securities broker-dealers and state and local retirement fund managers expressed the most interest in FRNs with maturities other than 2 years, but other sectors—such as banks and property-casualty insurance providers—also showed some interest in these other securities. Treasury officials said they might consider issuing FRNs with longer maturities once both they and the market gain some experience with the 2-year Treasury FRN. Over the long run, Treasury FRNs with maturities other than 2 years are likely to provide a cost savings to Treasury relative to issuance of fixed-rate securities with the same maturity. Survey respondents expressed their views on certain design features of FRNs with maturities other than 2 years.
For instance, if Treasury were to issue FRNs with different maturities, almost all survey respondents (57 out of 62) thought those FRNs should also be indexed to the 13-week Treasury bill. More respondents said they would prefer daily interest rate resets to any other reset period for FRNs with maturities other than 2 years. Of the respondents who wanted new FRNs to be indexed to the 13-week Treasury bill rate, 13 would also prefer daily resets for all hypothetical maturities, including 4 state and local government retirement fund managers and 5 securities broker-dealers. Although this suggests that these respondents would prefer a “mismatched floater,” as discussed earlier in this report, the mismatch feature may raise risks that result in higher costs to Treasury in certain interest rate environments. Additionally, respondents generally preferred quarterly interest payments for FRNs with other maturities, monthly auctions for 1-year and 3-year FRNs, and quarterly auctions for FRNs with other maturities. Survey respondents also expressed an interest in possible new Treasury securities. Suggestions included ultra-long bonds, callable securities, FRNs indexed to inflation, and zero-coupon notes or bonds (see figure 12). In addition, respondents suggested that certain debt management practices, specifically buybacks and a reverse inquiry window, would enhance demand for Treasury securities. However, respondents indicated that in general, changes to Treasury’s current debt management practices—such as frequency of initial and reopening auctions, issuance sizes, and non-competitive award limits—would not enhance demand (see figure 13). To achieve the lowest cost of financing the government over time, it is important that Treasury spread debt across maturities and take into account investor demand for new and existing products. The medium- and long-term fiscal outlook makes evaluating the demand for Treasury securities, including new securities, increasingly important.
Treasury is currently unable to conduct a broad survey of market participants. For this reason, the insights on potential demand for new products from our survey can provide Treasury with a starting point so that it does not miss opportunities. The U.S. Treasury market is the deepest and most liquid government debt market in the world. Nevertheless, Treasury faces challenges in managing the debt at a time when debt levels are high and projected to increase and when interest rates are also expected to rise. Given the market uncertainties and the federal government’s fiscal challenges, increasing Treasury’s flexibility to respond to changing market conditions in ways that minimize costs is prudent. FRNs are a tool that can help meet these goals. Over the long term, FRNs can reduce Treasury interest costs relative to fixed-rate securities that lock in funding for the same term. FRNs can also help enhance Treasury’s flexibility by marginally increasing demand for Treasury securities. The design and implementation of FRNs have implications for Treasury’s ability to minimize borrowing costs over time and for the balance of risks in Treasury’s debt portfolio. Our cost analysis finds that in comparison to issuance of 2-year fixed-rate notes, Treasury is taking on additional interest rate risk but is likely to achieve interest cost savings while not increasing market access risk. The mismatch feature of Treasury’s first FRN presents a tradeoff between different risks for both investors and Treasury that could raise Treasury’s borrowing costs when interest rates are high and the yield curve is volatile. However, the mismatch also helps Treasury tap into the current high demand for high-quality short-term securities.
Without analyzing how the mismatch between the frequency of the reset period and the maturity of the index could affect pricing, however, Treasury is unable to judge either (1) the risks (and therefore the ultimate cost) of FRNs in a different interest rate environment, or (2) whether the additional demand from money market funds due to the mismatch feature outweighs the potential costs it creates. A better understanding of these tradeoffs will be important when Treasury considers issuing FRNs with maturities other than 2 years. Furthermore, with the addition of FRNs to Treasury’s debt portfolio, the weighted average maturity of securities in the portfolio (i.e., the WAM) is now an incomplete measure of risk in the portfolio because it does not capture interest rate risk. Tracking and reporting an additional measure of the length of the debt portfolio that captures interest rate risk could help Treasury debt managers understand and weigh risks in the portfolio, and publicly reporting that measure would facilitate transparency and market understanding of Treasury debt management decisions. Introducing FRNs at this time—when demand is high—can help Treasury and market participants become more familiar with the new security so that Treasury can expand to FRNs with different maturities if Treasury determines that doing so would enhance its flexibility and advance its debt management goals. It will also be important for Treasury to gauge market demand for FRNs and other products by soliciting input from all sectors of Treasury investors, specifically state and local government retirement fund managers. Such input can help inform Treasury decisions about changes to Treasury issuance or debt management practices that could enhance overall demand for Treasury securities. When deciding what to issue, Treasury must make prudent decisions about investor demand by product.
If Treasury issues the wrong mix of products, its overall cost of funding will increase, as investors express their preferences in prices bid at auction. To help minimize Treasury borrowing costs over time by better understanding and managing the risks posed by Treasury floating rate notes and by enhancing demand for Treasury securities, we recommend that the Secretary of the Treasury take the following four actions:
1. Analyze the price effects of the mismatch between the term of the index rate and the reset period;
2. Track and report an additional measure of the length of the portfolio that captures the interest rate reset frequency of securities in the portfolio;
3. Expand outreach to state and local government retirement fund managers; and
4. Examine opportunities for additional new security types, such as FRNs with maturities other than 2 years or ultra-long bonds.
We provided a draft of this product and the accompanying e-supplement (GAO-14-562SP) to Treasury for comment. On May 23, 2014, the Assistant Secretary for Financial Markets told us that Treasury thought it was an excellent report, that it agreed with the recommendations, and that it had already taken steps to begin implementing them. For example, he told us that Treasury’s new Office of State and Local Finance will bolster outreach to investors in the state and local sectors. Treasury also provided technical comments that were incorporated as appropriate. Further, Treasury told us it had no comments on the e-supplement. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to the Secretary of the Treasury, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
Should you or your staff have any questions about this report, please contact me at (202) 512-6806 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To estimate the potential cost of floating rate notes (FRN) to Treasury, we simulated the costs of 2-year FRNs based on Department of the Treasury (Treasury) auction data from January 1980 to March 2014 using two models, each with different assumptions about the spread over the index rate that Treasury would pay. We compared those costs to Treasury’s actual costs of funding with 13-week bills and 2-year notes. We also analyzed how those costs varied over different interest rate environments. To estimate the range of potential costs from FRNs, we used two models of the costs of FRNs to Treasury: 1. A “maturity-based” model where the spread estimate is influenced by the 2-year term of the FRN. In the maturity-based model, the FRN spread—the difference between the index rate and the interest rate on the FRN—split the difference between the 13-week bill and 2-year note yields on the date of the FRN auction: spread = (2-year note yield − 13-week bill yield) / 2. This model was suggested to us by a market participant as one way to estimate the likely spread for the Treasury FRN, and we found it to be reasonable. 2. A “reset-based” model where the spread estimate is derived from the weekly reset term, which determines the nature of most of the interest rate risk faced by investors in FRNs. This frequently results in a negative FRN spread, meaning that, under this model, the FRN generally has a yield lower than a 13-week bill. We allowed for negative spreads under this model because Treasury regulations allow the FRN to auction with a negative spread and, in very low interest rate environments, short-term bills on the secondary market have sometimes traded with a negative yield.
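A minimal sketch of the maturity-based model's spread calculation, using illustrative yields only: under the split-the-difference assumption, the FRN yields the midpoint of the 13-week bill and 2-year note, so its spread over the bill index is half the gap between the two yields.

```python
def maturity_based_spread(bill_13wk_yield, note_2yr_yield):
    """FRN spread that 'splits the difference' between the two yields.

    index + spread lands at the midpoint of the bill and note yields,
    so the spread over the 13-week bill index is half the gap.
    Yields here are illustrative, in percentage points.
    """
    return (note_2yr_yield - bill_13wk_yield) / 2.0

# Hypothetical auction-day yields: bill at 0.25 percent, 2-year note at 0.75 percent
print(maturity_based_spread(0.25, 0.75))  # 0.25
```

An inverted curve (bill yield above the note yield) would produce a negative spread, which the reset-based model also permits.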
While we considered other models for determining the cost of FRNs, these two models are designed to approximate the range of potential spreads at which Treasury’s 2-year FRN would have been expected to auction in historical interest rate environments. We also considered models based on: FRNs from government-sponsored enterprises (GSEs). Several market participants we spoke with indicated that FRNs issued by Fannie Mae and Freddie Mac would be the closest comparison for Treasury FRNs. However, we determined that GSE FRNs were not sufficiently comparable for our purposes due to the issuance practices and FRN structures used by Fannie Mae and Freddie Mac. Swap prices. Several market participants also suggested interest rate and asset swaps could be used to estimate spreads on Treasury FRNs. We reviewed results of simulations of FRN spreads published by one market participant and found the estimates from this model usually to be within our own estimates for the FRN spread. Theoretically derived formula. We explored modifying the formulas used in Don Smith’s “Negative Duration: The Odd Case of GMAC’s Floating-Rate Note” to derive a theoretically correct spread price. This approach predicted FRN spreads very close to zero, generally lying within the costs predicted by the maturity- and reset-based models. This pricing model did not incorporate the pricing consequences of the mismatch between the reset rate and the maturity of the index, and so does not fully capture the pricing risks faced by the FRN. Because interest rate environments vary substantially over time, we compared the relative costs of the FRNs in various interest rate environments. The different environments, as used in our analysis and discussed in our report, are described below (see table 2).
To determine the trend of 13-week yields over a 2-year period, we estimated a linear time trend on the first difference of weekly yields, Δy_t = β_0 + β_1·t + ε_t (where t is an index of the number of weeks since the start of the 2-year window). This is essentially equivalent to fitting a second-degree polynomial to the yields, allowing us to capture changes in direction of the interest rate trend (i.e., peaks and troughs) as well as the slope of a linear trend. The estimated curves were used in classifying the interest rate environments. The cut-offs for assigning an interest rate trend to a category of rising or falling—versus steady—were based on our professional judgment. Other approaches—such as using traditional statistical significance tests—conflate volatility with assessment of the presence of a trend and therefore are not appropriate for this determination. We were able to use a data-derived approach to assign 2-year periods to our volatility categories. We used the RMSE statistic as an aggregate measure of the weekly yields’ total deviation from the trend. We then used a k-means cluster analysis to divide the sample into four volatility groups: low, moderate, high, and extreme. Using the maturity- and reset-based models, we estimated what the spread would be for FRNs auctioned on the same day as actual 2-year fixed-rate notes from January 1980 to March 2012, resulting in 387 simulated FRNs. We then applied these estimated spreads to the actual weekly 13-week bill auctions from January 1980 to March 2014, and calculated what the total interest cost would have been for each simulated FRN during this period. As with the actual 2-year FRN, we applied a floor of zero to the daily interest accrual of our simulated FRNs. To determine the relative interest cost of the FRN, we compared the estimated costs of the simulated FRNs to the costs of the actual 2-year fixed-rate notes and a rolling series of 13-week bills for each 2-year period.
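The trend-fitting step can be sketched as follows. This is a simplified reading of the method: we fit the linear trend to the weekly first differences and take the RMSE of the residuals as the deviation-from-trend measure; those RMSE values would then feed the k-means clustering into the four volatility groups.

```python
def trend_and_volatility(weekly_yields):
    """Fit dy_t = b0 + b1*t + e_t to first differences of weekly yields.

    Fitting a line to the differences is essentially equivalent to
    fitting a second-degree polynomial to the yield levels, so it can
    capture a peak or trough as well as a linear slope. The RMSE of
    the residuals serves as the volatility measure.
    """
    dy = [b - a for a, b in zip(weekly_yields, weekly_yields[1:])]
    n = len(dy)
    t = list(range(1, n + 1))                # weeks since window start
    t_mean, d_mean = sum(t) / n, sum(dy) / n
    b1 = sum((ti - t_mean) * (di - d_mean) for ti, di in zip(t, dy)) / \
         sum((ti - t_mean) ** 2 for ti in t)  # least-squares slope
    b0 = d_mean - b1 * t_mean                 # least-squares intercept
    resid = [di - (b0 + b1 * ti) for ti, di in zip(t, dy)]
    rmse = (sum(r * r for r in resid) / n) ** 0.5
    return b0, b1, rmse

# A purely quadratic yield path has exactly linear first differences, so RMSE ~ 0
yields = [0.5 + 0.1 * t + 0.01 * t * t for t in range(104)]  # 104 weeks = 2 years
b0, b1, rmse = trend_and_volatility(yields)
print(round(rmse, 10))  # 0.0
```

The judgment-based cut-offs for labeling a window rising, falling, or steady would then be applied to the fitted slope.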
We estimated the average interest costs relative to 2-year notes and 13-week bills as well as the percent of cases where FRNs generate savings or additional costs compared to bills or notes. In addition to the results presented in the body of our report, we estimated the cost of 2-year FRNs by volatility of the rate environment. As shown in figures 14 and 15 below, we found that, at all levels of volatility, there was little variation between our two models. In periods of low, moderate, and high volatility, 2-year FRNs tended to produce savings compared to 2-year fixed-rate notes, but could produce either costs or savings compared to 13-week bills depending on which model is used. In periods of extreme volatility, FRNs produced savings under both models. To address both of our objectives, we surveyed and interviewed market participants regarding (1) the market for FRNs, (2) the structure of FRNs, (3) other actions Treasury may consider to expand demand for Treasury securities, and (4) communication between Treasury and investors. To gather information from a broader range of investors, we administered an online survey to 82 of the largest domestic institutional holders of Treasury securities in the following sectors: money market mutual fund managers, mutual and exchange-traded fund managers, state and local government retirement fund managers, retail and commercial banks, life insurance providers, property-casualty insurance providers, and securities broker-dealers (see table 3). Results of the survey are not generalizable. For aggregate survey results reproduced as an e-supplement, see GAO-14-562SP. To identify sectors for our sample, we reviewed data from the Federal Reserve’s Financial Accounts of the United States (table L.209, third quarter 2013) to identify which sectors have at least $60 billion in Treasury holdings.
We excluded some sectors due to challenges in contacting certain entities, such as foreign monetary authorities, other foreign investors, and the household sector. To identify the organizations within each sector that would receive our web-based survey, we used rankings of the largest organizations in each sector based on total assets or an equivalent financial indicator, such as assets under management or direct premiums written. From these ranked lists, we determined Treasury holdings for each organization and selected as many organizations as needed to represent at least 50 percent of the total amount of Treasury holdings for that sector (based on table L.209 of the Federal Reserve’s Financial Accounts of the United States) or, in the case of mutual funds, exchange-traded funds, and money market funds, based on information from the Investment Company Institute on total assets under management in Treasury- and government-focused funds. In addition to the contact named above, Tara Carter (Assistant Director), Susan E. Murphy (Analyst-in-Charge), Abigail Brown, Emily Gruenwald, Daniel Ramsey, and Albert Sim made key contributions to this report. Amy Bowser, Dianne Guensberg, Stuart Kaufman, Risto Laboski, Donna Miller, Dawn Simpson, and Stewart W. Small provided subject matter assistance.
To continue meeting its goal of financing the federal government's borrowing needs at the lowest cost over time, Treasury began issuing a new type of security—a 2-year floating rate note (FRN)—in January 2014. The FRN pays interest at a rate that resets periodically based on changes in the rate of the 13-week Treasury bill (to which the FRN is indexed). GAO was asked to review Treasury debt management, including this product and other debt management issues. This report (1) evaluates Treasury's rationale for introducing FRNs and (2) identifies the demand for Treasury securities from a broad range of investors to assess whether changes would help Treasury meet its goals. To address these objectives, GAO used Treasury auction data from 1980 to 2014 to simulate the costs of Treasury FRNs, reviewed Treasury documents, surveyed a non-generalizable sample of 82 large domestic institutional investors across sectors, and interviewed market participants and academic experts. (For the survey and results, see GAO-14-562SP.) Issuing floating rate notes (FRN) is likely to help the Department of the Treasury (Treasury) meet its goals to borrow at the lowest cost over time, extend the average maturity of the debt portfolio, and increase demand for Treasury securities, but it also presents risks related to changes in interest rates. GAO simulated the costs of 2-year Treasury FRNs using historical Treasury auction data and found that interest costs of the FRNs were generally less than costs of fixed-rate 2-year notes, but could be either more or less than costs of 13-week bills, depending on assumptions about how investors price the FRNs. GAO also found that in rising interest rate environments, the FRNs may be more costly than these alternatives. Multiple components contribute to achieving the lowest cost of financing over time; issuing FRNs is part of Treasury's approach to this goal.
GAO analysis identified a number of design elements that may affect how FRNs contribute to that goal. Treasury officials believe it is prudent for Treasury to extend the average maturity of its debt portfolio because the debt level is already high and is expected to grow. Relative to issuing shorter-term debt, 2-year FRNs will help Treasury extend the average maturity of the debt portfolio and thereby reduce the risk inherent in going to market. Because the interest rate on an FRN can change during the life of the security, FRNs expose Treasury to the risk of rising interest rates, whereas fixed-rate securities of the same maturity do not. These shifts in risk are likely to be small because FRNs are currently expected to constitute a small proportion of Treasury debt. Although managing interest rate risk is an important aspect of Treasury's goal to borrow at the lowest cost over time, Treasury does not track and report a measure of the average maturity of the portfolio that captures the additional interest rate risk of FRNs. One element of the design of the 2-year FRN—the difference between the term of its index rate (13 weeks) and the length of its effective reset period (one week)—is not typical for floating rate notes; it creates tradeoffs in interest rate risk but may also result in additional demand for the product. The risks could affect the pricing of FRNs and raise Treasury's borrowing costs in environments of high and volatile interest rates. Treasury officials told us they examined design elements, including this difference, before issuing the 2-year FRN. However, Treasury had not analyzed how the difference may affect FRN pricing. FRNs give Treasury debt managers additional flexibility by increasing demand for Treasury securities and by adding a new security that meets the high demand for short-term securities.
Results from GAO's survey of a broad range of investors and interviews with market participants indicate that market participants likely will purchase Treasury FRNs primarily as a substitute for other Treasury securities, but they will also purchase the FRNs as a substitute for non-Treasury securities, bringing new and potentially growing demand to Treasury. To provide the lowest cost of financing the government over time, Treasury must consider investor demand for new and existing products. Survey respondents indicated an interest in FRNs of additional maturities and in other new Treasury products. Treasury currently offers many ways for market participants to provide input, but GAO's survey identified opportunities for Treasury to enhance input from some sectors—including state and local government retirement fund managers. GAO recommends that Treasury (1) track and report a measure of interest rate risk in its debt portfolio, (2) analyze the price effects of the difference between the term of the index rate and the reset period, (3) examine opportunities for additional new types of securities, such as FRNs of other maturities, and (4) expand outreach to certain market participants. Treasury agreed with the recommendations and said that it had already taken steps to begin implementing them.
As shown in figure 1, biosurveillance is a concept that emerged in response to increased concern about biological threats from emerging infectious diseases and bioterrorism. Biosurveillance is carried out by and depends on a wide range of dispersed entities. Federal biosurveillance responsibilities, likewise, are spread across an array of agencies and provided for in multiple laws and presidential directives. In an era of rapid transit and global trade, the public health and agricultural industries, as well as natural ecosystems including native plants and wildlife, face increased threats of naturally occurring outbreaks of infectious disease and accidental exposure to biological threats. According to the World Health Organization (WHO), infectious diseases are not only spreading faster, they appear to be emerging more quickly than ever before. Since the 1970s, newly emerging diseases have been identified at the unprecedented rate of one or more per year. There are now nearly 40 diseases that were unknown a generation ago. In addition, during the last 5 years, WHO has verified more than 1,100 epidemic events worldwide. Figure 2 shows select disease occurrences worldwide in recent decades. Additional information about the occurrences is available electronically in pop-up boxes on the map or in print in appendix II. Examples of emerging infectious disease include Severe Acute Respiratory Syndrome (SARS), H5N1 influenza (avian flu), and the H1N1 influenza that resulted in a global pandemic in 2009. The potential impact of these events is not limited to public health. For example, the avian influenza outbreaks in Asia and Eastern Europe were reminders that the public health sector is intrinsically linked to the agriculture, trade, tourism, economic, and political sectors.
Due to the rapid and constant movement of people and commodities— such as animals, plants, and food—biological agents can be carried by passengers or containers on airplanes and boats and slip across national borders unnoticed, while infectious diseases are transferred from person to person through close contact. Ecological changes, such as changes in land use, and the globalization of the food supply are also associated with the emergence and spread of infectious disease. Animals also face the threat of infectious disease, and in some cases diseases— known as zoonotic diseases—can be transferred between animals and people. Zoonotic diseases represent at least 65 percent of newly emerging and reemerging infectious diseases in recent decades. Many important factors contribute to the proliferation of zoonotic diseases, including the growth of human and domestic animal populations and the increasingly close physical proximity within which humans and their domestic animals live with wild animals. Some disease agents can also be weaponized for use as weapons of mass destruction to disrupt economies and endanger human, animal, and plant health. Since the attacks of September 11, 2001, there has been concern that another terrorist attack on U.S. soil could involve biological or other weapons of mass destruction. Groups like the Center for Counterproliferation Research at the National Defense University and the Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism (established by the 9/11 Commission Act) have warned that the biological weapons threat is real, with evidence that terror groups like Al Qaeda have had active biological weapons programs in the past and approximately 12 countries are suspected of seeking biological weapons. Emerging disease and bioterrorism concerns surround the nation’s agriculture and food supply as well.
Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. Increasing globalization and international trade activities make it likely that many other exotic plant pathogens will arrive in the United States in the coming years. In addition, the United States faces growing food safety challenges from fresh and processed foods that become contaminated well before they reach the consumer, leading to outbreaks linked to foods that have not previously been associated with foodborne illnesses. For example, according to USDA, leafy greens are the category of produce most likely to be associated with an outbreak. Recent outbreaks of foodborne illness have also focused public attention on the increasing potential for widespread dissemination of contaminated products. For example, beginning in September 2006, the United States experienced an outbreak of E. coli O157:H7 associated with the consumption of tainted spinach grown in California; this outbreak resulted in 205 victims in 26 states suffering severe disease. Three people died. Widespread outbreaks of other foodborne illnesses, such as Salmonella, have also occurred from contaminated peanut butter and tomatoes. We reported in March 2005 that although the United States has never experienced a terrorist attack against agriculture, it is vulnerable for a variety of reasons, including the relative ease with which causative agents of diseases that could affect livestock and crops could be obtained and disseminated. Many of these diseases are endemic in other parts of the world and can be extracted from common materials, such as soil. Farms in general are easily accessible because they are located in rural areas and have minimal security, especially crop farms.
Moreover, the highly concentrated breeding and rearing practices of our livestock industry may make it a vulnerable target for terrorists because diseases could spread rapidly and be difficult to contain. For example, between 80 and 90 percent of grain-fed beef cattle production is concentrated in less than 5 percent of the nation’s feedlots. Therefore, the deliberate introduction of a highly contagious animal disease in a single feedlot could have serious economic consequences. In addition, a number of disease-causing agents can infect and be spread by wildlife. According to officials at DOI, it may be difficult to control a novel pathogen if it is introduced into native wildlife. These officials noted that the gregarious habits of many wildlife species can enhance their susceptibility to catastrophic losses from select diseases, and because of their mobility, there is potential for infectious disease to spread quickly to new locations and populations. Finally, pathogens can be carried through or introduced into the environment, causing damage to health and economies. Drinking water utilities across the country have long been recognized as potentially vulnerable to terrorist attacks of various types, including physical disruption, bioterrorism, chemical contamination, and cyber attack. Damage or destruction by terrorists could disrupt not only the availability of safe drinking water, but also the delivery of vital services that depend on these water supplies, such as fire suppression. People and animals also face the threat of becoming ill from inhaling certain biological agents, some of which occur naturally in our environment and some that can be weaponized and intentionally released to cause catastrophic disruption. Concerns about weaponized airborne pathogens were amplified by the anthrax attacks conducted through the mail a month after the September 11, 2001, attacks on the World Trade Center and the Pentagon.
As figure 3 shows, federal laws and directives call for HHS, USDA, DHS, and other federal agencies to take action to strengthen biosurveillance. The most recent of these—Homeland Security Presidential Directive-21— synthesizes and reiterates actions in other laws and directives, explicitly calling for a national biosurveillance capability. In calling for this national capability, HSPD-21 discusses certain aspects related to the personnel, training, equipment, and systems needed. For example, among the elements it describes as necessary for a robust and integrated national capability are enhanced clinician awareness, stronger laboratory diagnostic capabilities, integrated biosurveillance data, and an epidemiologic surveillance system with sufficient flexibility to tailor analyses to new syndromes and emerging diseases. In the case of biological threats, timely detection of biological agents is a precursor to effective response; therefore, a national biosurveillance capability like the one described in HSPD-21 is an essential tool in the nation’s preparedness to confront catastrophic threats. Capabilities to carry out any broad emergency management mission—like biosurveillance—are made up of (1) planning, (2) organization and leadership, (3) personnel, (4) equipment and systems, (5) training, and (6) measurement/monitoring. A national biosurveillance capability like the one described in HSPD-21 would largely rely on an interagency effort because the mission activities and accompanying resources that support the capability—personnel, training, equipment, and systems—are dispersed across a number of federal agencies. For example, HHS’s Centers for Disease Control and Prevention (CDC) has primary responsibility for human health and USDA for plant and animal health. Responsibility for various food sources is split between USDA and HHS’s Food and Drug Administration (FDA). 
DHS, as the agency with primary responsibility for securing the homeland, is responsible for coordinating efforts to prevent, protect against, respond to, and recover from biological attacks. The resources—personnel, training, equipment, and systems—that support a national biosurveillance capability reside within the separate agencies that acquire and maintain them in pursuit of their missions, which overlap with but are not wholly focused on biosurveillance. A national biosurveillance capability also depends upon participation from state, local, and tribal governments. Few of the resources required to support the capability are wholly owned by the federal government. The responsibility and capacity for collecting most information related to plant, animal, and human health, food, and environmental monitoring reside within state, local, and tribal governments or private sector entities—such as hospitals and other private health care providers. In the United States, state and local public health agencies have the authority and responsibility for carrying out most public health actions, and the federal government generally cannot compel state, local, and tribal governments or private sector entities to provide information or resources to support federal biosurveillance efforts. Instead, individual federal agencies, in pursuit of their missions, attempt to build relationships and offer incentives—like grants—to encourage voluntary cooperation with specific federal efforts. In addition, although traditional disease surveillance systems designed to collect information on the health of humans, animals, and plants are the backbone of biosurveillance—because they, among other things, provide mechanisms for ongoing monitoring and specific information about outbreaks to inform response—they also rely on time-intensive testing and confirmation practices.
The inherent time lag, combined with limitations arising from reliance on data not owned by the federal government, presents challenges that limit the promise of traditional disease surveillance alone to provide the timely detection and situational awareness that is the goal of a national biosurveillance capability. For additional information on the contributions and associated challenges of traditional federal surveillance activities to monitor for pathogens in plants, animals, humans, and food see appendix III. For more information on specific federal programs that can be used to support biosurveillance see appendix IV. Federal agencies have taken or are planning some actions to improve the personnel, training, and systems and equipment that support a national biosurveillance capability, including, but not limited to, planning to assess workforce needs, sponsoring recruitment and training efforts, seeking to facilitate information sharing, and applying technologies to enhance surveillance. Some of the professions that underpin the surveillance mechanism for a national biosurveillance capability currently face and are expected to continue to confront workforce challenges—particularly workforce shortages; however, some federal agencies with key biosurveillance responsibilities have efforts underway to help confront those challenges. We reported in September 2009 that as the threats to national security— which include the threat of bioterrorism and pandemic outbreak—have evolved over the past decades, so have the skills needed to prepare for and respond to those threats. We also found that like other federal efforts to address modern national security challenges that require collaboration among multiple agencies, an effective biosurveillance capability relies on qualified, well-trained professionals with the right mix of skills and experience. 
Figure 4 provides examples of the workforce involved with detection and situational awareness activities that support biosurveillance. The public health and health care workforce is expected to continue to confront shortfalls in the coming years, which could threaten the federal government’s ability to develop a national biosurveillance capability. According to CDC officials, serious public health and health care workforce shortages currently exist due to factors such as the exodus of retiring workers, an insufficient supply of trained workers, inadequate funding, and uncompetitive salaries and benefits. In discussing concerns about looming workforce shortages, CDC officials pointed to a December 2008 Association of Schools of Public Health estimate that by 2020 the nation will face a shortfall of over 250,000 public health workers. CDC officials said that states and communities nationwide report needing more public health nurses, informaticians, epidemiologists, laboratory workers, statisticians, and environmental health experts. Moreover, the Institute of Medicine stated in 2009 that the unevenness of organizational and technical capacity at state and local levels across the public health system weakens the nation’s preparedness to detect and, especially, to respond to and manage the consequences of a major health emergency. We also reported in February 2009 that the animal health field faces workforce shortages that could affect the ability of professionals to be prepared and enabled to detect and warn of biological events. For example, USDA officials have expressed concern about the future size and skills of the veterinarian workforce, particularly veterinary pathologists who are integral to the work conducted in USDA’s diagnostic laboratories, including work on diseases that threaten animal and human health. 
Further, USDA officials have also expressed concern about the availability of taxonomists, whose expertise is critical to characterizing threats and providing warning of a potentially catastrophic biological event involving plants. Although the workforce shortages threaten to diminish capacity to detect signals of potentially catastrophic biological events as they emerge, some federal agencies are planning or have taken actions to help mitigate them. In particular, in its National Biosurveillance Strategy for Human Health, CDC named the biosurveillance workforce as one of its priority areas. To enhance workforce capability, CDC’s strategy calls for assessing the current biosurveillance workforce capability, identifying and addressing gaps, ensuring the workforce is competent, developing recruitment and retention strategies for professionals across diverse fields, and establishing a national-level governance body for the biosurveillance workforce across agency boundaries and disciplines. CDC has also taken actions to help increase the number of public health workers, including extending service and learning fellowships in epidemiology, informatics, laboratory, and management. Moreover, with respect to the animal and plant workforce, the federal government is implementing measures to help ensure an adequate workforce. For example, we reported in February 2009 that USDA has set a goal of recruiting at all veterinary colleges and universities. In addition, USDA is using incentives, such as bonuses, to attract and maintain its veterinarian workforce. In addition, USDA has identified tactics to build the capacity and retain the experience of taxonomists. USDA is also using cooperative agreements and funding to enlist taxonomic services from nonfederal institutions to identify and confirm identifications of exotic plant pests.
According to USDA officials, they have increased the number and variety of these agreements in recent years, increased availability of professionals who can help identify threats to plants by nearly 50 percent in the past 20 years, and created a career ladder to retain experienced and talented workers. Further, in response to recommendations we made in February 2009 to address veterinarian shortages, in November 2009, the Office of Personnel Management established the governmentwide Veterinary Medical Officer Talent Management Advisory Council. The purpose of the council is to lead the design and implementation of a governmentwide workforce strategy for federal veterinary medical workers. The council held its first meeting in March 2010 and is in the process of gathering workforce data from all federal agencies with veterinarian medical officers. Federal agencies have supported programs to help with workforce development at the state and local levels that respond partially to the ongoing challenge of maintaining a trained cadre of professionals who operate in fields where professional issues, systems, and technologies continue to evolve over time. Training and accreditation programs are essential to developing a knowledgeable workforce with the skills needed to identify potential threats to human, animal, and plant health. An effective medical response to a biological event would depend in part on the ability of individual clinicians and other professionals to identify, accurately diagnose, and effectively treat diseases, including many that may be uncommon. The opportunity to evaluate signs and symptoms of diseases of concern relies on trained professionals possessing the knowledge needed to identify and order the right lab test to confirm a diagnosis. In addition, detection and warning of a disease threat also relies on these professionals knowing who to call to report the finding. 
One challenge federal officials relayed to us is that because the concept of biosurveillance is relatively new and has been evolving over the last decade, there are few, if any, specialties or concentrations in biosurveillance within epidemiology or other programs. They further noted that while some university programs are beginning to address the issues in their curricula, the general lack of biosurveillance or cross- domain specialties, curricula, and classes in these programs limits the feedback loop between academics and practitioners that helps support innovative solutions. Another training challenge is keeping up with changes in issues, technologies, and systems. CDC officials told us that public health workers are always challenged with keeping abreast of developments. For example, when a new laboratory tool or method is developed, public health workers must be trained to use it. Epidemiologists also must be trained and educated about new or emerging health issues, infections, software, information technology systems, and tools. Clinicians require continuing education to stay current. These CDC officials noted that maintaining expertise in a rapidly changing field is difficult, yet having professionals with the right expertise is critical in protecting the public’s health, as well as for emergency preparedness and response. For USDA, the increased risks to animal and public health from animal diseases have presented challenges because the expertise needed to identify and respond to the risks of zoonotic diseases has not been defined. Further, a DOI official expressed concern about maintaining skills and expertise of wildlife health professionals. According to this official, identifying, characterizing, and mitigating threats involving free ranging fish and wildlife populations call for specific training and expertise in wildlife epidemiology and wildlife disease ecology.
These officials expressed concern about whether federal inspectors receive sufficient training or have sufficient resources to address disease in free-ranging wildlife populations. Federal agencies have taken actions to help respond to the challenges arising from evolving fields of study and increased risks of outbreaks impacting more than one domain. For example, in the National Biosurveillance Strategy for Human Health, CDC has called for the development of a national training and education framework to articulate professional roles and competencies necessary for biosurveillance. The strategy also noted that in addition to the traditional public health professions trained in surveillance, there is a need to recruit professionals from other diverse fields including informatics and computational sciences to enhance data sharing, as well as the plant and veterinary sciences to help understand how diseases flow among humans, animals, and plants. Along these lines, CDC has developed a public health informatics fellowship program, which is designed to help address the need for practitioners with a mastery of sophisticated electronic systems to facilitate communication and data exchange among public health personnel at the local, state, and federal levels. Officials at HHS also noted that the National Institutes of Health’s National Library of Medicine has funded 18 University-Based Biomedical Informatics Research Training programs, of which 10 have special Public Health Informatics tracks. USDA has also developed training programs to ensure its first detectors are knowledgeable on diseases and pests of significance. For example, USDA has an accreditation program for veterinarians and views the cadre of veterinarians it has accredited as the front line of surveillance for diseases of significance that are not specifically monitored through a program.
These accredited state and federal veterinary officials—as we reported in 2005, approximately 80 percent of the veterinary workforce— are dispersed throughout the country and are trained to observe signs and symptoms of diseases and unusual occurrences of illness or death in animal populations. According to USDA, the United States depends extensively on accredited veterinarians for official functions, such as inspecting, testing, and certifying animal health. For plant surveillance, USDA’s National Plant Diagnostic Network began the National First Detector Training Program in 2003. The program offers training to aid USDA’s surveillance of plants for pests and diseases. First detectors are individuals who in the course of their activities are in a position to notice an unusual plant pest outbreak, a pest of concern, or symptoms of a pest of concern. The individuals may include growers, nursery producers, crop consultants, pesticide applicators, and master gardeners, among others. According to USDA officials, the training equips participants with the knowledge to detect and report unusual exotic pest or disease activity, which is key to initiating response and mitigation activities. Our analysis of relevant presidential directives and our discussion with federal officials with central responsibilities for monitoring disease and protecting national security indicate that a national biosurveillance capability depends upon systems and equipment that enable rapid detection and communication of signals that might indicate a potentially catastrophic biological event, quick and effective analysis of those signals, and timely dissemination of high-quality and actionable information to decision makers. 
In this vein, federal agencies have taken various actions designed to promote timely detection and situational awareness by developing (1) information sharing and analysis mechanisms, (2) laboratory networks to enhance diagnostic capacity, and (3) equipment and technologies to enhance early detection and situational awareness. Because the data needed to detect an emerging infectious disease or bioterrorism may come from a variety of sources, the ability to share and analyze data from multiple sources may help officials better collaborate to analyze data and quickly recognize the nature of a disease event and its scope. As illustrated in figure 5, observing related symptoms in human and animal populations, or cross-domain surveillance, may bring concerns into focus more quickly than monitoring human symptoms alone. This may be particularly important as many disease agents have the potential to be weaponized and many of the recent emerging infectious diseases are zoonotic. In reviewing the federal response to the 1999 West Nile virus outbreak, we reported that the analysis of the outbreak continued for weeks as separate investigations of sick people and of dying birds. Only after the investigations converged, and after several parties had independently explored other possible causes, was the link made and the virus correctly identified. We concluded that the time it took to connect the bird and human outbreaks signaled a need for better coordination among public and animal health agencies. One example of a federal initiative designed to improve sharing of biosurveillance information is DHS’s National Biosurveillance Integration Center’s (NBIC) Biological Common Operating Picture (BCOP), a manually updated Google Maps application showing the worldwide biological events currently being tracked. Officials can view the BCOP on the Homeland Security Information Network.
The BCOP provides a situational awareness tool for the National Biosurveillance Integration System (NBIS)—the community of federal and other stakeholders that have information that can be used to enhance the safety and security of the United States against potential biological events of national significance. NBIC supports the BCOP through a system—the Biosurveillance Common Operating Network (BCON)—that monitors, tracks, and disseminates available NBIS-partner information, but relies largely on information from publicly available sources, such as news articles. One of the primary data sources for BCON is an international information gathering service called Global Argus, a federally funded program in partnership with Georgetown University. The service searches and filters over 13,000 overseas media sources in more than 34 languages. A similar type of initiative underway at some federal agencies involves developing and maintaining communication tools for information sharing within specialized disciplines. These communication tools may include functions that allow users to send and receive information or view the status of ongoing events. One such tool is a Web-based forum that provides a secure environment for the exchange of information on case reports and allows users to request information or expertise from other users. For example, one such communication tool developed by CDC, known as Epi-X, provides a secure Web-based forum for public health officials to post case reports of conditions and ask other health officials whether they have also seen cases of the condition. The officials can also discuss similarities in the cases that may indicate how the disease is spreading and suggest mitigation measures—for example, product recalls—that could be implemented.
In this way, health officials can leverage both information and analytical capacity across agencies, levels of government, and different regions of the country to help support the early detection and situational awareness goals of biosurveillance. For example, the forum may help them to more quickly and comprehensively determine whether diseases seem to be widespread and what caused them and to discuss treatment options in a secure environment. For more information on specific communication tools, see appendix IV. In addition, we reported in May 2003 that electronic reporting of data can facilitate data exchange among different databases and allow more rapid and accurate analysis of information from multiple sources. CDC officials noted that laboratory reports could be received by CDC in 72 hours if exchanged through an electronic system, as opposed to the up to 2 weeks it can take for laboratory report hard copies to be sent through the mail. Further, because it facilitates data exchange, standardization can help support the vision of integrated surveillance systems articulated in HSPD-9 and 21. According to CDC officials, the potential benefits of electronic reporting of data in a standardized format are striking and can eliminate the need for analysts to dedicate valuable time to processing and translating data provided in different formats expressed using various terminologies. For example, during the 2001 anthrax event, the results of laboratory tests for anthrax were reported in different formats (e.g., emailed text, mailed hard copy, as attachments in different software programs), and using different terminology, such as “Bacillus anthracis,” “B. anthracis,” and “Anthrax.” These variations in report format and language required analysts at CDC to spend time translating and compiling the data into information that could inform decision making during the event. 
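The translation burden those variant terms created can be made concrete with a small sketch. The code below is purely illustrative (the mapping table and the `normalize` function are invented, not part of any CDC system): it shows how a shared standardized vocabulary lets software, rather than analysts, reconcile differently worded reports of the same pathogen.

```python
# Illustrative only: a tiny standardized-vocabulary lookup of the kind
# that spares analysts from hand-translating variant pathogen names.
# The table and function are invented, not part of any CDC system.

CANONICAL = {
    "bacillus anthracis": "Bacillus anthracis",
    "b. anthracis": "Bacillus anthracis",
    "anthrax": "Bacillus anthracis",
}

def normalize(reported_term):
    """Map a free-text pathogen name to its canonical form, or flag
    unrecognized terms for manual review."""
    key = reported_term.strip().lower()
    return CANONICAL.get(key, "UNMAPPED: " + reported_term)

# The three variants reported during the 2001 anthrax event all
# resolve to one canonical name.
for term in ["Bacillus anthracis", "B. anthracis", "Anthrax"]:
    print(term, "->", normalize(term))
```

The same principle underlies standards-based initiatives such as PHIN: once senders and receivers agree on formats and vocabularies, data from many sources can be combined without manual reconciliation.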
To support effective and efficient information sharing, some agencies have efforts underway to promote electronic reporting of information in a standardized format. For example, CDC’s Public Health Information Network (PHIN) initiative aims to advance the development of interoperable surveillance and other public health systems at federal, state, and local levels. The initiative defines data and messaging standards and provides guidance for public health entities to follow in building systems that meet compatibility and interoperability standards for enhanced electronic information sharing. Additionally, HHS, through initiatives to support nationwide health information exchange, has defined specific standards that promote the exchange of biosurveillance information among health care providers and public health authorities. Within CDC, several surveillance systems have been developed and implemented in accordance with PHIN standards to promote electronic information sharing. One of these is the National Electronic Disease Surveillance System (NEDSS). A primary goal of NEDSS is the ongoing, automatic capture and analysis of data that are already available electronically to minimize the problems of fragmented, disease-specific surveillance systems. The initiative is intended to promote efficient and effective data management and information sharing by eventually consolidating the data collection of CDC’s various programmatic disease surveillance activities in one place. For more information on programs that support standardization and electronic data exchange, see appendix IV. To further enhance and support biosurveillance activities, CDC, DHS, and other federal agencies have developed or taken action to gather electronic data from syndromic surveillance systems. Syndromic surveillance uses health-related data collected before diagnosis to look for signals or clusters of similar illnesses that might indicate an outbreak. 
Examples of syndromic surveillance data are prediagnostic health-related information like patients’ chief complaints recorded by a health care worker at the admissions desk of a hospital emergency room and information collected on over-the-counter sales of antidiarrheal medicines that could indicate gastrointestinal disease outbreaks. We reported in 2004 that because these syndromic systems monitor symptoms and other signs of disease outbreaks instead of waiting for clinically confirmed reports or diagnoses of a disease, some experts believe they can increase the speed with which outbreaks are identified. However, we also reported in September 2004 and November 2008 that the ability of syndromic surveillance to more rapidly detect emerging diseases or bioterror events has not yet been demonstrated, and questions remain about its cost-effectiveness. An example of a syndromic surveillance system is CDC’s BioSense, another system that was developed and implemented in accordance with data and message standards defined by the PHIN initiative. BioSense collects electronic data that are voluntarily shared by participating state, local, and other federal public health entities, including data related to infections, injuries, and chronic diseases. Currently the system collects chief complaint data from 570 hospitals and 1,100 federal clinics, and sales data from over 10,000 pharmacies. Because the data and messages are sent and received in standardized formats, the data can be integrated into the BioSense system without analysts having to manually interpret or manipulate them, and the system analyzes the data to enhance the nationwide situational awareness capabilities of public health analysts at CDC. Even as standardization and data and information-sharing tools improve, other challenges will likely impede information sharing for biosurveillance purposes across federal, state, and local levels of government. 
We and others have noted that numerous challenges have impeded efforts to coordinate and collaborate across organizational boundaries to integrate biosurveillance and other national security activities. Some such challenges are inherently intergovernmental issues and arise because most of the information needed for biosurveillance is generated and owned outside of the federal government. Therefore, there is limited assurance that state and local governments can or will fully participate in federal information sharing and standardization initiatives like PHIN. CDC officials expressed concern about the differing levels of capacity and willingness among states to participate in data standardization and electronic information-sharing initiatives. Moreover, information-sharing challenges also occur among the federal agencies. As we reported in December 2009, NBIC has faced collaboration challenges and has been largely unsuccessful in obtaining from federal partners key resources needed to support data integration and shared analytical capacity. Federal officials from agencies participating in the National Biosurveillance Integration System—such as CDC, USDA, and FDA—described challenges to sharing such information, including concerns about trust and control over sensitive information before it is vetted and verified. In addition, NBIC officials told us and we previously reported that much of the information available to help achieve biosurveillance goals is unstructured and not readily processed by a computer system, while data that are more easily processed by computers often lack the context needed to make appropriate conclusions about whether anomalies actually signal a potential problem. Over the past decade, the federal government has leveraged and enhanced its laboratory capabilities, capacity, resources, and expertise for detecting and warning about biological threat agents by developing and implementing laboratory networks at the federal, state, and local levels. 
In addition, the federal government led an effort to establish a consortium of laboratory networks that further integrates a number of these networks. In June 2005, 10 federal agencies signed a Memorandum of Agreement establishing the Integrated Consortium of Laboratory Networks (ICLN). The purpose of this consortium is to establish a coordinated and operational system of laboratory networks that provide timely, high-quality, and interpretable results for early detection of acts of terrorism and other events that require integrated laboratory response capabilities. ICLN’s individual laboratory networks focus on detecting biological threat agents that affect humans, animals, or plants or that contaminate the air, water, or food supply. The laboratory networks that make up the ICLN are shown in table 1. The federal government is developing and implementing equipment and technologies that can provide additional information to support early detection and situational awareness. For example, the federal government is applying diagnostic technologies to help detect and monitor biological events. Studying disease agents at the molecular level can provide information for situational awareness. Techniques to determine and attribute the source of biological events can be important for both natural and intentional events. In a natural event—such as a foodborne illness—it can help speed detection, focus the investigation, and characterize the extent and severity of disease. For intentional events, it can provide critical information to help determine the scope of the attack and contribute to law enforcement investigations. One example of such a system is CDC’s PulseNet, a national network of public health laboratories that perform DNA “fingerprinting” as a means to help with early identification of outbreaks of foodborne illness with a common source and enhance situational awareness during an event. 
The PulseNet program provides a tool for participating laboratories to upload and then compare the genetic “fingerprints” of foodborne pathogens isolated from samples taken from sick individuals. The network can identify and label each fingerprint pattern to permit the rapid comparison of these patterns with others in the PulseNet database. PulseNet officials told us that this process can take roughly 2 weeks after receiving a sample. FDA, USDA, CDC, and state public health officials have access to the PulseNet database. See appendix IV for more information on foodborne disease monitoring systems and diagnostic technologies. Federal agencies have also developed technologies to detect biological agents in drinking water and air. Drinking water utilities across the country have long been recognized as potentially vulnerable to terrorist attacks of various types, including physical disruption, bioterrorism, chemical contamination, and cyber attack. People also face the threat of becoming ill from inhaling certain biological agents—whether naturally occurring or intentionally weaponized and released to cause disease and disruption. EPA has developed a system to detect contamination of drinking water, and USPS, DOD, and DHS have developed sensor technologies to detect aerosolized biological agents in the air. EPA’s Water Security Initiative program developed a contamination warning system to allow local water utilities to monitor drinking water for contamination by chemical, biological, and radiological agents. The contamination warning system has been designed to provide timely detection and appropriate response to biological events. The system, however, is not widely distributed. Currently, the Greater Cincinnati Water Works in Cincinnati, Ohio, is the only locality that has the system fully operating. 
EPA is assisting local water districts in four other locations—New York City, San Francisco, Dallas, and Philadelphia—with implementation of the contamination warning system as part of a five-city pilot project; EPA officials stated that each pilot is to last for four years and be complete in 2012. Local water utilities are to implement this system on a voluntary basis and operate it at their own expense. According to EPA officials, no contamination warning system has yet been proven to be effective and sustainable for drinking water systems, but the Water Security Initiative is attempting to design, deploy, and test an effective and sustainable system. For more information on the Water Security Initiative, see appendix IV. USPS, DOD, and DHS have developed and implemented technologies to sample the air and test for specific biological agents. One of these, DHS’s Biowatch program, has been implemented in more than 30 metropolitan areas and tests for the presence of multiple biological threat agents. USPS has deployed an indoor monitoring system—Biohazard Detection System—at mail distribution centers nationwide that automatically detects and warns of the presence of the anthrax organism in the air surrounding mail-sorting equipment. DOD has also developed indoor and outdoor monitoring systems to detect airborne chemical, biological, radiological, or nuclear agents in order to protect military interests. However, these sensor technologies are limited in their ability to provide early detection because there are constraints on the speed with which the diagnostic testing can be performed. For example, DHS’s Biowatch sensor technology depends on air filters that must be collected and transported to a laboratory for diagnostic testing, a process that can take more than a day. 
According to senior officials from the Office of Health Affairs and the Science and Technology Directorate at DHS, research and development to eliminate the need for manual collection of samples is underway, but the science needed to do so may not yet be fully mature. Additional information on the USPS, DOD, and DHS biodetection systems can be found in appendix IV. While some high-level biodefense strategies have been developed, there is no broad, integrated national strategy that encompasses all stakeholders with biosurveillance responsibilities that can be used to guide the systematic identification of risk, assessment of resources needed to address those risks, and the prioritization and allocation of investment across the entire biosurveillance enterprise. Further, while numerous agencies have biosurveillance responsibilities, a single focal point for this effort has not been established. We have reported that developing effective national strategies and establishing a focal point with sufficient time, responsibility, authority, and resources can help ensure successful implementation of complex interagency and intergovernmental undertakings, such as providing a national biosurveillance capability. We reported in February 2004 that strategies themselves are not endpoints, but rather, starting points, and, as with any strategic planning effort, implementation is the key. This work also reported that the ultimate measure of these strategies’ value will be the extent to which they are useful as guidance for policy and decision makers in allocating resources and priorities. However, for an undertaking such as developing a national biosurveillance capability, those policy and decision makers are spread across the interagency and intergovernmental network. 
In our work related to combating terrorism, we reported that an interagency and intergovernmental undertaking can benefit from the leadership of a single entity with sufficient time, responsibility, authority, and resources needed to provide assurance that the federal programs are based upon a coherent strategy, are well coordinated, and that gaps and duplication in capabilities are avoided. According to our analysis of requirements in laws and presidential directives related to biosurveillance, a focal point has not been established with responsibility and authority for ensuring the development of a robust, integrated, national biosurveillance capability. The mission responsibilities and resources needed to develop a biosurveillance capability are dispersed across a number of federal agencies, and, according to officials at a number of federal agencies—chief among them CDC, USDA, and DHS—these agencies have capabilities that could be leveraged to support a robust, integrated, national biosurveillance capability. However, our analysis indicates that no entity has the responsibility, authority, and accountability for working across agency boundaries to guide and oversee the development and implementation of a national effort that encompasses all stakeholders with biosurveillance responsibilities. For example, CDC has been given the operational lead for developing the vision of HSPD-21 for a human health biosurveillance capability. However, according to CDC officials, responsibility for developing a national biosurveillance capability that includes human as well as animal, plant, food, and environmental surveillance has not been assigned to a single entity such as an intergovernmental council, a federal agency, or an individual official. Officials in various agencies have taken the lead to fulfill their agencies’ biosurveillance missions, but they lack authority to direct other agencies with whom they must partner to take specific action. 
For example, CDC has undertaken some efforts to coordinate federal efforts relating to human and zoonotic disease surveillance, but according to CDC officials, it has limited authority to ensure the implementation of specific activities at other agencies. According to CDC officials, an overarching organizational mechanism and clearly articulated roles and responsibilities across the separate surveillance programs that serve a range of purposes could help address common surveillance issues within CDC and across the biosurveillance enterprise by coordinating communication and planning. Officials from CDC, DOD, DHS, USDA, and HHS stated that having a focal point would help coordinate federal efforts to develop a national biosurveillance capability. Because the mission responsibilities and resources needed to develop a biosurveillance capability are dispersed across a number of federal agencies, efforts to establish a national biosurveillance capability could benefit from a designated focal point that provides leadership for the interagency community. The report of the Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism stated that an attempted biological attack somewhere in the world is likely within the next few years and concluded that the nation was unprepared for such an event. A key component of preparedness is the ability to detect a dangerous pathogen early and assess its potential spread and effect. Various federal statutes and presidential directives call for biosurveillance actions, culminating with HSPD-21’s most recent call for a robust, integrated national biosurveillance system that draws upon and synthesizes the capabilities of multiple existing systems across a number of federal departments and agencies. 
The challenges in achieving this vision, such as acquiring and retaining staff with sophisticated skills and melding disparate information and data systems, are many and difficult to address successfully. Biosurveillance must operate in a complex environment of many players and an evolving threat. Because a biological incident, whether originating in nature or from deliberate acts, could emerge through any number of means—plant, animal, air, and human transmission—it is essential that federal agencies collaborate to leverage their capabilities and find effective and efficient solutions and strategies for detection and analysis. Although efforts like the National Biosurveillance Strategy for Human Health, USDA’s Strategy for the National Animal Health Surveillance System, and DHS’s National Biosurveillance Integration Center are potentially useful steps in developing a robust, national biosurveillance capability, they do not provide a unifying framework and structure for integrating dispersed capabilities and responsibilities. Further, none of the current players have the authority to guide and oversee the development and implementation of a national effort that encompasses all stakeholders with biosurveillance responsibilities. Without a unifying framework, structure, and an entity with the authority, resources, time, and responsibility for guiding its implementation, it will be very difficult to create an integrated approach to building and sustaining a national biosurveillance capability as envisioned in HSPD-21. 
In order to help build and maintain a national biosurveillance capability—an inherently interagency enterprise—we recommend the Homeland Security Council direct the National Security Staff to, in coordination with relevant federal agencies, take the following two actions: (1) Establish the appropriate leadership mechanism—such as an interagency council or national biosurveillance director—to provide a focal point with authority and accountability for developing a national biosurveillance capability. (2) Charge this focal point with the responsibility for developing, in conjunction with relevant federal agencies, a national biosurveillance strategy that: defines the scope and purpose of a national capability; provides goals, objectives and activities, priorities, and milestones; assesses the costs and benefits associated with supporting and building the capability and identifies the resource and investment needs, including investment priorities; clarifies roles and responsibilities of leading, partnering, and supporting agencies; and articulates how the strategy is integrated with and supports other related strategies’ goals, objectives, and activities. We provided a draft of this report for review to the Departments of Homeland Security (DHS), Health and Human Services (HHS), Agriculture (USDA), Commerce (DOC), Defense (DOD), Interior (DOI), Justice (DOJ), State (State), Transportation (DOT), and Veterans Affairs (VA); the Environmental Protection Agency (EPA); the United States Postal Service (USPS); and the National Security Council (NSC). DHS provided written comments on the draft report, which are summarized below and presented in their entirety in appendix V of this report. DOC, DOD, DOI, DOJ, HHS, State, DOT, VA, EPA, USDA, USPS, and the NSC did not provide written comments. We incorporated technical comments from DOC, DOD, DOI, DOJ, HHS, State, DOT, VA, EPA, USDA, and the USPS where appropriate. 
In written comments, DHS generally concurred with our findings and recommendations. In particular, DHS noted that it is important to develop a strategy that encompasses all biological domains. Further, DHS stated that the department’s National Biosurveillance Integration Center, in conjunction with its NBIS partners, has identified strategic planning gaps and could also be helpful in providing leadership for the strategic planning effort. DHS also noted that the statutory responsibilities and expectations assigned to NBIS federal participants could serve as guideposts for any White House Homeland Security Council leadership mechanism. We are sending copies of this report to the Special Assistant to the President for National Security Affairs; the Attorney General; the Secretaries of Homeland Security, Health and Human Services, Agriculture, Commerce, Defense, Interior, State, Transportation, and Veterans Affairs; the Administrator of the Environmental Protection Agency; the Postmaster General; and interested congressional committees. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or JenkinsWO@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. The Implementing Recommendations of the 9/11 Commission Act of 2007 required GAO to describe the state of federal, state, local, and tribal government biosurveillance efforts, the duplication of biosurveillance efforts, the integration of biosurveillance systems, and the effective use of resources and expertise at these levels of government. We are addressing these questions in a series of three reports. 
The first of the series, issued in December 2009, focused on the Department of Homeland Security’s (DHS) National Biosurveillance Integration Center (NBIC). This report describes domestic biosurveillance efforts at the federal level; we did not review efforts by the federal government to create or improve on international biosurveillance programs. A third report, which we expect to issue during the Winter of 2011, will describe biosurveillance efforts at the state, local, tribal, and territorial levels of government. Specifically, this report examines the following: (1) federal agency efforts to provide resources—personnel, training, equipment, and systems—that support a national biosurveillance capability; and (2) the extent to which mechanisms are in place to guide development of a national biosurveillance capability. To address these objectives, we reviewed key legislation, presidential directives, and agency-issued policies related to biosurveillance. Specifically, we reviewed the Homeland Security Act of 2002, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, the Pandemic and All Hazards Preparedness Act of 2006, the Implementing Recommendations of the 9/11 Commission Act of 2007, and Homeland Security Presidential Directives (HSPD) 5, 7, 8, 9, 10, and 21. These laws and presidential directives task various federal agencies with specific biosurveillance responsibilities and related mission activities, describe biosurveillance activities that agencies are to perform, and define key terms, among other things. To determine the elements of a capability, we reviewed DHS’s Target Capability List, which specifies that capabilities are made up of the personnel, training, equipment and systems, planning, and leadership necessary to accomplish a mission. 
We consulted our prior reports, including reports on the public health system, emerging infectious diseases, the use of information technology tools to support homeland security and national security goals, protection of animal health and the agriculture sector, food safety and defense, and combating terrorism. See Related GAO Works for an expansive list. We used the information in the laws and presidential directives, as well as previous GAO work, to identify federal departments and agencies responsible for biosurveillance. We also considered federal departments and agencies that NBIC had included among its National Biosurveillance Integration System (NBIS) partners. In addition to DHS, NBIC has identified 11 NBIS-partner agencies, which it considers to be part of the NBIS interagency community. Those departments and agencies are the Departments of Agriculture (USDA), Commerce, Defense (DOD), Health and Human Services (HHS), Interior, Justice, State, Transportation, and Veterans Affairs, as well as the Environmental Protection Agency (EPA) and the United States Postal Service (USPS). We considered interviews we conducted with officials with responsibilities for participating in the NBIC community, as well as interviews with officials responsible for a number of other biosurveillance-related mission activities, to inform the findings and underlying context of this report. Although we conducted interviews at multiple components of 12 federal departments, we focused on information collected at 7 federal departments that have key roles and responsibilities—based on agency mission, statutory responsibilities, presidential directives, or programmatic objectives—for biosurveillance and related mission activities, including protection of public health, agriculture, and national security. These departments are USDA, DOD, DHS, HHS, DOI, EPA, and USPS. 
Further, as USDA, HHS, and DHS have the larger and more direct mission responsibilities for biosurveillance and related mission activities, we focused most heavily on the contributions of their activities to support a national biosurveillance capability. For the purposes of this review, we limited our evaluation to domestic biosurveillance activities and how these domestic activities may contribute to a national biosurveillance capability. We did not review federal efforts to enhance international disease surveillance. Specifically, we met with and reviewed documents from officials in the agencies shown in table 1. We reviewed publicly available documents, including organizational charts, mission statements, memoranda of understanding, and program descriptions from these agencies to identify programs that may contribute to disease surveillance, early detection of biological events, or improved situation-specific information during a biological event. We also reviewed previously assembled lists of biosurveillance or disease surveillance programs compiled in our prior reports and by other federal agencies. These include a portfolio of biosurveillance programs completed by CDC in October 2008 and the U.S. Animal Health and Productivity Surveillance Inventory assembled by USDA. NBIC has a biosurveillance mission specified in the Implementing Recommendations of the 9/11 Commission Act of 2007, which requires interagency coordination across the federal government to detect and provide warning of biological events of national concern. As such, we reviewed NBIC operational documents that describe the federal agencies that participate in NBIC’s biosurveillance activities. To determine the extent to which mechanisms are in place to support a national biosurveillance capability, we reviewed strategic plans issued for supporting the nation’s biodefense goals—which include biosurveillance—for the extent to which these plans incorporated biosurveillance objectives. 
These plans included the National Health Security Strategy, the National Security Council’s National Strategy for Countering Biological Threats, and the National Response Framework. We also reviewed documents from individual agencies’ efforts to pursue their biosurveillance mission, in order to determine the extent to which individual agencies’ efforts may contribute to a national biosurveillance capability. These documents include: The National Biosurveillance Strategy for Human Health, NBIC’s Concept of Operations for the National Biosurveillance Integration System, USDA’s National Animal Health Surveillance System strategic plan, and FDA’s Food Protection Plan. We also reviewed reports issued by the National Academies’ Institute of Medicine, which analyzed the existing capacity of the United States to detect and respond to emerging microbial threats, the limitations of disease surveillance, and costs and benefits of existing biosurveillance programs. We reviewed the approach used and the information provided in the Institute of Medicine studies and found them to be credible for our purposes. We met with federal officials who had responsibility for specific disease surveillance programs or were directly involved in other federal biosurveillance activities, such as representing the department as part of NBIC activities or having responsibility for implementing the department’s responsibilities in relevant HSPDs. We interviewed these officials on the function of the specific disease surveillance program, including the process of detection, information-sharing mechanisms, and time frames in which information is generated and shared. 
In addition, we interviewed these officials on the degree to which the federal government has built a national biosurveillance capability, how specific programs could contribute to a national capability for early detection or situational awareness of biological events, the degree to which federal programs are integrated with each other, and the limitations of these programs in supporting a national biosurveillance capability. We analyzed this information to determine how individual agencies’ programs could contribute to building the personnel, training, and equipment and systems needed for a national biosurveillance capability. We also interviewed agency officials from programs that have responsibilities for carrying out the relevant HSPDs to determine the extent to which individual agencies have created mechanisms for integrating data, information sharing, and implementing new biosurveillance techniques. We also interviewed these officials on the limitations individual agencies have in building this capability and compared this information with our previous work on identifying a focal point. These officials included senior officials with CDC’s Biosurveillance Coordination Unit, HHS’s Office of the Assistant Secretary for Preparedness and Response, DHS’s NBIC, USDA’s Centers for Epidemiology and Animal Health, and DOD’s National Center for Medical Intelligence. In addition, because public health activities are primarily administered at the state and local levels of government, we met with representatives from nonprofit and public health professional organizations that have biodefense or disease surveillance-related missions, as well as state and local organizations, in order to further identify federal programs or initiatives that may contribute to biosurveillance. These include the Association of State and Territorial Health Officials, the Council of State and Territorial Epidemiologists, and the National Association of County and City Health Officials. 
These organizations represent state and local epidemiologists, public health organizations, and officials involved in public health at the state and local levels. In particular, the Council of State and Territorial Epidemiologists coordinates the development of the National Notifiable Disease List. In addition, we met with experts from research organizations that study biodefense issues, including the University of Pittsburgh Center for Biosecurity, the Congressional Research Service, and the National Academies’ Institute of Medicine. These organizations identified biosurveillance efforts at the federal level, discussed the status of the federal government’s efforts to build a national biosurveillance capability, and described limitations on the federal government’s biosurveillance efforts and efforts to build a robust and integrated national biosurveillance capability. During our review of documents and interviews with knowledgeable officials, we compiled a list of more than 100 programs from across the federal government that may be relevant to biosurveillance. We also asked federal officials to explain how these programs may contribute to detecting biological events or providing situation-specific information to decision makers during an ongoing event, and to identify other programs to consider for inclusion in our study. For each program identified, we interviewed officials and requested descriptive information on their program, such as the coverage and frequency of populations surveyed; diseases on which data are collected by these biosurveillance efforts and their characteristics; how data are used to conduct biosurveillance; how information is reported to support early detection of a biological event or improved information during an event; the status of these efforts; and costs to operate these efforts. These programs were included in our catalog because they may contribute to biosurveillance in one of the following five ways: 1. 
Provide information to establish disease baselines, such as infection rates and geographical distribution of disease outbreaks.
2. Provide opportunities for astute clinicians to detect outbreak signals, such as collecting syndromic data that, when analyzed, may indicate an emergent infectious disease.
3. Provide disease-specific information to enhance response; for instance, data which may be used to identify and trace sources of detected outbreaks.
4. Represent a surveillance effort designed to shorten the time to detect disease outbreaks, such as environmental sensors designed to detect specific biological agents.
5. Provide tools to integrate data or coordinate information sharing; for instance, communication platforms on which analysts can discuss biosurveillance issues of concern.
For each agency in our review, we compiled the information on each program into a standard profile and validated this information with program officials. We asked these program officials to verify the accuracy of the information, to add missing information, or to make technical comments, which we incorporated as appropriate. We also asked officials to identify additional programs for us to consider including in this review, which we added as appropriate. These selected efforts do not represent the total universe of biosurveillance efforts, nor does the catalog represent a statistically representative sample of federal biosurveillance efforts. In addition, we did not include programs or initiatives that are led by state, local, international, or private entities; do not specifically support biosurveillance activities; or are classified systems. Some programs or initiatives may support biosurveillance on a case-by-case basis during a biological event but are not regularly used for that purpose. 
For example, some systems we identified track weather patterns or map transportation infrastructure, which may be used to estimate the severity of an outbreak or predict a disease’s epidemiology. These programs are not included in the selected catalog. Finally, we did not evaluate the efficiency or effectiveness of the biosurveillance efforts that we identify in the catalog. We conducted this work from December 2008 through June 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following information appears as interactive content in the body of the report when viewed electronically. The content associated with each point on the map describes a disease event and includes information on the transmission and symptoms of the disease. The content appears in print form below in alphabetical order by disease name. Traditional disease surveillance systems designed to collect information on the health of humans, animals, and plants support biosurveillance efforts by recording national health and disease trends and providing specific information about the scope and projection of outbreaks to inform response. Traditional systems, however, rely on time-intensive testing and confirmation practices as well as data not owned by the federal government, which present challenges that limit their ability to provide timely detection and situational awareness. 
Monitoring for disease at the national level establishes a baseline understanding of disease characteristics that enables officials to recognize anomalous disease occurrences within the United States. Detecting a signal that warns of a potential or imminent biological threat of national concern requires the ability to discern whether disease occurrence is abnormal based on its general characteristics, as well as where, when, and how severely the disease has historically occurred. This information is also useful for projecting how an outbreak may progress during response to a potentially catastrophic biological event. The U.S. government has a long history of monitoring human, animal, and plant health—in some cases for more than a century—to help limit illness, loss of life, and economic impact. Disease surveillance for human health at the national level was established to assess the status of the public’s health, develop policy to define public health priorities, and provide assurance of the prevention and control of disease. The federal government uses a list of nationally notifiable diseases as the foundation of its human health surveillance efforts. Fifty-seven jurisdictions, including state and local health departments, voluntarily report cases of certain diseases named on the list of nationally notifiable diseases to the CDC. CDC uses these reports from the states to monitor national health trends, formulate and implement prevention strategies, and evaluate state and federal disease prevention efforts. In addition to the National Notifiable Disease Surveillance System, CDC maintains programs aimed at detecting and preventing specific diseases, such as influenza. (For more information on specific programs, see app. IV.) In general, these programs rely on participating health providers to send case reports to CDC on a periodic basis. According to CDC, the timeliest of these disease-specific surveillance programs have participants report the data weekly. 
Other programs may send information to the CDC on a monthly or annual basis. CDC compiles the data, checks for accuracy, clarifies inconsistencies, and reports national-level data at regular intervals. Information that is collected through the National Notifiable Disease Surveillance System and other public health surveillance programs can be applied to determine the scope and forecast the course of an outbreak, enhancing the situational awareness that guides decision makers’ response efforts. Similarly, to help protect the nation’s agricultural sector, USDA has routine reporting systems and disease-specific surveillance programs for domesticated animals and some wildlife that can provide information to support the early detection goal of biosurveillance. Information gathered through these efforts can also help characterize and project the nature and scope of an outbreak—for example, by providing the number of infected animals and where they are located—to enhance situational awareness. For instance, state animal health officials obtain information on the presence of specific, confirmed clinical diseases in livestock, poultry, and aquaculture in the United States from multiple sources—including veterinary laboratories, public health laboratories, and veterinarians—and report this information to the USDA’s National Animal Health Reporting System (for more information, see app. IV). USDA has also developed control and eradication programs for specific diseases that threaten animal health, to reduce the incidence of disease and to provide timely detection of some foreign animal diseases, resulting in smaller outbreaks. These programs are carried out in targeted high-risk populations of various animal and aquaculture species to identify cases of the disease, stem the spread of disease, and take measures to ensure certain diseases that are no longer common in the United States do not reemerge. 
In addition, USDA coordinates with state departments of agriculture, state foresters, universities, and industry partners to conduct pest detection surveys of agricultural plants and forests annually. By working with states to identify and prioritize pest threats of national interest and coordinate pest surveys, USDA’s Pest Detection Program provides nationwide information about the presence of select plant pests. According to officials at DOI, currently there is no national reporting system for diseases of wildlife, making it difficult to track national trends in wildlife disease. DOI’s U.S. Geological Survey’s National Wildlife Health Center is charged with addressing wildlife disease throughout the United States. This center provides disease diagnosis, field investigation, disease management and research, and training. It also maintains a database on disease findings in wild animals and on wildlife mortality events. For foodborne illness, CDC, USDA, and FDA can partner to use their traditional surveillance activities to enhance situational awareness during outbreaks. First, CDC identifies an outbreak of foodborne illnesses based on reports from state and local health care providers. Then, it works with the other federal partners to characterize the extent of the illness and identify its source, using a system called OutbreakNet (see app. IV for more information). While CDC works with its external partners to collect additional information about identified cases—such as characteristics of the people affected, the types of food consumed, and the possible location of the consumption—FDA and USDA work with state and local food safety agencies to gather additional information about the source by conducting inspections and testing food samples. 
Although information provided by traditional surveillance activities is essential for biosurveillance purposes, the nature of those systems presents inherent challenges that prevent them from being wholly sufficient as tools for timely detection and enhanced situational awareness. We and others have reported that traditional disease reporting has generally been slow and incomplete, and, therefore, not well suited to provide early detection and warning of a disease outbreak or pest infestation. For example, federal agencies collecting data from state, local, and private-sector entities generally rely on voluntary participation, which limits the federal government’s ability to institute controls at the initial collection and data entry points to help ensure accuracy, completeness, or timely reporting. As we reported in 2004, most states maintain modified versions of the national notifiable disease list that reflect the public health priorities of the particular state but do not consistently match CDC’s list of notifiable diseases. Therefore, some local health care providers are not obligated to report diseases on the national notifiable disease list. For instance, five states (Alabama, Nevada, New Hampshire, Oregon, and Washington) do not require local health care providers to report cases of smallpox—which could be used by terrorists as a biological weapon—even though CDC requests this information. We also previously reported that state officials have experienced significant underreporting by health care providers in their efforts to collect disease data, and that underreporting can adversely affect public health efforts by leading to erroneous conclusions about trends in incidence, risk factors for contracting a disease, appropriate prevention and control measures, and treatment effectiveness. 
According to the Institute of Medicine, many health care providers do not fully understand their role in infectious disease surveillance, including their role as a source of data. Furthermore, despite the existence of state notifiable disease lists and related laws, some providers may be unaware of basic reporting requirements. Also posing a challenge to timely detection and situational awareness is the need for laboratory confirmation. Although mechanisms exist for reporting suspected cases of disease, traditional public health systems rely on laboratory-confirmed cases. Laboratory confirmation, while important to establishing accurate information, adds up to 2 weeks to the reporting process, as results are analyzed and communicated at the state and local levels before they are reported to federal health officials. Officials from CDC and USDA attribute this delay to the inability of labs to communicate test results electronically. Timely detection and situational awareness are also problems for livestock biosurveillance. For example, we reported in 2005 that USDA does not always use rapid diagnostic tools to test animals at the site of an outbreak. Although, according to experts, on-site use of rapid diagnostic tools is critical to speeding diagnosis, containing the disease, and minimizing the number of animals that need to be slaughtered, USDA employed them only within selected laboratories. DOD used rapid diagnostic tools to identify disease agents on the battlefield, but USDA officials considered the technology to be still under development. A 2002 USDA exercise estimated that, under the current approach, a foreign animal disease such as foot-and-mouth disease would spread rapidly, necessitating the slaughter of millions of animals and causing staggering financial losses—precisely the type of high-visibility destruction some experts told us terrorists seek. 
In response to our recommendation, USDA is in the process of evaluating the costs and benefits of using penside rapid diagnostic tools. In addition, we reported that animal numbers and locations are generally not known, and without a national animal identification program, surveillance, trace back, and disease containment are a challenge. Below we describe selected systems owned or developed by federal agencies that may be used to detect or provide enhanced information about outbreaks relating to human, animal, and plant health, as well as to monitor food and the environment. This list encompasses information reported by federal agencies on electronic communications and surveillance systems as well as networks of laboratories and health officials engaged in disease surveillance. Human Health, Animal, Plant, Food, Air, Water The Integrated Consortium of Laboratory Networks is intended to facilitate the development and maintenance of a system of laboratory networks that is built upon established laboratory networks such as the Food Emergency Response Network; the Laboratory Response Network; the National Animal Health Laboratory Network; the National Plant Diagnostic Network; the Environmental Response Laboratory Network; and other emerging networks within the federal government with responsibilities and authorities for laboratory preparedness and response. These networks are to provide timely, high-quality, and interpretable results for the early detection and effective consequence management of acts of terrorism and other events requiring an integrated laboratory response. The Integrated Consortium of Laboratory Networks has created a capabilities assessment of member network laboratories and established working groups to address deficiencies identified by member lab networks. 
Additionally, the Integrated Consortium of Laboratory Networks provides a forum for laboratory network representatives to provide assistance in the event of a biological, chemical, or radiological contamination emergency. FY 2009 Costs (thousands) In development since 2005 and will be transitioned to the Office of Health Affairs once operational (currently targeted for fiscal year 2011) Diseases resulting from an act of terrorism involving a biological or chemical agent or toxin, radiological contamination, or a naturally occurring outbreak of an infectious disease that may result in a national epidemic. Laboratory networks have modeled threats posed by chemical, biological, and radiological agents and are developing the capability to support characterization, containment, and recovery from such attacks. The Department of Homeland Security’s BioWatch Program is an early warning system composed of collectors capable of detecting aerosol releases of select biological agents, natural and man-made. The Program develops and disseminates guidance and other documents geared toward the public health community, which provide information necessary to prepare for and respond to the detection of an agent of interest. The program also evaluates state and local implementation of guidance documents through an active exercise program which serves to assure that BioWatch coverage areas have the capability to respond to a detection. According to DHS, the combination of early warning and rapid public health response can substantially minimize the potentially catastrophic impact on the population. FY 2009 Costs (thousands) System is operational. BioWatch sensors were first deployed to major urban areas across the United States in 2003 DHS has identified scenarios involving the release of biological materials in urban areas that could result in significant casualties and economic disruption. 
Human Health, Animal, Plant, Food, Air, Water The Implementing Recommendations of the 9/11 Commission Act (9/11 Commission Act) established, within the Department of Homeland Security, the National Biosurveillance Integration Center. The center is tasked with enhancing the capability of the federal government to rapidly identify, characterize, localize, and track biological events of national concern by integrating and analyzing data related to human health, animal, plant, food, and environmental monitoring systems, and to disseminate alerts if any such events are detected. A central responsibility is to develop and oversee the National Biosurveillance Integration System, a federal interagency consortium and information management concept that was established to integrate and analyze biosurveillance-relevant information to achieve earlier detection and enhanced situational awareness. FY 2009 Costs (thousands) NBIC has been operational since 2007 Any bio-event involving the intentional use of biological agents as well as emergent biohazards, such as accidental release of biological agents or natural disease outbreaks. A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. Airbase/Port Detector System (Portal Shield) The Portal Shield sensor system was developed to provide early warning of biological threats for high-value, fixed-site assets, such as air bases and port facilities. Portal Shield can detect and identify up to 10 biological warfare agents simultaneously, within 25 minutes of release. 
FY 2009 Costs (thousands) CBRNE operative personnel managing the system Biological agents could pose a threat to soldiers on the battlefield Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE) ESSENCE is used in the early detection of infectious disease outbreaks and provides epidemiological tools that can be used to investigate disease outbreaks. It utilizes ambulatory data from hospitals and clinics. Epidemiologists can track, in near real time, symptoms being reported in a region through a daily feed of reported data (such as working diagnoses indicated by codes assigned by local health care staff). ESSENCE uses the daily data downloads, along with traditional epidemiological analyses that use historical data, for baseline comparisons in order to improve detection. FY 2009 Costs (thousands) Health surveillance is critical to medical readiness and force health protection. Armed Forces Health Surveillance Center US Army Public Health Command (formerly known as US Army Center for Health Promotion and Preventive Medicine) The mission of the Armed Forces Health Surveillance Center is to analyze, interpret, and disseminate information related to the status, trends, and determinants of the health of U.S. military service members and military-associated populations. The center identifies obstacles to medical readiness by linking various databases that communicate information relevant to service member health and fitness. The Armed Forces Health Surveillance Center maintains the Defense Medical Surveillance System, a database containing up-to-date and historical data on disease and medical events as well as longitudinal data on personnel and deployments. 
The Defense Medical Surveillance System provides the data supporting the Department of Defense Serum Repository, which, as of spring 2010, includes over 50 million serum specimens drawn from service members (since the late 1980s) and used to perform longitudinal analyses of service member health. The system also supports the Defense Medical Epidemiology Database, an application that provides remote user access to selected deidentified data (i.e., data with patient identifying characteristics removed). The Armed Forces Health Surveillance Center also operates the Global Emerging Infectious Surveillance and Response System, a program that conducts laboratory-based surveillance for emerging infectious diseases within the U.S. military and in foreign civilian populations through leveraging a network of research and clinical laboratory partners in the United States and overseas. FY 2009 Costs (thousands) Defense Medical Surveillance System/DOD Serum Repository/ Defense Medical Epidemiology Database: $6,000 Global Emerging Infectious Surveillance and Response System: $52,000 All health threats to U.S. military personnel, including trauma, psychological stress, environmental hazards, and infectious diseases. Laboratory network surveillance is focused on infectious diseases affecting humans, including some animal diseases that also affect humans, as well as some pathogens that could contaminate food. Health surveillance is critical to medical readiness and Force Health Protection. Division of Migratory Bird Management The Division of Migratory Bird Management is largely responsible for monitoring the health of migratory bird populations and issuing guidelines for conservation and sustainable harvest. 
The Division of Migratory Bird Management’s program activities are restricted to sampling migratory bird populations and testing those populations for indicators of diseases affecting birds or zoonotic diseases that could affect human populations, such as avian (H5N1) influenza. The United States Geological Survey laboratory in Madison, Wisconsin, conducts laboratory analysis of submitted samples. The Division of Migratory Bird Management also collaborates with USDA’s Animal and Plant Health Inspection Service to test bird samples and to survey bird populations. FY 2009 Costs (thousands) Infectious diseases affecting migratory birds Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. Fish and Wildlife Service inspectors work with public health officials and other federal inspectors at ports of entry to enforce wildlife regulations and ensure compliance with international wildlife laws as they pertain to wild animal imports. Some inspections consist of examinations of import paperwork, while others may consist of physical inspections of the animals being imported to confirm that the shipment contents match corresponding documents. The decision to physically inspect a shipment could depend on the type of commodity, country of origin, or importer history. Random physical inspections are also conducted. FY 2009 Costs (thousands) FWS inspectors stationed at ports of entry Diseases affecting animal populations are not the primary focus of the wildlife inspection program, although if wildlife inspectors note the presence or suspected presence of disease, they will notify the appropriate federal agency. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. 
Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. The National Wild Fish Health Survey is an ongoing effort to sample and test wild fish for both specific diseases identified as key threats and for emerging infectious diseases. Fish samples are collected by state and tribal governments for testing at one of Fish and Wildlife Service’s nine regional laboratories. A national database collects and maintains all laboratory test results. Fish and Wildlife Service inspectors also examine all federal fish hatcheries at least annually and some are examined biannually. States may also request testing in the event of a major fish die-off or apparent disease outbreak. FY 2009 Costs (thousands) Infectious diseases affecting fish populations Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. Office of Ground Water and Drinking Water The Water Security Initiative addresses the risk of intentional contamination of drinking water distribution systems by promoting the voluntary adoption of online water quality monitoring, sampling and analysis, enhanced security monitoring, consumer complaint surveillance, public health surveillance, and a consequence management plan at local water utilities. EPA is implementing the Water Security Initiative by developing the conceptual design of a system for detection of and response to a contamination event; demonstrating the viability of such a system through pilots in five cities; and developing guidance and outreach to promote voluntary adoption of drinking water contamination warning systems. 
EPA has implemented a pilot drinking water contamination warning system with the Cincinnati Water Works in Cincinnati, Ohio, and has funded pilots in San Francisco, New York City, Philadelphia, and Dallas, all of which are underway. FY 2009 Costs (thousands) EPA provides guidance and best practice standards to facilitate the implementation of a contamination warning system in a drinking water utility, and local water utilities operate the system and respond to threats. System is a pilot. Chemical, biological, and radiological agents which could be present in drinking water Drinking water utilities have been recognized as being potentially vulnerable to physical disruption, bioterrorism, chemical contamination, and cyber attack. Damage or destruction of a water network could disrupt not only the availability of safe drinking water but also the delivery of vital services that depend on these water supplies, like fire suppression. The Environmental Response Laboratory Network serves as a national network of laboratories that can be accessed as needed to support large-scale environmental responses by providing consistent analytical capabilities, capacities, and quality data in a systematic, coordinated response. FY 2009 Costs (thousands) System can be accessed as needed to support large-scale environmental responses Chemical, biological, and radiological agents present in air or water that can cause diseases resulting from a large-scale environmental disaster A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. 122 Cities Mortality Reporting System As part of its national influenza surveillance effort, CDC receives weekly mortality reports from 122 cities and metropolitan areas in the United States within 2-3 weeks from the date of death. 
These reports summarize the total number of deaths occurring in these cities/areas each week due to pneumonia and influenza. This system provides CDC with the preliminary information with which to evaluate the impact of influenza on mortality in the United States and the severity of the currently circulating virus strains. FY 2009 Costs (thousands) 122 cities and metropolitan areas contribute data to the system Influenza viruses are found in humans and many different animals, including ducks, chickens, pigs, whales, horses, and seals. Seasonal flu is a contagious respiratory illness caused by flu viruses. It can cause mild to severe illness, and at times can lead to death. Pneumonia is an infection of the lungs that is usually caused by bacteria or viruses. Globally, pneumonia causes more deaths than any other infectious disease, including AIDS, malaria, and tuberculosis. However, it can often be prevented with vaccines and can usually be treated with antibiotics or antiviral drugs. Arboviral Surveillance System (ArboNet) ArboNet is an internet-based national arboviral surveillance system developed by state health departments and CDC in 2000. ArboNet collects reports of arboviral diseases and other data from all states and three local districts (New York City, Washington, D.C., and Puerto Rico). Data are reported by local health departments weekly for routine analysis and dissemination. These data are summarized periodically in the Morbidity and Mortality Weekly Report and yearly in the MMWR Summary of Notifiable Diseases. FY 2009 Costs (thousands) Arboviruses, such as West Nile Virus, encephalitis, and yellow fever viruses West Nile virus is a mosquito-borne viral disease that is transmitted to humans through infected mosquitoes. Many people infected with the virus do not become ill or show symptoms. Symptoms that do appear may be limited to headache, sore throat, backache, or fatigue. 
There is no vaccine for the West Nile virus, and no specific treatment besides supportive therapies. The disease occurs in Africa, Eastern Europe, West Asia, and the Middle East. This disease appeared for the first time in the United States in 1999. Yellow fever is a mosquito-borne viral disease that occurs in tropical and subtropical areas. The yellow fever virus is transmitted to humans through a specific mosquito. Symptoms include fever, muscle pain, headache, loss of appetite, and nausea. There is no treatment for yellow fever beyond supportive therapies. A vaccine for yellow fever is available. BioSense is a national program intended to improve capabilities for rapid disease detection, monitoring, and real-time situation awareness through access to specific health care data from participating organizations, including more than 500 acute-care hospitals, commercial laboratories, as well as Department of Defense and Veterans Affairs health care facilities. BioSense enables local and state public health departments to share and access data, providing a more complete picture of potential and actual health events both locally and across jurisdictional boundaries. Data received into the system are available simultaneously to state and local health departments, participating hospitals, and CDC through a Web-based application. BioSense securely processes, analyzes, and visualizes data to help characterize and monitor outbreaks and enable appropriate and timely public health interventions. Based on user feedback, BioSense is undergoing revision to deemphasize collection of detailed clinical data from hospitals at CDC and emphasize greater dependence on collection of data from existing automated surveillance systems operated by state and local health departments. 
The BioSense program also funds applied and developmental projects including Regional Surveillance collaboratives, state-based health information exchanges, Centers of Excellence in Public Health Informatics, and BioSense evaluations. FY 2009 Costs (thousands) BioSense is operational across the current participating health care facilities, health systems, and health department surveillance systems. A contract solicitation is underway to support BioSense redesign to enhance population coverage and stakeholder engagement. All-hazards (with a focus on infectious diseases) affecting human health The BioSense program monitors 90 concepts (syndromes and sub-syndromes) that encompass infections, injuries, chronic diseases, exposures, miscellaneous conditions, and specified codes, and free-text search terms corresponding to these concepts. In addition to these health outcome data, patient demographics (age group, sex), date of diagnosis, and geographic location information is reported. Human Health, Animal, Plant, Food, Air, Water as it relates to human health The mission of the Biosurveillance Coordination is to coordinate the development and support the implementation of an integrated, national biosurveillance plan for human health. The plan, a requirement outlined in Homeland Security Presidential Directive 21, includes the capacity to generate timely, comprehensive, and accessible information with proper context for public health decision making. Biosurveillance Coordination has developed the National Biosurveillance Strategy for Human Health and a companion document titled Concept Plan for Implementation of the National Biosurveillance Strategy for Human Health. Biosurveillance Coordination has begun to compile an inventory of biosurveillance systems, tools, collaboratives, programs, and registries within CDC. 
More information about these activities and final documents can be found at http://www.cdc.gov/osels/ph_surveillance/bc.html FY 2009 Costs (thousands) Biosurveillance Indications and Warning Analytic Community (BIWAC) Human Health, Animal, Plant, Food, Air, Water The mission of the Biosurveillance Indications and Warning Analytic Community (BIWAC) is to provide a secure, interagency forum for timely collaborative exchange of critical information regarding indications and warning (I&W) of biological events that may threaten U.S. national interests. The BIWAC will conduct the collaborative exchange of critical biosurveillance information through an encrypted information-sharing portal called “Wildfire,” and also through meetings and teleconferences. FY 2009 Costs (thousands) BIWAC partners contribute data to the system and share information via an online portal $801 (with additional in-kind support from partners) Diseases of concern to BIWAC members, including foreign animal and plant diseases and pathogens of national significance (priority 1 and 2), and zoonotic diseases, particularly those with pandemic potential. Border Infectious Disease Surveillance Project The Border Infectious Disease Surveillance Project serves as a binational early warning and active syndromic illness and disease monitoring network operating in the United States (U.S.)-Mexico Border Region and targets approximately 12 million people. The project conducts surveillance among residents of border states who visit participating clinics and hospitals. Using Web-based data entry, the project provides timely data sharing through data system and Epi-X notifications, dependent on state health department and Mexican national policies. 
FY 2009 Costs (thousands) The Web-based system has been operational since 2006 Infectious diseases affecting humans of mutual interest to the United States and Mexico, including syndromes compatible with bioterrorism agents The Border Infectious Disease Surveillance Project conducts surveillance for viral hepatitis (A, B, C, D, E); fever and rash syndromes (measles, rubella, dengue, flea-borne typhus, tick-borne ehrlichiosis); fever and neurologic illness/West Nile virus; influenza; undifferentiated fever/dengue/rickettsial disease; severe acute vesicular rash/varicella; community-acquired pneumonia/coccidioidomycosis; animal rabies; brucellosis; and foodborne infections such as Salmonella and E. coli O157:H7. The Early Aberration Reporting System provides a free-to-end-user analysis tool that allows state and local public health officials, as well as disaster and response agencies and organizations, to quickly detect syndromes that might indicate a public health emergency and to monitor progression and control. The Early Aberration Reporting System allows users to enter anything that can be counted into the software, which then detects trends indicating something out of the ordinary. The tool may be downloaded from CDC’s Web site. Unless a user initiates a submission, there is no link alerting CDC to investigate a potential public health emergency, and the user is responsible for initiating investigation and incident response. According to CDC, a new version of the system is scheduled for release in summer 2010, and a version geared for local disaster management and monitoring organizations, both domestic and foreign, will be developed and fielded prior to 2011. FY 2009 Costs (thousands) Users contribute and may analyze only their own data. 
CDC does not currently receive data from the system, and the data from end users vary according to state regulations and the data-sharing agreements set up with data reporters A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause hundreds of thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. Being able to monitor extent and impact is valuable for response coordinators. Early Warning Infectious Disease Surveillance The Early Warning Infectious Disease Program is a collaboration of state, federal and international partners who are working to provide rapid and effective laboratory confirmation of urgent infectious disease case reports in the border regions of the United States, Canada, and Mexico. Activities include assessing surveillance and laboratory capacity on each side of the international border, improving electronic sharing of laboratory information, maintaining a database of all sentinel/clinical labs, and working to develop and agree on a list of notifiable conditions. The program was established in 2003 in order to enhance coordination between the United States, Canada, and Mexico to provide early warning and cross-border capability in the event of a disease outbreak. FY 2009 Costs (thousands) The program was established in 2003 A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. The Electronic Disease Notification System is an electronic system used to notify state and local public health jurisdictions about diseases and disease outbreaks occurring among refugees and immigrants entering the United States. 
The system has a module to track tuberculosis and other quarantinable diseases in refugees and immigrants. CDC uses the system to electronically notify health departments of arriving refugees and immigrants with Class A and Class B quarantinable conditions, provide an electronic communication system for health departments to notify each other of persons with tuberculosis conditions who change jurisdictions, and provide health departments with an electronic system to record and evaluate the outcome of domestic follow-up examinations. FY 2009 Costs (thousands) Panel Physicians using required Department of State medical examination forms Quarantinable diseases, as defined by Executive Order 13295, include cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral hemorrhagic fevers (such as Marburg, Ebola, and Crimean-Congo), SARS (severe acute respiratory syndrome), and influenza caused by novel or re-emergent influenza viruses that are causing or have the potential to cause a pandemic. Tuberculosis is a bacterial disease that is usually transmitted by contact with an infected person. People with healthy immune systems can become infected but not ill. Symptoms include a bad cough, coughing up blood, pain in the chest, fatigue, weight loss, fever, and chills. Several drugs can be used to treat tuberculosis, but the disease is becoming increasingly drug resistant. The Emerging Infections Program is a network of CDC and 10 state health departments working with collaborators, including academic institutions and other federal agencies. The network conducts active population-based surveillance and research for emerging infectious diseases of public health importance. 
Examples of programs included in the Emerging Infections Program network include Active Bacterial Core Surveillance (a program conducting laboratory-based surveillance for bacterial pathogens), FoodNet (a program to monitor the incidence of foodborne and waterborne diseases), and Influenza Projects (a program that tracks trends and characterizes outbreaks of severe influenza). FY 2009 Costs (thousands) Epidemic Information Exchange, Epi-X Epi-X connects state and local public health officials so that they can share information about outbreaks and other acute health events, including those possibly related to bioterrorism. It is intended to provide epidemiologists and others with a secure, Web-based platform that can be used for instant emergency notification of outbreaks and requests for CDC assistance. Epi-X provides tools for searching, tracking, and reporting on diseases. FY 2009 Costs (thousands) Epi-X has over 5,000 users who have the capability to provide data, including all state epidemiologists and local health officers from more than 150 major metropolitan cities or counties who can post data to the system. Epi-X scientific staff are available 24 hours a day, 7 days a week to post reports and notify users of urgent health events. A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. Foodborne Disease Active Surveillance Network (FoodNet) As part of CDC’s Emerging Infections Program, FoodNet provides a network for responding to new and emerging foodborne diseases of national importance, monitoring the burden of foodborne disease, and identifying the sources of specific foodborne diseases. 
It consists of active surveillance and related epidemiological studies, which help public health officials better understand the epidemiology of foodborne diseases in the United States. Participating FoodNet sites may also be employed to coordinate enhanced surveillance and epidemiologic investigation if a novel foodborne disease threat is suspected in order to more rapidly identify the source and extent of the threat. FY 2009 Costs (thousands) Public health and food safety officials in the 10 FoodNet sites Foodborne illness harms human health, and outbreaks undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. The Global Disease Detection program focuses on gathering, analyzing, and sharing global information to identify and respond to emerging health threats. The program has three mechanisms to accomplish this focus: regional centers that are placed around the world and are concerned with the detection and control of emerging infectious disease; CDC staff placed overseas to support CDC’s mission; and the Global Disease Detection program operations center, which serves as the central clearinghouse focused on early detection of international events to which CDC may be asked to respond. FY 2009 Costs (thousands) The program began operating in 2004 A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. Global Emerging Infections Sentinel Network (GeoSentinel) GeoSentinel is a Web- and provider-based sentinel network. 
It consists of travel/tropical medical clinics around the world that participate in surveillance to monitor geographic and temporal trends in morbidity among travelers and other globally mobile populations. Passive surveillance and response capabilities are also extended to a broader network of GeoSentinel Network members. FY 2009 Costs (thousands) GeoSentinel was established in 1995 and is operational A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. The Health Alert Network is a nationwide system serving as a platform for the distribution of health alerts, dissemination of prevention guidelines and other information, distance learning, national disease surveillance, and electronic laboratory reporting, as well as for CDC’s bioterrorism and related initiatives to strengthen preparedness at the local and state levels. Among other things, the Health Alert Network is to provide early warning alerts and the capability to securely transmit surveillance, laboratory, and other sensitive data. FY 2009 Costs (thousands) Not available The objective of influenza surveillance is to monitor the timing, geographic extent, and severity of influenza activity in the United States and its impact on the U.S. population over time. The system consists of nine complementary surveillance components, which include laboratory-based data describing the number and percentage of positive tests from laboratories across the country; the percentage of doctor visits for flu-like symptoms; the percentage of deaths reported to be caused by pneumonia and influenza in 122 U.S. cities; state and territorial epidemiologist reports of influenza activity; influenza-associated pediatric mortality; and reported pediatric influenza hospitalizations. 
FY 2009 Costs (thousands) Data on influenza are contributed by more than 120 laboratories, more than 2,400 sentinel health care providers, vital statistics offices in 122 cities, research and health care personnel at the Emerging Infections Program, and influenza surveillance coordinators and state epidemiologists for 50 state health departments, New York City, and the District of Columbia Influenza viruses are found in humans and in many different animals, including ducks, chickens, pigs, whales, horses, and seals. Seasonal flu is a contagious respiratory illness caused by flu viruses. It can cause mild to severe illness, and at times can lead to death. The Laboratory Response Network is an integrated network of 165 public health and clinical laboratories that provide laboratory diagnostics and disseminated testing capability for public health preparedness and response. It ensures that all member laboratories collectively maintain current biological detection and diagnostic capabilities, as well as surge capacity for all biological and chemical agents likely to be used by terrorists. The network is based on the use of standard protocols and reagents, integrated data management, and secure communications. FY 2009 Costs (thousands) Members share data with each other A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. Morbidity and Mortality Weekly Report The Morbidity and Mortality Weekly Report is the nation’s leading public health bulletin and the flagship publication of CDC. The Morbidity and Mortality Weekly Report includes reports on disease epidemics, trends, prevention and control of illness, injuries, and deaths. 
This information is the primary means by which state and local public health officials, the media, and the public are informed of public health issues by CDC. The Morbidity and Mortality Weekly Report publishes data from the National Notifiable Diseases Surveillance System each week and in an annual Summary of Notifiable Diseases. These data are the official statistics, in tabular and graphic form, for the reported occurrence of nationally notifiable infectious diseases in the United States. FY 2009 Costs (thousands) International and U.S. public health officials and scientists submit epidemiological and surveillance data about outbreaks or other health events. Other federal agencies, such as USDA, FDA, and EPA, produce public health information for publication. The publication is also integrated with Epi-X and the National Notifiable Diseases Surveillance System, both of which contribute data for publication. The Morbidity and Mortality Weekly Report published its first issue in 1961. The publication is updated weekly and monthly. Not available The National Botulism Surveillance System compiles information on botulism cases that occur in the United States. CDC provides clinical, epidemiological, and laboratory consultation and testing services for suspected botulism cases 24 hours a day and is the only source of antitoxin in the United States. CDC alerts other federal agencies concerning botulism outbreaks associated with commercially produced and distributed food products. CDC also conducts a yearly survey of state and territorial epidemiologists and of state public health laboratory directors to identify and compile all botulism cases that occurred in the previous year. FY 2009 Costs (thousands) Federal public health officials; annual report is compiled from data provided by state health departments Botulism is a rare but serious paralytic illness caused by a nerve toxin that is produced by neurotoxin-producing Clostridia. There are four types of botulism. 
Foodborne botulism is caused by eating foods that contain the botulinum toxin. Wound botulism is caused by toxin produced from a wound infected with Clostridium botulinum. Infant botulism is caused by consuming the spores of the botulinum bacteria, which then grow in the intestines and release toxin. Finally, adult colonization is a rare form of botulism, similar to infant botulism, that results from ingestion of spores by susceptible persons and subsequent growth and toxin production in the intestines. All forms of botulism can be fatal and are considered medical emergencies. Foodborne botulism can be especially dangerous because many people can be poisoned by eating a contaminated food. National Notifiable Diseases Surveillance System CDC has responsibility for the collection and publication of data concerning nationally notifiable diseases. All 50 states, 5 territories, the District of Columbia, and New York City participate in the National Notifiable Diseases Surveillance System. The Council of State and Territorial Epidemiologists, with input from CDC, makes recommendations annually for additions and deletions to the list of nationally notifiable diseases. Reporting of nationally notifiable diseases to CDC by the states is voluntary; reporting is currently mandated (i.e., by state legislation or regulation) only at the state level. The list of diseases that are considered notifiable therefore varies slightly by state. All states generally report the internationally quarantinable diseases (i.e., cholera, plague, and yellow fever) in compliance with the World Health Organization’s International Health Regulations. FY 2009 Costs (thousands) Public health officials in 50 states, 5 territories, the District of Columbia, and New York City Most states have a list of notifiable diseases that approximates a national list of notifiable diseases maintained by the Council of State and Territorial Epidemiologists. 
This national list is reviewed and revised annually with input from CDC. States may modify their list of notifiable diseases to reflect the public health needs of their region. National Molecular Subtyping Network for Foodborne Disease Surveillance (PulseNet) PulseNet is an early warning system for outbreaks of foodborne diseases. The network has participants from public health laboratories in all 50 states, federal regulatory agencies, and some state agricultural laboratories and is coordinated by CDC. PulseNet contributes to the identification and investigation of outbreaks of foodborne and bacterial diseases through comparison of the molecular “fingerprints” of foodborne pathogens from patients and their food, water, and animal sources. Once an outbreak is detected, PulseNet identifies patients who are infected with isolates that have the outbreak DNA “fingerprint” and thus are likely to be part of the outbreak. If a foodborne pathogen is isolated from a suspected vehicle, PulseNet also links it to the outbreak if it displays the outbreak “fingerprint.” Finally, PulseNet provides leadership, expertise, training, and education in the field of foodborne and bacterial diseases. FY 2009 Costs (thousands) PulseNet participants enter data into the system using standardized equipment and methods Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For example, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. Human Health, Animals, Food, Water The National Outbreak Reporting System is a Web-based application for states to report foodborne, waterborne, and other outbreaks electronically. 
Information collected includes the number ill, dates and places of the outbreak, percent of cases by age group and gender, symptoms, incubation period and duration of illness, implicated food or water item, contributing factors leading to food or water contamination, source of implicated food or water, and food recall or other public health response. Data are used for annual summary reports of foodborne illness as well as for the monitoring of multistate outbreaks. The National Outbreak Reporting System was developed by CDC as a successor to the Electronic Foodborne Outbreak Reporting System. FY 2009 Costs (thousands) 50 state and 15 territorial reporting areas provide information on the number and characteristics of foodborne, waterborne, and other enteric disease outbreaks in their area Foodborne, waterborne, and other enteric disease outbreaks Foodborne, waterborne, and other enteric disease outbreaks have myriad causes (e.g., bacteria, viruses, parasites, toxins, and chemicals). These agents cause a range of human illnesses through toxicity (toxins or chemicals) or by infection (pathogens). National Respiratory and Enteric Virus Surveillance System The National Respiratory and Enteric Virus Surveillance System is a laboratory-based system that monitors temporal and geographic patterns associated with the detection of respiratory viruses, including respiratory syncytial virus, human parainfluenza viruses, respiratory and enteric adenoviruses, and rotaviruses. Influenza detections are also reported to the system but are integrated with CDC influenza surveillance. Users upload data to the National Respiratory and Enteric Virus Surveillance System through a Web-based or telephone dial-in system. 
FY 2009 Costs (thousands) Commercial, public health, and clinical laboratories Respiratory viruses tracked in the system are generally transmitted through direct or close contact with contaminated secretions that are spread through droplets in the air or by contact with contaminated environmental surfaces. Enteric infections tracked in the system enter the body through the mouth and intestinal tract and are usually spread through contaminated food and water or by contact with vomit or feces. OutbreakNet is a national network of epidemiologists and other public health officials, coordinated by CDC, who investigate outbreaks of foodborne, waterborne, and other enteric illnesses in the United States. OutbreakNet ensures rapid, coordinated detection of and response to multistate outbreaks of foodborne illness and promotes more comprehensive outbreak surveillance. OutbreakNet seeks to improve the collaboration and partnership among officials in local, state, and federal agencies who work with foodborne and diarrheal disease outbreak surveillance and response. OutbreakNet works in partnership with U.S. state and local health departments, USDA, FDA, and PulseNet (a national surveillance network made up of state and local public health laboratories and federal food regulatory agency laboratories). FY 2009 Costs (thousands) Foodborne, waterborne, and other enteric diseases Foodborne, waterborne, and other enteric disease outbreaks have myriad causes (e.g., bacteria, viruses, parasites, toxins, and chemicals). These agents cause a range of human illnesses through toxicity (toxins or chemicals) or by infection (pathogens). The Public Health Information Network is an effort initiated by CDC to provide interoperability across public health functions and organizations, such as state and federal agencies, local health departments, public health labs, vaccine clinics, clinical care, and first responders. 
It is intended to, among other things, (1) deliver industry-standard data to public health, (2) support bioterrorism detection, (3) provide disease tracking, analysis, and response, and (4) support local, state, and national data needs. It builds on existing CDC investments in other surveillance systems. The Public Health Information Network will not replace any of these systems but will provide an “umbrella” to support the interoperability of existing CDC surveillance, communications, and reporting systems. FY 2009 Costs (thousands) Not available CDC’s United States Quarantine Stations seek to limit the importation of infectious diseases into the United States by working with key partners to identify ill persons and potentially infectious items. Quarantine Stations enter reports summarizing port activities at 20 ports of entry and land-border crossings where international travelers arrive into the Quarantine Activity Reporting System, which transmits them in real time to CDC in Atlanta. The reports are analyzed and evaluated daily, and relevant information is captured and disseminated as part of a Disease and Activity Report, which is sent to CDC leadership and relevant external partners. During 2009, Quarantine Station staff reported 3,847 illnesses and 125 deaths, conducted 122 airline contact investigations involving 95 index cases, forwarded 9,778 migrant packets, processed 205 non-human primate shipments, released 125 drug shipments, and participated in 1,510 activities with external partners. 
FY 2009 Costs (thousands) CDC’s Quarantine Stations report data gathered from airline staff, state and local health departments, Customs and Border Protection personnel, emergency responders, and other first responders to infectious disease outbreaks A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. Unexplained Deaths and Critical Illnesses Surveillance System As part of CDC’s Emerging Infections Program, the Unexplained Deaths and Critical Illnesses Surveillance System is expected to contain limited epidemiological and clinical information on previously healthy persons aged 1 to 49 years who have illnesses with possible infectious causes. It is also expected to provide active population-based surveillance through coroners and medical examiners at limited sites. National and international surveillance will be passive for clusters of unexplained deaths and illnesses. FY 2009 Costs (thousands) A catastrophic biological event, such as a terrorist attack with a weapon of mass destruction or a naturally occurring pandemic could cause thousands of casualties or more, weaken the economy, damage public morale and confidence, and threaten national security. Electronic Laboratory Exchange Network, (eLEXNET) eLEXNET provides a Web-based system for real-time sharing of food safety laboratory data among federal, state, and local agencies. It allows public health officials at multiple government agencies engaged in food safety activities to compare and coordinate laboratory analysis findings in a secure setting. eLEXNET captures food safety sample and test result data from participating laboratories and uses them for risk assessment and decision support purposes, improving the early detection of problem products. 
FY 2009 Costs (thousands) FDA’s Center for Food Safety and Applied Nutrition manages eLEXNET, which has 1,800 users, including 203 participating labs, 150 of which are FERN labs Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. FDA and the Food Safety and Inspection Service at USDA The Food Emergency Response Network is a coordinated initiative between USDA’s Food Safety and Inspection Service and FDA to develop an integrated laboratory network capable of responding to national emergencies. Laboratories participating in the network are responsible for detecting and identifying biological, chemical, and radiological agents in food. The primary objectives of the Food Emergency Response Network are to prevent attacks on the food supply through surveillance; to prepare for emergencies by strengthening lab capabilities; to respond to threats, attacks, and emergencies in the food supply; and to assist in recovery. Participating laboratories conduct investigations of terrorism-related events involving food and play a major role in method development and validation for detecting foodborne contamination. FY 2009 Costs (thousands) State food testing laboratories are the primary providers of data. Some federal, local, and county food testing laboratories also provide data Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. 
Center for Animal Health Information and Analysis Animal and Plant Health Inspection Service The Center for Animal Health Information and Analysis analyzes biosurveillance data on animal diseases through all-source intelligence and assessment of threats, identification of the hazards posed by animal diseases, analysis of risk and modeling of the spread of an animal disease, and the issuance of recommendations to target surveillance resources to animal threats. The center is organized into three teams: Global Intelligence and Forecasting focuses on improving animal health through intelligence and analysis; Risk Analysis identifies methods and approaches for estimating risks of animal disease outbreaks; and Spatial Epidemiology develops geospatial methods to collect and analyze data on farm animal locations and livestock concentration. FY 2009 Costs (thousands) Officials from the center analyze data from a variety of sources and issue alerts Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. Animal and Plant Health Inspection Service The Emergency Management Response System is used to manage investigations of outbreaks of animal diseases in the United States. This Web-based task management system was designed to automate many of the tasks that are routinely associated with disease outbreaks and animal emergencies. The Emergency Management Response System is used for routine reporting of foreign animal disease investigations, state-specific disease outbreaks or control programs, national responses, or natural disasters involving animals. 
The system also has a mapping feature, which allows for real-time identification of outbreaks, enabling quicker response by providing high-resolution maps to decision makers, government agencies, and the public. The system interfaces with state and federal diagnostic laboratories for reporting test results. FY 2009 Costs (thousands) State and federal animal health officials Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. The Emerging Veterinary Event Database Animal and Plant Health Inspection Service The Emerging Veterinary Event Database stores syndromic surveillance information regarding adverse animal health events. USDA’s Center for Animal Health Information and Analysis uses publicly available information sources to gather information on these events. The database is used to establish a baseline for animal disease and catalog reports of adverse animal health events. Analysts use this information to produce reports on emerging diseases. Access to the database is not restricted to personnel from the Center for Animal Health Information and Analysis. FY 2009 Costs (thousands) Minimal costs associated with employee time for maintenance Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. 
National Agricultural Pest Information System Animal and Plant Health Inspection Service The National Agricultural Pest Information System is a database for the collection, summarization, and standardization of information on plant pests such as insects, diseases, weeds, and nematodes. Data are gathered by each of the states and by USDA’s Plant Protection and Quarantine. Emphasis is given to surveys for exotic pests and pests that may impact export of U.S. agricultural products, as well as pests and biological control agents identified by Plant Protection and Quarantine program officials. The National Agricultural Pest Information System facilitates data management coordination for the plant pest survey data gathered on a national, regional, and/or state scale as part of the Cooperative Agricultural Pest Survey program sponsored by USDA. FY 2009 Costs (thousands) States participating in the Cooperative Agricultural Pest Survey enter data into the system. Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. Increasing globalization and international trade activities create a strong likelihood that many other exotic plant pathogens will arrive in the United States in the coming years. National Animal Health Laboratories Network Animal and Plant Health Inspection Service National Institute of Food and Agriculture The National Animal Health Laboratories Network was established as part of a strategy to coordinate and network the diagnostic testing capabilities of federal veterinary diagnostic laboratories with state and university diagnostic laboratories to improve early detection of, response to, and recovery from animal health emergencies, including bioterrorist events, newly emerging diseases, and foreign animal disease agents. The network is composed of 58 laboratories in 45 states. 
Current activities include a training program for technicians who test for certain high-risk diseases, such as foot-and-mouth disease, and surveillance for animal diseases, such as swine influenza virus and pseudorabies. FY 2009 Costs (thousands) State and university veterinary diagnostic labs (excludes funding provided to labs for testing, sample collection, or training). Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. National Animal Health Monitoring System Animal and Plant Health Inspection Service The mission of the National Animal Health Monitoring System is to deliver information and knowledge about animal health by conducting studies that generally focus on food animals, dairy, livestock, and poultry commodities. The studies are designed to gather information about industry practices, biosecurity, and the prevalence of endemic diseases. The studies are conducted about every 5 years or more, depending on budget, resources, and the needs of commodity stakeholders, and the results are published in an annual Animal Health Report. This information is used for surveillance planning to identify risk factors, so that surveillance can be targeted to key areas of concern. FY 2009 Costs (thousands) Data are provided by industries that are selected through the National Agricultural Statistics Service. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. 
National Animal Health Reporting System Animal and Plant Health Inspection Service The National Animal Health Reporting System was designed to provide data from chief state animal health officials on the presence or absence of confirmed World Organization for Animal Health reportable diseases in specific commercial livestock, poultry, and aquaculture species in the United States. Within a state, data about animal disease occurrence are gathered from as many verifiable sources as possible and consolidated into a monthly report submitted to the National Surveillance Unit, where the information is verified, summarized, and compiled into a national report. The commodities currently covered are cattle, sheep, goats, equine, swine, commercial poultry, and commercial food fish. The National Animal Health Reporting System is a joint effort of the U.S. Animal Health Association, the American Association of Veterinary Laboratory Diagnosticians, and USDA’s Animal and Plant Health Inspection Service. FY 2009 Costs (thousands) Federal cost is a portion of the overall budget of the National Surveillance Unit. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. Animal and Plant Health Inspection Service The objectives of the National Surveillance Unit include developing and improving methods for surveillance and analysis of animal health-related data; analyzing surveillance data to provide actionable information; designing and evaluating national animal health surveillance systems; and communicating surveillance information to key partners. The National Surveillance Unit is the coordinating entity for the National Animal Health Surveillance System. 
The goal of the unit is to create a comprehensive, integrated national surveillance system for animal health. The National Surveillance Unit has created an inventory of biosurveillance systems focused on animal health, which allows users to search for animal health surveillance systems by species, disease, source of data, sample type, category of system, and agency administering the system. FY 2009 Costs (thousands) NSU was founded in 2003 and is operational. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. Animal and Plant Health Inspection Service The Offshore Pest Information System is a Web-based information-sharing tool that allows users to communicate in an organized manner about offshore animal and plant health events and issues. The system is a key instrument used to meet the goals of the International Safeguarding Information Program. This program is risk-focused and designed to collect, synthesize/analyze, communicate, and utilize relevant offshore animal and plant disease or pest information. The Offshore Pest Information System is secure and enables multiple users to access, respond to, and act upon information about international events that affect animal and plant health. Weekly reports generated from events in the system’s database are distributed to the system’s users and stakeholders. FY 2009 Costs (thousands) Animal and plant emerging diseases and pests. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. 
Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. Increasing globalization and international trade activities create a strong likelihood that many other exotic plant pathogens will arrive in the United States in the coming years. Rapid Syndrome Validation Project for Animals (RSVP-A) Animal and Plant Health Inspection Service The Rapid Syndrome Validation Project for Animals is a syndromic surveillance system designed to facilitate early detection of, reporting of, and response to infectious disease outbreaks in animals. Veterinarians collect syndromic data on animals—such as neurologic dysfunction, birth defects, or unexpected death—on hand-held computers and send the data to a central database. USDA officials and other practitioners analyze the data and create alerts of a disease outbreak or summarize normal disease occurrence. FY 2009 Costs (thousands) Operational in pilot project phase. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. Integrated Pest Management Pest Information Platform for Extension and Education National Institute of Food and Agriculture The Integrated Pest Management Pest Information Platform for Extension and Education is a system to analyze threats to plant health. The system utilizes modeling technology that allows stakeholders to access online data on the location of plant threats, as well as their severity, distribution, forecasting, and state-specific control recommendations. 
The data cover all hazards and include weather patterns, observations of plant disease occurrences, and the results of sample testing contributed by the system’s users. The system is active in 41 states, 5 Canadian provinces, and Mexico. FY 2009 Costs (thousands) System has been operating since 2005. Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. Increasing globalization and international trade activities create a strong likelihood that many other exotic plant pathogens will arrive in the United States in the coming years. National Institute of Food and Agriculture The mission of the National Plant Diagnostic Network is to safeguard U.S. plant agriculture against introduced pests and pathogens by enhancing diagnostic and detection capabilities; improving communication among federal, state, and local agencies involved in monitoring for plant pests and pathogens; and delivering educational programs regarding the threats posed by their introduction. A single database captures voluntary information given to laboratories, such as grower samples, insects brought into laboratories, or citizen complaints. The network, for example, funds diagnostic labs in all 50 states and sponsors training for individuals in the plant industry (from nursery owners to home gardeners). The National Plant Diagnostic Network also maintains a national database with plant disease reports, charts, and mapping tools. FY 2009 Costs (thousands) System has been operating since 2002. Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. 
Increasing globalization and international trade activities create a strong likelihood that many other exotic plant pathogens will arrive in the United States in the coming years. Upon arrival at a U.S. port of entry, all meat and poultry shipments must be reinspected by a Food Safety and Inspection Service import inspector before they are allowed into this country. Every lot of product is given a visual inspection for appearance and condition and checked for certification and label compliance. In addition, the Automated Import Information System assigns various other types of inspection, including product examinations and microbial and chemical laboratory analysis, based on statistical and trend analysis of the product’s origin. FY 2009 Costs (thousands) Importers of meat and poultry products submit reports to the system. Biological agents that can contaminate meat and poultry products. Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. Food Safety and Inspection Service Incident Management System The Food Safety and Inspection Service’s Incident Management System is a Web-based common operating platform that, according to USDA officials, allows program managers and users to rapidly identify, respond to, and track the agency’s response to significant incidents such as suspected tampering of products, threats to facilities, natural disasters, and Class 1 recalls with illness. FY 2009 Costs (thousands) Food Safety and Inspection Service Emergency Management Committee members and personnel granted access to the system. Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. 
For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. In April 2008, the Food Safety and Inspection Service implemented the Import Alert Tracking System, an automated data system that allows field employees to record information related to ineligible, illegal, or smuggled shipments of imported meat, poultry, or egg products found in commerce. The system enables better coordination in enforcement actions through quicker access to information collected on illegal entries. The system has been designed to automate the process of incident notification between Food Safety and Inspection Service program areas and the creation of an incident report when appropriate. FY 2009 Costs (thousands) System has been operational since 2005. No direct costs; costs are included under the Food Safety and Inspection Service’s Incident Management System. Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. Laboratory Electronic Application for Results Notification The Laboratory Electronic Application for Results Notification program provides Food Safety and Inspection Service personnel, establishments, and state officials with reports on the status of meat, poultry, and egg product test samples. The application is an automated process that tracks each sample as it is received and analyzed and as results are reported. The Laboratory Electronic Application for Results Notification program allows field inspectors and agency staff to check on the status of individual samples or view circuit, district, and management summaries of results. 
Establishment and state officials will not have access to the intranet site, but they may receive e-mail reports on the status of individual samples. FY 2009 Costs (thousands) Food Safety and Inspection Service Laboratories. Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. Microbiological and Residue Computer Information System The Microbiological and Residue Computer Information System contains sample identification information and results for analyses submitted by inspection personnel to laboratories. These samples consist of meat, poultry, and egg products, and they are analyzed to ensure that they are safe, wholesome, unadulterated, and properly labeled. The samples are tested to determine whether they bear or contain residues of drugs, pesticides, or other chemicals, or microbiological pathogens. Test results are used to alert agency personnel and the industry of contaminations and threats to consumer health and the need for protective actions such as product recalls. The Microbiological and Residue Computer Information System is also used for risk assessment and decision support purposes, improving early detection of problem products, enabling active food safety surveillance, and evaluating potential threats to the food supply. FY 2009 Costs (thousands) Food Safety and Inspection Service Laboratories. Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. 
The Pathogen Reduction Enforcement Program schedules tests, tracks food samples, and generates a series of reports concerning food testing eligibility and the status of food sample testing results. It collects and stores food manufacturing establishment addresses and product information, as well as the establishment’s performance in previous food safety tests. It uses this information to schedule and request the collection of food samples for testing. These test results are used to alert agency personnel and the industry of contaminations, so an appropriate response can be issued. The Pathogen Reduction Enforcement Program is also used for risk assessment and decision support purposes, improving early detection of problem products, enabling active food safety surveillance, and evaluating potential threats to the U.S. food supply. FY 2009 Costs (thousands) Food Safety and Inspection Service Laboratories. Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. The Agriculture Quarantine Inspection Program partners with Customs and Border Protection to conduct hands-on inspection of agricultural commodities entering the United States to confirm that imports are free of pests and disease. Specifically, Customs and Border Protection officers inspect any incoming agricultural commodities, including plants, animals, food, or other miscellaneous goods—such as automobile parts where pests might hide and enter the United States—for the presence of pests. The Agriculture Quarantine Inspection Program operates Plant Inspection Stations, which process the pest interceptions made by Customs and Border Protection officers at ports and identify pests and diseases on imported goods. 
This information is also filtered into USDA’s Plant Protection and Quarantine databases. FY 2009 Costs (thousands) The system is planned to be replaced by a more user-friendly system in the next five years. Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. Increasing globalization and international trade activities create a strong likelihood that many other exotic plant pathogens will arrive in the United States in the coming years. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. Outbreaks of foodborne illness can harm human health, undermine consumer confidence in the safety of the nation’s food supply, and have serious economic consequences. For instance, the 2006 outbreak of E. coli O157:H7 linked to bagged spinach resulted in 205 confirmed illnesses, 3 deaths, and an estimated $100 million loss to industry. The mission of the Cooperative Agricultural Pest Survey program is to identify exotic plant pests in the United States deemed to be of regulatory significance to USDA, state departments of agriculture, tribal governments, and cooperators. It facilitates this mission by working to confirm the presence or absence of environmentally and/or economically harmful plant pests. These pests can impact agriculture or the environment. The Cooperative Agricultural Pest Survey program also establishes and maintains a comprehensive network of cooperators and stakeholders to facilitate its plant protection mission. 
FY 2009 Costs (thousands) States provide information on plant pests and deliver samples for testing to USDA’s Plant Protection and Quarantine for further analysis. Results are disseminated back to participating states after testing has concluded. $9,098 was allocated to support pest detection activities. Of this, approximately $8,453.50 was given to the states via cooperative agreements to conduct pest detection activities. Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. Increasing globalization and international trade activities create a strong likelihood that many other exotic plant pathogens will arrive in the United States in the coming years. Exotic Pest Information Collection and Analysis The purpose of Exotic Pest Information Collection and Analysis is to conduct plant pest biosurveillance for USDA’s Plant Protection and Quarantine. The program continuously gathers, evaluates, and communicates open-source information on quarantine-significant plant pests worldwide. The program also produces concise articles about relevant pieces of pest news, placing the news into a safeguarding context and providing important background information. The articles are distributed weekly in an e-mail notification and are archived in a Web-accessible, fully searchable database (known as the Global Pest and Disease Database). FY 2009 Costs (thousands) The Exotic Pest Information Collection and Analysis program team gathers publicly available information from the World Wide Web, including scientific journals, Web sites, listservs, and blogs. Plant pests such as arthropods, nematodes, pathogens, mollusks, and weeds. Plant resources in the United States, including crops, rangelands, and forests, are threatened by exotic plant pests. 
Globalization and international trade increase the likelihood of exotic pest introduction into the United States. According to USDA, up-to-date pest information is essential for preparedness and early response. The Global Pest and Disease Database is an archive of exotic pest information specific to Plant Protection and Quarantine needs, for uses including prioritizing pest threats to the United States, conducting risk assessments of plant pests, and completing domestic exotic pest surveys. The Exotic Pest Information Collection and Analysis program contains information on over 600 plant and animal pests not native to the United States. The Exotic Pest Information Collection and Analysis program is primarily intended for use within USDA, but DHS officials, other federal agencies, and state agricultural agencies also have access to the system. FY 2009 Costs (thousands) Other USDA biosurveillance systems, such as the Exotic Pest Information Collection and Analysis system, the Offshore Pest Information System, and the New Pest Advisory Group. Plant pests not known to occur in the United States or in limited distribution in the United States. Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. Increasing globalization and international trade activities create a strong likelihood that many other exotic plant pathogens will arrive in the United States in the coming years. National Animal Health Surveillance System Animal and Plant Health Inspection Service The National Animal Health Surveillance System is a USDA initiative to integrate existing animal health monitoring programs and surveillance activities into a national, comprehensive, and coordinated system and to develop new surveillance systems, methodology, and approaches. 
The system is an interdisciplinary network of partners working together to protect animal health and promote free trade through surveillance, control, and prevention of foreign, emerging, and endemic diseases. FY 2009 Costs (thousands) Exotic and endemic infectious diseases affecting animals and public health. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. National Wildlife Health Center Wildlife Mortality Database (EPIZOO) U.S. Geological Survey National Wildlife Health Center The USGS National Wildlife Health Center’s EPIZOO database is a data set that documents information on epidemics in wildlife. EPIZOO tracks die-offs throughout the United States and territories, primarily in migratory birds and endangered species. Data include locations, dates, species involved, history, population numbers, total sick and dead, and diagnostic information. The data are collected from a reporting network developed at the National Wildlife Health Center as well as from collaborators across the North American continent. FY 2009 Costs (thousands) Regular data are available from 1975 to the present; some data sets are available from earlier years. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. U.S. Geological Survey National Wildlife Health Center The Wildlife Health Diagnostic Database is a computerized record of specimens sent to the National Wildlife Health Center for processing and diagnostic testing. 
Data include history and recordkeeping information, types of tests run, and some initial diagnostic and testing results. Data from the system cannot be used as a representative sample of animal health diseases that exist in the wild, but they may be used to determine if a disease or animal health syndrome has occurred in the wild. FY 2009 Costs (thousands) Data have been available since 1975. Animal diseases can affect wildlife as well as livestock, pets, and companion animals. Some of these diseases may affect humans. Animal disease outbreaks can cause significant and potentially disruptive losses for animal producers, put financial strain on response systems, and affect regional and national economies. The U.S. Forest Service Forest Health Protection program is responsible for detection and monitoring of forest health conditions on all forested lands in the United States. The program annually conducts aerial surveys of nearly 500 million acres of forested lands for unusual activities of forest insects and pathogens. The Forest Health Protection Program has developed a suite of forest health indicators that are used to monitor forest health and facilitate the detection of the introduction of foreign pests. Pest risk assessments are used to target detection surveys in areas that are particularly vulnerable to invasion and establishment of invasive pests. Annual reports at the state, regional, and national levels assess trends in forest condition and highlight new or expanding outbreaks of forest pests. FY 2009 Costs (thousands) System is operational. Plant resources in the United States, including crops, rangelands, and forests, are vulnerable to endemic, introduced, and emerging pathogens. More than 50,000 plant diseases occur in the United States, caused by a variety of pathogens. Increasing globalization and international trade activities create a strong likelihood that many other exotic plant pathogens will arrive in the United States in the coming years. 
The Biohazard Detection System deployed by the USPS is a decentralized, locally networked, automated collection and identification system used to detect the biological agent that causes anthrax, which could be present in first-class mail. The system is installed in mail processing facilities nationwide. The system is integrated with mail processing; letters are automatically fed into the system, which detects the presence of anthrax. The detection process runs continuously and alerts system operators if a presumptive positive case of anthrax is detected. FY 2009 Costs (thousands) Annual Operation and Maintenance Costs: $73.5M. Anthrax is an acute infectious disease caused by a bacterium commonly found in the soil. Although anthrax can infect humans, it occurs most commonly in plant-eating animals. Human anthrax infections have usually resulted from occupational exposure to infected animals or from contaminated animal products. Anthrax infection can take one of three forms: cutaneous, usually through a cut or abrasion; gastrointestinal, usually by ingesting undercooked contaminated meat; or inhalation, by breathing airborne anthrax spores into the lungs. The symptoms are different for each form and usually occur within 7 days of exposure. Anthrax can be treated with antibiotics, and a vaccine is available. In 2001, U.S. Postal Service employees and customers contracted anthrax after a domestic bioterrorism incident that spread anthrax spores through the U.S. mail and resulted in five deaths. In addition to the contact named above, Anne Laffoon, Assistant Director; Michelle Cooper; Kathryn Godfrey; Amanda Krause; Steven Banovac; and Susanna Kuebler made significant contributions to the work. Keira Dembowski, Jessica Gerrard-Gough, and Patrick Peterson also provided support. Tina Cheng assisted with graphic design. Amanda Miller and Russ Burnett assisted with design, methodology, and analysis. Tracey King provided legal support. 
Linda Miller provided communications expertise. National Security: Key Challenges and Solutions to Strengthen Interagency Collaboration. GAO-10-822T. Washington, D.C.: June 9, 2010. Biosurveillance: Developing a Collaboration Strategy Is Essential to Fostering Interagency Data and Resource Sharing. GAO-10-171. Washington, D.C.: December 18, 2009. Influenza Pandemic: Monitoring and Assessing the Status of the National Pandemic Implementation Plan Needs Improvement. GAO-10-73. Washington, D.C.: November 24, 2009. Interagency Collaboration: Key Issues for Congressional Oversight of National Security Strategies, Organizations, Workforce, and Information Sharing. GAO-09-904SP. Washington, D.C.: September 25, 2009. Food Safety: Agencies Need to Address Gaps in Enforcement and Collaboration to Enhance Safety of Imported Food. GAO-09-873. Washington, D.C.: September 15, 2009. Influenza Pandemic: Gaps in Pandemic Planning and Preparedness Need to Be Addressed. GAO-09-909T. Washington, D.C.: July 29, 2009. Veterinarian Workforce: The Federal Government Lacks a Comprehensive Understanding of Its Capacity to Protect Animal and Public Health. GAO-09-424T. Washington, D.C.: February 26, 2009. Seafood Fraud: FDA Program Changes and Better Collaboration among Key Federal Agencies Could Improve Detection and Prevention. GAO-09-258. Washington, D.C.: February 19, 2009. Veterinarian Workforce: Actions Are Needed to Ensure Sufficient Capacity for Protecting Public and Animal Health. GAO-09-178. Washington, D.C.: February 4, 2009. Health Information Technology: More Detailed Plans Needed For the Centers for Disease Control and Prevention’s Redesigned BioSense Program. GAO-09-100. Washington, D.C.: November 20, 2008. Influenza Pandemic: HHS Needs To Continue Its Actions And Finalize Guidance for Pharmaceutical Interventions. GAO-08-671. Washington, D.C.: September 30, 2008. Food Safety: Improvements Needed in FDA Oversight of Fresh Produce. GAO-08-1047. Washington, D.C.: September 26, 2008. 
United States Postal Service: Information on the Irradiation of Federal Mail in the Washington, D.C., Area. GAO-08-938R. Washington, D.C.: July 31, 2008.
Biosurveillance: Preliminary Observations on Department of Homeland Security’s Biosurveillance Initiatives. GAO-08-960T. Washington, D.C.: July 16, 2008.
Homeland Security: First Responders’ Ability to Detect and Model Hazardous Releases in Urban Areas Is Significantly Limited. GAO-08-180. Washington, D.C.: June 27, 2008.
Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008.
Federal Oversight of Food Safety: FDA Has Provided Few Details on the Resources and Strategies Needed to Implement Its Food Protection Plan. GAO-08-909T. Washington, D.C.: June 12, 2008.
Food Safety: Selected Countries’ Systems Can Offer Insights into Ensuring Import Safety and Responding to Foodborne Illness. GAO-08-794. Washington, D.C.: June 10, 2008.
Federal Oversight of Food Safety: FDA’s Protection Plan Proposes Positive First Steps, but Capacity to Carry Them Out Is Critical. GAO-08-435T. Washington, D.C.: January 29, 2008.
Project Bioshield: Actions Needed to Avoid Repeating Past Problems with Procuring New Anthrax Vaccine and Managing the Stockpile of Licensed Vaccine. GAO-08-88. Washington, D.C.: October 23, 2007.
Global Health: U.S. Agencies Support Programs to Build Overseas Capacity for Infectious Disease Surveillance. GAO-08-138T. Washington, D.C.: October 4, 2007.
Global Health: U.S. Agencies Support Programs to Build Overseas Capacity for Infectious Disease Surveillance. GAO-07-1186. Washington, D.C.: September 28, 2007.
Anthrax: Federal Agencies Have Taken Some Steps to Validate Sampling Methods and to Develop a Next-Generation Anthrax Vaccine. GAO-06-756T. Washington, D.C.: May 9, 2006.
Agriculture Production: USDA Needs to Build on 2005 Experience to Minimize the Effects of Asian Soybean Rust in the Future. GAO-06-337. Washington, D.C.: February 24, 2006.
USPS: Guidance on Suspicious Mail Needs Further Refinement. GAO-05-716. Washington, D.C.: July 19, 2005.
Information Technology: Federal Agencies Face Challenges in Implementing Initiatives to Improve Public Health Infrastructure. GAO-05-308. Washington, D.C.: June 10, 2005.
Homeland Security: Much Is Being Done to Protect Agriculture from a Terrorist Attack, but Important Challenges Remain. GAO-05-214. Washington, D.C.: March 8, 2005.
Drinking Water: Experts’ Views on How Federal Funding Can Best Be Spent To Improve Security. GAO-04-1098T. Washington, D.C.: September 30, 2004.
Emerging Infectious Diseases: Review of State and Federal Disease Surveillance Efforts. GAO-04-877. Washington, D.C.: September 30, 2004.
Federal Food Safety and Security: Fundamental Restructuring Is Needed to Address Fragmentation and Overlap. GAO-04-588T. Washington, D.C.: March 30, 2004.
Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: February 3, 2004.
Drinking Water: Experts’ Views on How Future Funding Can Be Best Spent To Improve Security. GAO-04-29. Washington, D.C.: October 31, 2003.
Infectious Diseases: Gaps Remain in Surveillance Capabilities of State and Local Agencies. GAO-03-1176T. Washington, D.C.: September 24, 2003.
Bioterrorism: Information Technology Strategy Could Strengthen Federal Agencies’ Abilities to Respond to Public Health Emergencies. GAO-03-139. Washington, D.C.: May 30, 2003.
Combating Terrorism: Selected Challenges and Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Food Safety: CDC Is Working to Address Limitations in Several of Its Foodborne Disease Surveillance Systems. GAO-01-973. Washington, D.C.: September 7, 2001.
Global Health: Challenges in Improving Infectious Disease Surveillance Systems.
GAO-01-722. Washington, D.C.: August 31, 2001.
West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000.
Global Health: Framework for Infectious Disease Surveillance. NSIAD-00-205R. Washington, D.C.: July 20, 2000.
The U.S. government has a history of employing health surveillance to help limit illness, loss of life, and the economic impact of disease. Recent legislation and presidential directives have called for a robust and integrated biosurveillance capability; that is, the ability to provide early detection and situational awareness of potentially catastrophic biological events. The Implementing Recommendations of the 9/11 Commission Act directed GAO to report on the state of biosurveillance and resource use in federal, state, local, and tribal governments. This report is one in a series responding to that mandate. This report addresses (1) federal efforts that support a national biosurveillance capability and (2) the extent to which mechanisms are in place to guide the development of a national biosurveillance capability. To conduct this work, GAO reviewed federal biosurveillance programs, plans, and strategies and interviewed agency officials from components of 12 federal departments with biosurveillance responsibilities. Federal agencies with biosurveillance responsibilities--including the Departments of Health and Human Services, Homeland Security, and Agriculture--have taken or plan to take actions to develop the skilled personnel, training, equipment, and systems that could support a national biosurveillance capability. GAO previously reported that as the threats to national security have evolved over the past decades, so have the skills needed to prepare for and respond to those threats. Centers for Disease Control and Prevention (CDC) officials stated that shortages of skilled personnel threaten the capacity to detect potentially catastrophic biological events as they emerge in humans, animals, or plants. To address this issue, some federal agencies are planning or have taken actions to attract and maintain expertise using fellowships, incentives, and cooperative agreements.
Moreover, CDC has called for the development of a national training and education framework to articulate professional roles and competencies necessary for biosurveillance. The Department of Agriculture has also developed training programs to help ensure that diseases and pests that could harm plants or animals can be identified. In addition, federal agencies have taken various actions designed to promote timely detection and situational awareness by developing (1) information sharing and analysis mechanisms, (2) laboratory networks to enhance diagnostic capacity, and (3) equipment and technologies to enhance early detection and situational awareness. While national biodefense strategies have been developed to address biological threats such as pandemic influenza, there is neither a comprehensive national strategy nor a focal point with the authority and resources to guide the effort to develop a national biosurveillance capability. For example, the National Security Council issued the National Strategy for Countering Biological Threats in November 2009. While this strategy calls for the development of a national strategy for situational awareness, it does not meet the need for a biosurveillance strategy. In addition, this strategy includes objectives that would be supported by a robust and integrated biosurveillance capability, such as obtaining timely and accurate insight on current and emerging risks, but it does not provide a framework to help identify and prioritize investments in a national biosurveillance capability. GAO previously reported that complex interagency efforts, such as developing a robust, integrated, national biosurveillance capability, could benefit from an effective national strategy and a focal point with sufficient time, responsibility, authority, and resources to lead the effort. 
Efforts to develop a national biosurveillance capability could benefit from a national biosurveillance strategy that guides federal agencies and other stakeholders to systematically identify risks, resources needed to address those risks, and investment priorities. Further, because the mission responsibilities and resources needed to develop a biosurveillance capability are dispersed across a number of federal agencies, efforts to develop a biosurveillance system could benefit from a focal point that provides leadership for the interagency community. GAO recommends that the Homeland Security Council direct the National Security Staff to identify, in consultation with relevant federal agencies, a focal point to lead the development of a national biosurveillance strategy to guide the capability's development. GAO provided a copy of this draft to the 12 federal departments and the National Security Staff.
Authorized under section 7(a) of the Small Business Act (15 U.S.C. § 636(a)), the SBA 7(a) program was established to serve small business borrowers that cannot otherwise obtain private sector financing under suitable terms and conditions. The SBA 7(a) program is SBA’s primary vehicle for providing small businesses with access to credit, whereby SBA provides partial guarantees of loans made by SBA-approved private sector lenders. One requirement to obtain a 7(a) loan guarantee, which is backed by the full faith and credit of the U.S. government, is that a lender must document that the prospective borrower was unable to obtain financing under reasonable terms and conditions through normal business channels. SBA authorized secondary markets in 7(a) loans to help lenders manage their funding needs for these loans. The SBA guarantee encourages lenders to make small business loans by transferring most of an approved loan’s credit risk from the loan originator to SBA. The SBA guarantee eliminates credit risk not only for the lenders on the guaranteed portion of 7(a) loans but also for the investor in 7(a) pool certificates. In addition to the full faith and credit of the U.S. government, the 7(a) pool certificates also carry SBA’s timely payment guarantee, which ensures that investors will be paid on scheduled dates when collections from borrowers are not timely. Our 1998 report on SBA’s oversight of preferred lenders indicated that SBA lacked sufficient lender oversight to limit credit risk to the agency, including risk from lender concentration. In commenting on a draft of this report, SBA stated that it has taken significant steps to improve the oversight of participating lenders, including SBLCs. We have not evaluated these initiatives, but they appear to be the type of actions that could mitigate credit risk to the agency resulting from lender concentration.
SBA 7(a) loans are heterogeneous in that they differ in many respects, such as interest rates; repayment schedules; maturity; loan collateral type, quality, and marketability; and type of business to which the loans are made. SBA 7(a) loans are made to a diverse range of small businesses with widely differing financial profiles and credit needs, such as restaurants, consumer services, professional services, and retail outlets. The dollar volume of 7(a) loans that SBA can guarantee each year is based on congressional appropriations that subsidize the 7(a) guarantee program. For the fiscal year that ended September 30, 1997, SBA approved nearly $9.5 billion in loans--the highest amount to date, and an increase of over 20 percent from the previous fiscal year. As of December 31, 1997, there was $21.5 billion in outstanding 7(a) loans. According to SBA, about 8,000 lenders are authorized to participate in the 7(a) loan program. They range from institutions that make a few 7(a) loans annually to more active institutions that originate hundreds of 7(a) loans annually. Most are insured depository institutions, such as banks and savings and loan associations. Nondepository lenders include Business and Industrial Development Companies, chartered under state statutes; insurance companies; and SBLCs licensed and regulated by SBA. At the end of 1997, SBLCs accounted for about 19 percent of outstanding 7(a) loans. SBA has established three classifications of lenders within the 7(a) program--regular, certified, and preferred--each having different levels of authority in processing loans. SBA completely analyzes regular lenders’ loans and decides on their guarantee. The agency authorizes certified lenders to perform their own credit analyses and preferred lenders to make eligibility and creditworthiness determinations as well as approve their own loans without SBA review. SBA 7(a) loans differ from other small business loans in some respects. 
Our 1996 report indicated that 7(a) loans tend to be larger, have longer maturities, and have higher interest rates than small business loans in general. Typically, loans with features such as longer terms and no prepayment penalties warrant higher interest rates. SBA figures showed that the average maturity of 7(a) loans sold in the secondary market in 1997 was three times longer than that of conventional commercial and industrial loans under $1 million. Also, average interest rates for SBA loans were 67 basis points higher for fixed-rate loans, and 178 basis points higher for variable rate loans, than for the respective categories of conventional commercial and industrial loans under $1 million. In the primary market, single-family residential mortgages differ from 7(a) loans in a number of dimensions that directly affect their respective secondary markets. A majority of residential mortgages have fixed interest rates, and those with adjustable rates have interest rate caps that limit interest rate risk to borrowers. In contrast, 7(a) loans consist primarily of variable rate loans without interest rate caps. As a result, lenders face more interest rate risk on residential mortgages than on 7(a) loans. Residential mortgage loans are also more homogeneous than 7(a) loans because their terms are standardized and the collateral, residential property, is the same. To provide perspective on how the 7(a) markets compare with other secondary markets, we compared the two secondary markets in 7(a) loan portions to the secondary markets for single-family residential mortgages, as follows: The secondary market for single-family residential mortgages that carry federal mortgage insurance provided by the Federal Housing Administration (FHA), a government corporation within the Department of Housing and Urban Development (HUD). These mortgages are fully insured in the event of borrower default.
Lenders who originate FHA-insured mortgages can pool them and issue MBS guaranteed by the Government National Mortgage Association (Ginnie Mae), another government corporation within HUD, which, for a fee, guarantees timely payment of principal and interest to investors. We compared this secondary market to the guaranteed 7(a) secondary market. The secondary market in conventional, single-family residential mortgages that have loan amounts or other characteristics that preclude purchase by Fannie Mae or Freddie Mac. These are called nonconforming mortgage loans. In this secondary market, state-chartered private corporations--referred to as private-label conduits--pool mortgages they purchase and issue MBS. We compared features of this market to the unguaranteed 7(a) loan secondary market. The benefit to individual lenders of selling loans in a secondary market depends in part on demand for that lender’s loans and on the availability and costs of the lender’s alternative funding sources. Other considerations include whether holding loans on the balance sheet or selling them in the secondary market brings higher returns on invested capital and/or lowers the lender’s risks. To meet our report objectives, we reviewed SBA’s standard operating procedures for the 7(a) program and other SBA documents addressing the role of the secondary markets for 7(a) loans. We also reviewed research conducted by SBA, HUD, other federal agencies, and others on the workings of the 7(a) markets; secondary markets for conventional small business loans; and residential mortgage loan secondary markets. We analyzed data on the 7(a) program from SBA as well as publicly available information on the residential mortgage market.
We also talked to SBA officials and officials of its fiscal and transfer agent, Colson Services Corp.; Ginnie Mae; HUD; the National Association of Government Guaranteed Lenders; the Bond Market Association; the American Bankers Association; the Independent Bankers Association; participating 7(a) lenders and poolers; other participants in the small business loan markets; the Office of Federal Housing Enterprise Oversight (OFHEO); the Board of Governors of the Federal Reserve System; the Office of the Comptroller of the Currency; and the Securities and Exchange Commission (SEC). Lenders that originate conventional mortgages at or below a specified dollar level can sell them to the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac), two private corporations with federal charters. These two corporations are called government-sponsored enterprises (the enterprises), and with a majority of their mortgage purchases, they pool the mortgages and issue MBS backed by these loans. The enterprises normally provide corporate guarantees on MBS they issue, which eliminate credit risk for MBS investors. This secondary market has features that we compare to both the guaranteed and unguaranteed 7(a) secondary markets in this report. The secondary market in residential mortgages is divided into three broad parts. The first part is based on federally insured/guaranteed mortgages provided by FHA or the Department of Veterans Affairs (VA). In this market, residential mortgage loans are pooled to create tradable financial claims in the form of securities, with pool guarantees from Ginnie Mae. A majority of mortgages backing Ginnie Mae MBS are FHA-insured mortgages. We deemed this secondary market analogous to the guaranteed 7(a) market for purposes of this report. A second part of the secondary market is for mortgages that conform to underwriting standards created by the enterprises.
Although the government does not guarantee these mortgages, the private sector perceives the federal connections of the enterprises as providing an implicit guarantee and takes into account their pool guarantees. The last part of the secondary residential mortgage markets includes nonconforming mortgages that are fully private and pooled without implicit or explicit government guarantees. We deemed this secondary market analogous to the unguaranteed 7(a) secondary market for this report’s purposes. In our review of disclosures made to investors in guaranteed 7(a) pool certificates, we relied on SBA regulations and information obtained in discussions with SBA and Colson officials. For unguaranteed pool securities, we reviewed offering statements from an issuer of SEC-registered, publicly offered 7(a) pool securities, but we did not obtain offering statements or materials for 7(a) pool certificates or 7(a) pool securities that were not SEC registered. We used information from financial industry publications, Ginnie Mae disclosure forms, and offering statements to determine the financial disclosure information provided on the various forms of MBS. We conducted our work in Washington, D.C., between September 1997 and December 1998 in accordance with generally accepted government auditing standards. We provided copies of a draft of this report to SBA for review and comment. SBA’s Associate Deputy Administrator for Capital Access provided written comments on the draft report, which are summarized on page 34 and reprinted in appendix IV. SBA and Ginnie Mae provided technical comments on the draft report, which have been incorporated where appropriate. Secondary loan markets, which are resale markets for loans originated in primary markets, link borrowers and lenders in local markets to national capital markets, lower costs for funds, and help lenders manage risks. This linkage provides an additional source of funds for lenders that can increase their liquidity.
Borrowers can benefit from the ensuing increase in funds availability and from lower interest rates that result from lender competition. Investors in secondary loan markets can benefit by holding more liquid financial instruments than they would have from investing directly in individual loans. The major risks in secondary loan market transactions--as well as in primary market lending--are credit, prepayment, and interest rate risks. Investors, guarantors, and lending institutions that securitize loan pools can suffer losses or incur costs as a result of one or more of these risks in the secondary markets. The levels and types of risk, as well as the parties that incur them, can also differ. These variables are important determinants of the share of loans in a particular primary market that are sold in secondary markets. Factors bearing on individual lenders’ decisions to sell their loans in a secondary market include loan demand, the availability and costs of alternative funding sources, and the relative risks or returns from selling loans on the secondary market compared to holding them. The share of loans in a primary market that are sold in a secondary market depends on the benefits generated by the secondary market. Secondary loan markets can generate a number of benefits for lenders and borrowers. The secondary loan markets provide an alternative funding source in addition to deposits and other funding sources, such as lines of credit and debt issuance proceeds. Selling their loans on the secondary markets provides lenders more flexibility in managing their liquidity needs. They can generate funds for additional lending, earn fee income by servicing the sold loans, or avoid tying up capital. The resulting liquidity can reduce regional imbalances or cyclical swings in loanable funds. Borrowers can benefit from increased credit availability, and competition among lenders can provide borrowers with lower interest rates.
By investing in pools of loans, investors can diversify their risks among a number of loans rather than having them concentrated in one loan. Investors can sell their interests on active secondary markets to other willing investors. The primary risks that can affect the cash flows generated by loan pools in secondary loan markets are credit, prepayment, and interest rate risks. A variety of factors affect the levels of these risks in secondary loan market transactions as well as the parties that incur them, as illustrated by the following general observations: Credit risk levels depend upon characteristics of the pooled loans that back a security, such as borrowers’ creditworthiness, the collateral securing the loans, the type of business financed, and the pool’s geographic diversity. The federal government--and therefore U.S. taxpayers--bears the credit risk on securities backed by pooled loans with federal guarantees. The investor and the lender share credit risk on securities with credit enhancements provided by the lender. The level of prepayment risk in a security depends, in part, on whether prepayment penalties are included on the pooled loans backing a security. The nature of the interest rates on loans--fixed or variable--affects the level of interest rate risk. Since the inherent credit, prepayment, and interest rate risks are important to investors, the ability to estimate returns and risks of securitized loan pools is also important. Reliable estimates of returns and risks are more likely when historical data on the performance of similar loans under varying economic conditions are available and when loan pools are homogeneous. When historical data include the loss experience of many comparable loans under a wide variety of economic conditions, investors and analysts can calculate loss probability distributions that predict the likely losses for a pool of similar and homogeneous loans.
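As a rough illustration of the kind of estimate such historical loss data support, the expected loss on a homogeneous pool can be sketched in Python. The pool balance, default rate, and loss severity below are hypothetical figures chosen for illustration, not SBA data:

```python
# Hypothetical sketch: expected one-year loss for a pool of similar,
# homogeneous loans, given a historical default rate and loss severity.
# All figures are invented for illustration; they are not SBA data.

def expected_pool_loss(pool_balance, annual_default_rate, loss_given_default):
    """Expected one-year dollar loss for a homogeneous loan pool."""
    return pool_balance * annual_default_rate * loss_given_default

# Suppose history shows 4 percent of similar loans default in a year and
# 40 percent of a defaulted balance is ultimately lost after recoveries.
loss = expected_pool_loss(pool_balance=10_000_000,
                          annual_default_rate=0.04,
                          loss_given_default=0.40)
print(f"Expected annual loss: ${loss:,.0f}")  # Expected annual loss: $160,000
```

The more heterogeneous the pooled loans, the less reliable a single default rate and loss severity become, which is one reason investors demand higher yields on pools of dissimilar loans such as 7(a) loans.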
However, the less alike the loans are, the more difficult it is to estimate cash flows or the likelihood of losses. Although more precise cash flow estimates improve investors’ ability to estimate or measure risk, they may also lower returns because investors want to be compensated according to the degree of risk they undertake. Therefore, when only less precise estimates can be made, both risks and returns to investors are generally higher. Monetary benefits to lenders from participating in the secondary market lessen when they must pay investors high returns. As discussed earlier, secondary loan markets give lenders a funding alternative to deposits and other funding sources, such as lines of credit and the proceeds from debt issuances. The benefit to an individual lender of selling loans in a secondary market depends in part on the comparative costs of its available funding sources. For example, a lender that has access to adequate funding to meet the demand for loans, consistent with its business plan, may lack funding-related incentives to participate in secondary loan markets. The benefit of secondary markets to individual lenders also depends on whether holding loans on the balance sheet or selling them in the secondary market can best increase returns on invested capital and/or lower risks for the lender. For example, a financial institution holding long-term fixed rate loans financed by variable rate liabilities is subject to interest rate risk, which the institution could reduce by selling these loans on the secondary market. In linking 7(a) borrowers and lenders from local markets to the national capital markets, the 7(a) secondary markets--particularly the market for guaranteed portions--benefit lenders, borrowers, and investors. The 7(a) secondary markets can help borrowers and lenders by reducing regional imbalances and cyclical swings in credit availability and pricing.
In 1997, the guaranteed 7(a) secondary market served as a funding source for many lenders, particularly for about 50 institutions that generally lacked deposit bases, according to an SBA official. The 7(a) secondary markets can also help qualified borrowers by providing a means to lower interest rates or to make 7(a) loans available at more favorable terms. Institutional investors in 7(a) pool certificates and securities--including pension funds, mutual funds, insurance companies, and others--benefit from the greater liquidity and lower risks in pool certificates and securities compared to investments in individual loans. Figure 1 illustrates differences in how the guaranteed and unguaranteed markets work. Investors, poolers, lenders, and SBA face various risks from the 7(a) secondary markets. Investors in the guaranteed 7(a) market face prepayment risk, and investors and lenders in the unguaranteed 7(a) secondary market share prepayment and credit risks. The heterogeneity of 7(a) loans makes estimation of risks difficult for investors and limits the overall benefits of the secondary markets. The benefits of the secondary markets are also limited because interest rate risk in the 7(a) markets is low: most 7(a) loans carry variable interest rates pegged to current prime market rates, and few have fixed rates, so lenders, including most depositories, have little interest-rate-related incentive to sell the loans. SBA faces the possibility of a concentration of credit risk from both 7(a) secondary markets. The combination of the potentially large amount of funds and the economies of scale that individual lenders may achieve by increasing the number of 7(a) loans they sell could result in a concentration of 7(a) loans serviced by one or a few lenders. A sharp increase in loan defaults by, or the failure of, one such lender could be costly to SBA, which lacks controls for concentration risk. Moreover, our 1998 report cited inadequacies in SBA’s efforts to ensure sound SBLC lending practices.
While the participation of rating agencies in the unguaranteed marketplace encourages lenders to follow prudent lending practices, this factor alone may not adequately limit SBA’s credit risk from lender concentration. Although high ratings on securities backed by unguaranteed portions of 7(a) loans may indicate that lenders have followed prudent lending practices, lenders may be able to change these practices and operate for some time before the ratings are lowered to reflect the change; thus, some credit risk from lender concentration remains. In calendar year 1997, 1,540 SBA lenders sold 12,164 7(a) loans (about 45 percent of the 7(a) loans approved during the most recent fiscal year) in the guaranteed secondary market and collectively generated $2.7 billion in sales. About 50 of those lenders used the guaranteed 7(a) market extensively, selling every 7(a) loan they originated, according to an SBA official. These lenders generally lacked a sufficient deposit base to fund their loans. The unguaranteed 7(a) secondary market is much smaller than the guaranteed 7(a) market. At the end of 1997, guaranteed portions of outstanding 7(a) loans totaled $10 billion. Only nine 7(a) lenders--six nondepository lenders and three depository lenders--had securitized 20 pools of unguaranteed portions of 7(a) loans. One of these lenders, The Money Store, was responsible for 8 of the transactions, which accounted for about two-thirds of the total $1.25 billion in unguaranteed portions securitized since 1992. In 1992, SBA authorized the creation of the unguaranteed secondary market as a funding source for nondepository institutions. Since that time, these lenders have been able to sell pools of unguaranteed portions of their 7(a) loans in the secondary market. In 1996, Congress mandated that SBA revise its rules to allow all lenders to sell the unguaranteed portions of their 7(a) loans on the secondary market.
In April 1997, SBA promulgated an interim rule that extended the provisions of the previous rule to include all participating 7(a) lenders. Under the interim rule, SBA was to review each proposed securitization on a case-by-case basis for safety and soundness concerns. The interim rule remained in effect until April 12, 1999, when SBA’s final rule, promulgated February 10, 1999, became effective. The unguaranteed 7(a) secondary market comprises buyers and sellers of securities backed by the unguaranteed portions of 7(a) loans. Securities backed by unguaranteed portions of 7(a) loans can be issued and sold to investors in either public offerings or private placements. Unlike certificates backed by guaranteed portions of 7(a) loans, the public sale of these securities is subject to SEC registration and disclosure requirements. The Securities Act of 1933 requires that securities sold in a public offering be registered with SEC before they are distributed and that certain information regarding the securities and their issuer be disclosed to prospective buyers. Issuers can avoid the costly registration and reporting process required of public offerings by offering the securities in private placements. Private placements must be made on a limited basis to selected persons and may not be part of a general public solicitation. In general, private placements are less liquid than publicly traded securities. Eight of the 20 pools of unguaranteed portions of 7(a) loans were sold in public offerings. Securities backed by unguaranteed portions generally carry one or more credit enhancements to raise the ratings of these securities and attract investors. As of December 31, 1998, the senior class of all 20 securitizations of unguaranteed portions had investment grade ratings.
Although constrained by annual congressional appropriations for the 7(a) loan program, the 7(a) secondary markets can benefit borrowers of program loans by making loans available at more favorable terms, as other secondary markets do. Lenders that profit from the secondary markets may pass on some of their gains from lower funding costs to borrowers in the form of lower interest rates. Growth in SBA’s lending authority has accelerated since 1985, when SBA first allowed lenders to pool guaranteed portions of 7(a) loans for secondary market sales. SBA’s annual 7(a) loan volume grew steadily from $2.3 billion in fiscal year 1985 to $3.1 billion in fiscal year 1991, and $9.5 billion in fiscal year 1997. Secondary markets for investments backed by SBA 7(a) loan pools attract a wide range of institutional investors, including pension funds, mutual funds, insurance companies, and others that might not otherwise consider investing in small business loans. Investors in 7(a) pool certificates and securities benefit from greater liquidity and lower risks than they would get from investing directly in individual loans, because these instruments can be sold more easily than individual loans, and risks are dispersed among the pooled loans. Investors in guaranteed portions do not face credit risk because the SBA guarantee transfers that risk from the investor to SBA. Investors who purchase 7(a) pool securities backed by unguaranteed portions face both credit and prepayment risks. They typically share the credit risk with the lender. That is, SBA 7(a) pool securities typically carry one or more forms of lender-provided credit enhancement, which reduces credit risk and makes the securities more attractive to investors. The most common form of credit enhancement uses excess spread, which is a cash reserve funded by a portion of collections from borrowers’ loan payments on guaranteed and unguaranteed portions of the loans. 
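A minimal sketch of the excess spread mechanism, using invented rates and balances rather than actual 7(a) terms, might look like this:

```python
# Hypothetical sketch of excess spread as a credit enhancement.
# All rates and balances are invented for illustration; they are not
# actual 7(a) loan or securitization terms.

pool_balance = 5_000_000   # unpaid principal of the pooled loans
loan_rate = 0.0975         # rate borrowers pay on the pooled loans
investor_rate = 0.0700     # pass-through rate paid to security investors
servicing_fee = 0.0100     # retained by the servicing lender

# Interest collected but not passed through to investors or the servicer
# is the excess spread; it funds a cash reserve that covers late payments
# and defaults before investors miss a scheduled payment.
excess_spread_rate = loan_rate - investor_rate - servicing_fee
annual_reserve_funding = pool_balance * excess_spread_rate
print(f"Excess spread of {excess_spread_rate:.2%} funds about "
      f"${annual_reserve_funding:,.0f} of reserves per year")
```

The sketch shows why heavy credit enhancement is costly to the lender: every basis point diverted to the reserve is a basis point that cannot be kept as servicing income or passed on as sale proceeds.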
Another common form of credit enhancement used with securities backed by unguaranteed portions is subordination. In subordination, at least two classes of securities are created, with the subordinate classes subject to absorbing a prescribed amount of losses on the loans backing the securities. Credit enhancements provide funds to maintain scheduled payments to investors if borrowers go into default or are late in making payments. Such enhancements are required by credit rating agencies to bring securities to investment-grade ratings, as they act to mitigate credit risk. Providing such loss protection comes at a cost to the lender. The higher the credit enhancement needed to sell the securities, the lower the net proceeds to the lender from the sale of the securities and therefore the lower the lender’s incentive to securitize the loans. Investors and analysts can generally make more reliable estimates of the returns and risks of loan pools when the pools are homogeneous and when historical data are available on the performance of similar loans under varying economic conditions. The availability of large bases of historical data that include the loss experience of many comparable loans can enable investors and analysts to estimate loss probability distributions from those data and use the results to predict the expected loss experience for a pool of similar and homogeneous loans. When secondary market investors face high levels of credit risk or lack information to estimate such risks, they demand greater credit enhancements and yields. These factors act to lower the prices investors will pay for securities backed by the unguaranteed portions of 7(a) loans. A discussion of how credit rating agencies determine ratings for SBA loan-backed securitizations appears in appendix I. To compensate for prepayment risk, investors demand higher yields on 7(a) pool certificates than on Treasury securities with comparable maturities. As with Treasury securities, the U.S. 
government bears the credit risk on SBA pool certificates backed by guaranteed portions of 7(a) loans, but it does not bear the credit risk for securities backed by unguaranteed portions of 7(a) loans. Therefore, excess spread from guaranteed certificates is set aside to shoulder some of the credit risk burden associated with the unguaranteed portions of 7(a) loans. This use of excess spread as a credit enhancement affects the spread between the interest rate the borrower pays and the rate paid to investors in 7(a) pool certificates. To address prepayment concerns, the Bond Market Association has proposed that SBA consider imposing prepayment penalties structured to reduce borrower incentives to refinance 7(a) loans as nonguaranteed commercial loans at marginally lower rates. While flexibility for some borrowers would be constrained if prepayment penalties were imposed, 7(a) loans with such prepayment penalties could lead to more favorable loan terms for borrowers. Such a feature, which is often present in commercial business loans, would also mean less prepayment risk for investors in the 7(a) secondary market. A discussion of the mechanics of the guaranteed 7(a) secondary market appears in appendix II. The 7(a) secondary markets could also contribute to a concentration of credit risk for SBA. Because SBA generally guarantees 75 to 80 percent of each 7(a) loan, the failure of one or more large lenders to follow the prudent lending practices necessary for making creditworthy loans can expose SBA to credit risk once those loans go into default. As a large potential funding source for lenders, the 7(a) secondary markets can enable lenders active in the secondary markets to increase loan volume, and economies of scale could lead to concentration of the 7(a) portfolio among a few lenders. 
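The subordination structure described above can be illustrated with a short sketch. The class sizes and loss amounts below are hypothetical, and real securitizations allocate losses under more detailed deal documents; this shows only the basic senior/subordinate loss ordering.

```python
def allocate_losses(senior, subordinate, loss):
    """Apply pool losses first to the subordinate class, then to the senior
    class, returning the remaining principal balance of each class.
    A hypothetical sketch of the basic loss-ordering rule."""
    sub_loss = min(loss, subordinate)
    senior_loss = min(loss - sub_loss, senior)
    return senior - senior_loss, subordinate - sub_loss

# Hypothetical $10 million pool split into a $9 million senior class and a
# $1 million subordinate class. A $400,000 pool loss falls entirely on the
# subordinate class, leaving the senior class untouched.
senior_bal, sub_bal = allocate_losses(9_000_000, 1_000_000, 400_000)
```

Because the subordinate class absorbs the prescribed first losses, rating agencies can assign the senior class an investment-grade rating, consistent with the senior-class ratings noted earlier.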
Through the rating process, the marketplace encourages lenders to follow prudent lending practices and provide credit enhancements that will protect investors to a certain degree. However, this alone may not provide sufficient lender discipline to limit credit risk to SBA resulting from concentration of 7(a) loans in the servicing portfolio of a large lender that does not follow prudent lending practices. Although high ratings on securities backed by unguaranteed portions of 7(a) loans may indicate that lenders have followed prudent lending practices, lenders may be able to change these practices and operate for some time before the ratings are lowered to reflect the change; thus, some credit risk from lender concentration remains. Our 1998 report noted that weaknesses existed in regulatory oversight to help ensure that 7(a) lenders comply with requirements that mitigate SBA’s credit risk. It stated that SBA had established various lender standards and loan policies and procedures to help ensure that lenders follow prudent lending standards. However, the report noted, without conducting periodic, on-site lender reviews, SBA had no systematic means to help ensure that lenders’ actions did not render loans ineligible, uncreditworthy, or uncollectible, thereby increasing the risk of loss to the agency. Although financial institution regulators help ensure safe and sound operations, their oversight does not necessarily ensure that 7(a) portfolios are managed prudently. Perhaps of greater importance were weaknesses in oversight of SBLCs, which are licensed, regulated, and supervised by SBA. In our 1998 review of 5 of SBA’s 69 district offices, we found that SBA had not conducted regular, periodic reviews of lender compliance with its 7(a) loan standards or met its own standards for SBLC oversight. In commenting on this report (see app. 
IV), SBA stated that it has since reviewed all preferred lenders, and that the Farm Credit Administration, through an agreement with SBA, has completed the on-site portions of the SBLC reviews. While we have not evaluated these initiatives, they appear to be the type of actions that could mitigate credit risk to the agency resulting from lender concentration. SBA’s final rule, promulgated February 10, 1999, and effective April 12, 1999, includes provisions that are intended to control the agency’s credit risk in the 7(a) program. As discussed in further detail in appendix II, the final rule stipulates capital requirements for lenders and establishes requirements that lenders retain a subordinated interest in securities they create, based on each lender’s loss rate. The final rule also provides a monitoring component whereby a decline in a securitizer’s performance would trigger suspension of certain lending privileges. The pooling of loans in the 7(a) secondary market is an innovation widely applied in the much larger secondary markets for single-family residential mortgage loans, where whole mortgage loans, rather than separate portions of loans, are pooled to create tradable financial claims in the form of MBS. Compared to the secondary markets for 7(a) loans, the secondary markets for residential mortgages operate with greater incentives for lenders to sell the loans they originate. A comparatively greater proportion of mortgage lenders has an economic incentive to sell loans in the secondary market because they rely on secondary mortgage markets as their most important source of funding. In addition, depository institutions’ needs to manage interest rate risk associated with mortgage loans, coupled with risk-management opportunities provided by the secondary markets, provide an important incentive for those lenders to sell mortgage loans they originate. 
These factors, as well as the comparatively larger size of the primary and secondary markets in residential mortgages, contribute to larger percentages of residential mortgages being sold in secondary markets compared to 7(a) loans sold in 7(a) secondary markets. In the secondary mortgage markets, investors are better able to estimate cash flows and risks from investments backed by pools of mortgage loans. Investors are comparatively less able to reliably estimate cash flows and risks from investments backed by 7(a) loans because they lack historical data on the performance of similar loans under varying economic conditions and because the loan pools are heterogeneous. Nearly all federally insured single-family mortgages originated in 1997 were sold on the Ginnie Mae secondary market, compared to about 45 percent of the guaranteed portions of 7(a) loans on its respective secondary market. In the secondary market for conventional single-family residential mortgages without federal insurance, also known as the conventional conforming market, about 46 percent of the mortgages originated in 1997 and eligible for purchase by the enterprises were sold, compared to about 11 percent of unguaranteed portions of 7(a) loans originated that year. Among single-family conventional residential mortgages originated in 1997 and not eligible for purchase by the enterprises, about one-third were sold in this secondary market, called the nonconforming market. The percentage of conventional mortgages, both conforming and nonconforming, that were originated and sold in secondary markets was much greater than that for the unguaranteed portions of 7(a) loans. In addition, a greater percentage of fixed-rate conventional mortgage loans was sold in secondary markets than variable-rate loans. Greater homogeneity of single-family residential mortgage loans compared to 7(a) loans contributes to a higher percentage of loans sold in secondary markets. 
Similarities among residential mortgage loans, such as being backed by the same types of collateral, along with standard loan terms, increase the ability of secondary market investors to evaluate cash flows and loan collateral values and therefore the various risks associated with purchasing MBS. Finally, the larger sizes of the primary and secondary markets in residential mortgages relative to the respective markets in 7(a) loans contribute to the larger percentages of residential mortgages sold in secondary markets. Large mortgage loan pools allow more precise risk estimates and lower fees (per dollar loaned) associated with maintaining a secondary market. Table 1 displays statistics on the shares of residential mortgage loans and guaranteed portions of 7(a) loans sold in secondary markets in 1997. Nondepository institutions benefit relatively more from these secondary markets because they do not have a deposit base with which to finance the loans they make. Mortgage companies originate mortgages for resale in the secondary markets as a means to fund further mortgage originations. These companies retain servicing rights when they sell mortgages, thereby earning income from collecting and processing mortgage payments. Mortgage companies include independent firms without deposit bases as well as subsidiaries of depository institutions. They originate about three-fourths of federally insured, and about one-half of conventional, single-family mortgage loans. Because they can use the proceeds of their secondary market loan sales to finance more mortgage loans, the secondary mortgage markets allow mortgage companies to compete in the primary market for loan origination and servicing, even though they do not have a deposit base to finance the mortgages on their balance sheets. 
One reason mortgage companies originate a higher share of federally insured than conventional mortgages is the presence of an active secondary market in federally insured, fixed-rate residential mortgages dating back to the 1930s. Unlike the mortgage market, where nondepository institutions originate a majority of the loans, the vast majority of 7(a) loans are originated by depository institutions. However, as with mortgage loans, the 7(a) secondary markets allow nondepository institutions to compete in the primary 7(a) market for loan origination and servicing, even though they do not have a deposit base to finance 7(a) loans on their balance sheets. While mortgage companies fund mortgages from origination until time of sale in the secondary market, they do not permanently fund any portion of the originated loans on their balance sheets. In contrast, SBLCs have used debt sources, such as bank lines of credit, to fund the unguaranteed portions of 7(a) loans on their balance sheets. At the end of 1997, SBLCs accounted for about 19 percent of outstanding 7(a) loans. The presence of interest rate risk in a primary market increases the attractiveness of the secondary market to loan originators who may depend on a deposit base as a permanent funding source. Depository institutions can use their deposit base as a source of funding with costs that fluctuate frequently. These institutions use the secondary markets to manage interest rate risks. Long-term, fixed-rate loans generate interest rate risk for lenders who depend on a deposit base for funding loans because increases in short-term funding costs are not accompanied by increases in interest payments on existing loans. Variable-rate loans, of which adjustable-rate mortgages are one type, can also generate interest rate risk for lenders if caps are imposed on the maximum allowable increases in interest rates. 
Prior to the 1990s, virtually all FHA-insured mortgages were fixed-rate. Currently, over 70 percent of FHA-insured mortgages are fixed-rate mortgages. In addition, for adjustable-rate mortgages, FHA limits the degree to which interest rates paid by the borrower can increase to a maximum of 1 percentage point annually and 5 percentage points over the life of the mortgage loan, which acts to limit interest rate risk to the borrower but shifts this risk to the lender. As a result, FHA-insured adjustable-rate mortgages also entail interest rate risk for lenders. Nearly all federally insured residential mortgages were sold in the Ginnie Mae guaranteed secondary mortgage market in 1997. By comparison, about 45 percent of guaranteed portions of 7(a) loans were sold in the guaranteed secondary market in 1997. Over the past decade, depository institutions have played a relatively larger role in the origination of conventional rather than federally insured mortgages. Adjustable-rate mortgages currently account for about 20 to 25 percent of all single-family conventional mortgages. That percentage has been higher for nonconforming conventional mortgage loans originated. Interest rate caps vary among conventional mortgages but most typically are 2 percentage points annually and 6 percentage points over the life of the mortgage loan. These more flexible caps allow for less interest rate risk to the lender on conventional than on FHA-insured adjustable-rate mortgages. Based on the assumption that 20 to 25 percent of all single-family residential mortgages are adjustable rate, in the overall single-family residential mortgage market (i.e., federally insured and conventional), roughly 30 percent of adjustable-rate, and slightly over 50 percent of fixed-rate, outstanding mortgage loans were sold in secondary markets. The 30-percent figure for adjustable-rate mortgages generally corresponds to the percentage of the guaranteed portions of 7(a) loans sold in their secondary market. 
In contrast to mortgage loans, where fixed-rate loans prevail, about 90 percent of 7(a) loans originated in 1997 were variable-rate loans that adjust quarterly without interest rate caps. Because of this, depository institutions have minimal exposure to interest rate risk when they use their deposit bases to finance 7(a) loans and thus have less incentive to sell on the secondary market than if their risk were higher. MBS investors face prepayment risk, the risk that mortgage borrowers will pay off their mortgages before the final payment date. Residential mortgage borrowers typically prepay their fixed-rate mortgage loans when mortgage interest rates decline significantly below that of their existing mortgage. As a result, investors in MBS backed by cash flows from fixed-rate mortgage loans may not benefit from higher yields when interest rates in the economy decrease because many of the mortgages backing the MBS may be paid off, thereby terminating the cash flows from those mortgages. The yield for these investors, however, does not increase when interest rates in the economy rise above those in the mortgages backing the MBS. SBA 7(a) borrowers typically prepay when they find better alternatives; from the standpoint of the investor, prepayment also occurs with borrower default. Because most 7(a) loans have variable interest rates, prepayments based on economywide interest rate changes are less likely. Residential mortgage borrowers with fixed-rate mortgages tend to prepay loans when interest rates decline because they can reduce their monthly mortgage payments by refinancing at the lower rates. Other prepayment situations occur when a borrower sells a residence or, in the case of guaranteed or insured mortgages, when foreclosure action against a borrower generates a prepayment. However, most mortgage prepayments occur because of declining interest rates, such as in 1993, when a majority of mortgage originations were refinancings as interest rates declined. 
This form of prepayment is easier to forecast because interest rate forecasts and historical data on the relationship between interest rate movements and mortgage prepayments are available. Prepayments for 7(a) loans are tied more to business performance than to the movement of interest rates in the general economy and, as a result, are not as predictable as mortgage prepayments. As with residential mortgage borrowers, 7(a) borrowers have an incentive to prepay their 7(a) loans when the opportunity to obtain loans at more favorable terms arises. However, the determining factors for these prepayment opportunities differ for 7(a) borrowers. As most 7(a) loans have variable interest rates, these rates decrease in tandem with declining interest rates in the economy. SBA 7(a) loans serve a wide variety of businesses and business owners who then experience varying levels of financial success. Those that experience financial success or establish good credit can prepay their 7(a) loans by obtaining conventional loans at more favorable rates from private market lenders. On the other hand, those that default trigger SBA guarantee payments to prepay the guaranteed portions of the loans. These forms of prepayment are more difficult to forecast than residential mortgage prepayments. Prepayment also shortens the period during which investors who paid premiums for 7(a) pool certificates will realize the higher yields they anticipated. As a result, whatever prepayment risk exists for investors in 7(a) pool certificates and pool securities is magnified for investors who pay premiums on guaranteed pool certificates because they are less likely to recoup their premiums when borrowers prepay. When government guarantees covering loan payments are absent, lenders, issuers, and investors face credit risk because losses occur when borrowers default on their loan payments. 
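The premium-recoupment problem noted above can be made concrete with a rough calculation. The prices and coupon below are hypothetical, and the formula is a simple approximation that ignores reinvestment and the exact timing of cash flows.

```python
def approx_annual_yield(price, par, coupon_rate, years_held):
    """Rough annualized return on a certificate bought at `price`, paying
    `coupon_rate` on `par`, and repaid at par after `years_held` years:
    total coupon income, less the unrecouped premium, spread over the
    holding period. A simplified, hypothetical sketch."""
    coupon_income = par * coupon_rate * years_held
    premium_lost = price - par  # the premium is not returned at payoff
    return (coupon_income - premium_lost) / price / years_held

# Hypothetical certificate bought at 104 with a 9% coupon on par of 100.
# Held 5 years, the realized yield stays near the coupon; prepaid after
# 1 year, the lost premium cuts the realized yield sharply.
held_long = approx_annual_yield(104.0, 100.0, 0.09, 5)
prepaid_early = approx_annual_yield(104.0, 100.0, 0.09, 1)
```

The earlier the prepayment, the larger the share of the purchase premium that is never recovered through coupon income, which is why premium investors bear magnified prepayment risk.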
Secondary market participants have developed methods to project expected cash flows and determine credit risk on a given pool of loans using lender, borrower, and loan characteristics. Lenders establish underwriting standards, which include maximum loan-to-value ratios and loan payment-to-income ratios. Other issuers, such as Fannie Mae and Freddie Mac, have their own established underwriting standards. To insure themselves against losses, issuers often require lenders to provide credit enhancements and borrowers to purchase mortgage insurance. However, such requirements could reduce lenders’ incentives to sell their loans on secondary markets. Lenders limit their own credit risk by establishing underwriting standards for the loans they make and developing relationships with borrowers. Because the performance of the individual borrower is especially important to the cash flows for business loans, relationships with borrowers, in addition to protections created through underwriting, are more important in assessing credit risk for business loans than for residential mortgage loans. Securities issuers establish practices intended to help ensure that lenders who sell their loans have incentives to limit credit risk on those loans. For example, lenders who provide credit enhancements share the credit risk with MBS issuers and investors. MBS issuers often reduce their exposure to credit risk by requiring borrowers to purchase private mortgage insurance from a mortgage insurance company. These companies, in turn, also establish underwriting standards. The enterprises provide corporate guarantees for timely payment of principal and interest on MBS backed by single-family residential mortgage loans they purchase. Where the loan amount exceeds 80 percent of the value of the housing unit serving as collateral, they normally require borrowers to purchase private mortgage insurance. 
The enterprises generally do not require lender-provided credit enhancements on single-family mortgage loans they purchase. They estimate and manage their exposure to credit risk using techniques developed to estimate the value of housing collateral, loan-to-value ratios, borrower payment burdens, and the relationships between these variables and loan losses. Private-label conduits that purchase single-family residential mortgage loans not eligible for purchase by the enterprises follow some of the same practices as the enterprises. Private-label conduits provide guarantees for timely payment of principal and interest, but unlike the enterprises, they normally pass on some of the credit risk to MBS investors. Private-label conduits also rely on private mortgage insurance and use the same techniques as the enterprises to manage their exposure to credit risk. The conduits, however, use forms of credit enhancement, such as subordination, not normally used by the enterprises. As previously mentioned, credit enhancement is used to maintain scheduled payments to investors if borrowers go into default or are late in making payments and to bring securities to investment-grade ratings, as it acts to mitigate the investor’s credit risk. Providing such loss protection comes at a price to the conduit, because the net proceeds the conduit pays the lender will be lowered by the cost of providing the credit enhancement, thus lowering the lender’s incentive to securitize the loans. In 1997, about 11 percent of unguaranteed portions of 7(a) loans were sold in the secondary market, compared to about one-third (32 percent) of nonconforming conventional mortgages sold in the secondary mortgage market. 
One reason given for the lack of development of the unguaranteed 7(a) secondary market is the relatively high costs to secondary market investors and rating agencies for monitoring 7(a) lenders and borrowers to assess credit risks based on available data. These costs are exacerbated by loan heterogeneity, including the various forms of businesses financed. For example, the value of the business funded is largely determined by the performance of the business operators. In contrast, the value of a housing unit providing collateral for a residential mortgage loan can be determined with little regard to borrower characteristics. Due to a number of factors, investors have a greater ability to estimate risks in secondary markets for residential mortgage loans than in 7(a) secondary markets. The most important factor we have identified is the greater homogeneity of residential mortgage loans backing each MBS, compared to that of 7(a) loans backing 7(a) pool certificates and securities. Investors’ ability to estimate risk of securities backed by pools of loans is also affected by the availability of historical data on the performance of similar loans under varying economic conditions and by the information provided to investors. Reliable estimates of prepayment rates and loan losses can be more easily attained when loan pools are geographically diverse, loans are relatively homogeneous, and historical data on prepayments of similar loans under varying economic conditions are available. The greater homogeneity of residential mortgage loans backing each MBS, compared to that of 7(a) loans backing 7(a) pool certificates, facilitates estimating investors’ prepayment and credit risks. The presence of large, geographically diverse loan pools and large historic databases of residential mortgage loan performance experience on loans with common characteristics also facilitates estimating the prepayment risk of MBS investors relative to 7(a) secondary market investors. 
The low level of unguaranteed portions securitized to date reflects the difficulty of estimating prepayment and credit risks on heterogeneous loans. Cash flows of residential mortgages with equivalent payment terms (e.g., 30-year fixed-rate, 15-year fixed-rate, or adjustable-rate payment terms) back each single-family MBS, and participants in the secondary market can analyze large historic databases of prepayment histories of residential mortgages. The Bond Market Association establishes benchmark prepayment rates based on historic experience. Securities dealers use historic databases and financial forecast models to estimate future prepayment rates on mortgage loans that back each MBS issuance, which they, in turn, relate to the Bond Market Association benchmarks. Geographic diversification of a loan pool backing an MBS issuance lessens the probability that an unexpected adverse change in economic conditions in any one part of the nation will have a large impact on the cash flows backing the MBS. Large databases with historic information on prepayments for loans with specific characteristics are not available for the 7(a) markets as they are in the secondary mortgage markets, which means that investors in 7(a) secondary markets cannot estimate their credit and prepayment risks as well as MBS investors can. The heterogeneity (i.e., the wide variety of unique characteristics) of 7(a) loans lessens the ability of investors to estimate prepayment risk because the effects of some unique characteristics cannot be estimated. SBA 7(a) loans differ in collateral type and the type of business to which each loan is made. Even within each business category, the performance of small business loans is heavily affected by business-owner capabilities that are not captured in historic databases. 
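The benchmark prepayment rates mentioned above are typically quoted as an annual constant prepayment rate (CPR). A standard market convention, sketched below, converts the CPR to a single monthly mortality (SMM) rate for projecting monthly pool cash flows; the 10-percent CPR in the example is hypothetical.

```python
def smm_from_cpr(cpr):
    """Convert an annual constant prepayment rate (CPR) to the equivalent
    single monthly mortality (SMM) rate."""
    return 1.0 - (1.0 - cpr) ** (1.0 / 12.0)

def balance_after_months(balance, cpr, months):
    """Pool balance remaining after `months` of prepayments at a constant
    CPR, ignoring scheduled amortization (a simplifying assumption)."""
    return balance * (1.0 - smm_from_cpr(cpr)) ** months

# At a hypothetical 10% CPR, a pool loses 10% of its balance to
# prepayments over 12 months: a $100 balance declines to $90.
remaining = balance_after_months(100.0, 0.10, 12)
```

Dealers compare pool-specific estimates like these against published benchmarks when pricing MBS; for 7(a) pools, the heterogeneity described above makes the CPR itself far harder to estimate than for mortgage pools.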
In addition, when individual loan pools are relatively small, the presence of a few loans with unique characteristics can have a relatively large impact on the prepayment experience of the loan pool. The presence of a large number of lenders making a relatively small number of loans can also affect prepayment risk estimation due to the presence of unique characteristics relating to lenders and their loan practices. As table 2 shows, each 7(a) guaranteed pool averaged 26 loans in contrast to 42 loans for each Ginnie Mae MBS issued in 1997. Ginnie Mae requires that its MBS investors receive an offering statement that discloses the issuer--normally the lender--of the MBS, the maturity dates, the principal amount of loans in the pool, and loan characteristics, such as whether they are fixed- or adjustable-rate. Each month, Ginnie Mae computes a factor number, based on the remaining principal balances reported monthly by MBS issuers, for each loan pool. These factors are used to determine the amount of the original principal that will remain outstanding after the next payments are made on the pooled loans. Ginnie Mae requires each approved issuer to apply for commitment authority to issue a maximum dollar amount of MBS. Determinations of commitment authority levels for the largest lenders are based on examinations of each lender’s financial capacity and lending practices. Ginnie Mae guaranteed MBS are exempt from SEC registration and reporting requirements. (A discussion of the securities laws as they apply to the registration of the securities offered in the secondary markets in residential mortgages and SBA 7(a) loans appears in app. III.) While private-label MBS are subject to SEC registration and reporting requirements, Fannie Mae and Freddie Mac MBS are exempt from these requirements. However, according to enterprise officials, information on the offering statements for Fannie Mae and Freddie Mac MBS parallels that provided by private-label conduits. 
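The monthly factor computation described above is straightforward arithmetic. The sketch below uses hypothetical balances and is not Ginnie Mae’s or SBA’s actual reporting code.

```python
def pool_factor(remaining_principal, original_principal):
    """Monthly pool factor: the fraction of the pool's original principal
    still outstanding, derived from issuers' reported remaining balances."""
    return remaining_principal / original_principal

def outstanding_from_factor(original_principal, factor):
    """Recover the dollar amount outstanding from the original principal
    and a published factor."""
    return original_principal * factor

# Hypothetical pool: $5 million original principal with $3.2 million still
# outstanding yields a factor of 0.64. An investor multiplies the original
# face value of a certificate by the published factor to track its balance.
factor = pool_factor(3_200_000, 5_000_000)
```

SBA’s fiscal and transfer agent publishes analogous monthly factor tables for 7(a) pool certificates, as noted below for the disclosure comparison.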
In addition to details provided in Ginnie Mae offering statements, those provided by the enterprises and private-label conduits include the geographic distribution of housing units financed by loans in each pool, a detailed description of loan characteristics, and a detailed discussion of risk factors. SBA 7(a) pool certificates are also exempt from SEC registration and reporting requirements. SBA requirements for information provided to investors in 7(a) pool certificates are somewhat similar to Ginnie Mae information requirements for its guaranteed MBS. SBA requires that 7(a) pool certificate investors be provided information on interest rate, maturity date, and aggregate original pool principal amount, as well as the pool certificate’s estimated constant prepayment rate and the loan pools used in determining the rate. SBA’s fiscal and transfer agent provides monthly factor tables similar to those provided for Ginnie Mae. SBA officials told us that dealers can provide information on pool certificates they resell to other investors, but that in many instances such ongoing information on loan pools, such as which loans in a pool have been paid off, is not available to investors. Because unguaranteed 7(a) pool securities are not backed by the SBA guarantee and are not considered agency securities, they are subject to SEC registration and reporting requirements. Information provided in the prospectuses for public offerings that we reviewed included investment risks; the number of loans in a pool; the states where the loans were originated; and loan maturities, forms of loan collateral, and interest rates. However, the small volume in this secondary market reflects the negative impact of loan heterogeneity on the share of loans sold. SBA officials told us that they are currently considering proposals for expanding the information the agency makes available to investors in 7(a) pool certificates. 
For example, they told us that they are considering disclosing information such as the state where a small business is located and the industry in which it operates. Officials said that they are willing to consider providing such information now that the average number of loans backing each pool has grown from 18 in 1992 to 26 in 1997. In larger pools, such information is less likely to reveal the identity of individual borrowers and expose them to potentially burdensome investor inquiries. The officials told us that such disclosures could negatively affect loan marketing for loans in locations and industries that may be construed as having relatively large prepayment risk. SBA officials told us that, while they had previously considered introducing fixed-rate loans with prepayment penalties and relaxing interest rate ceilings on fixed-rate loans, they believed that such changes would result in smaller pool sizes that could negatively affect prepayment risk diversification. Over $500 billion in MBS guaranteed by Ginnie Mae is currently outstanding. Ginnie Mae-approved lenders issue MBS backed by cash flows from federally insured residential mortgages. Ginnie Mae’s fee of 6 basis points covers its guarantees for timely payment of principal and interest on these securities and pays for business expenses; default losses not covered by primary mortgage insurance; and contractual payments to business firms that provide processing, payment, and transfer services. Lenders that issue Ginnie Mae guaranteed MBS can retain 44 basis points of outstanding principal balance for servicing the mortgage loans. Therefore, the interest rate spread between the interest rate paid by the borrower and received by the MBS investor is about 50 basis points. More than $1.3 trillion in MBS guaranteed by Fannie Mae and Freddie Mac was outstanding as of June 30, 1998. Most single-family residential mortgages purchased by the enterprises are conventional mortgages without federal insurance. 
Guarantee fees charged by the enterprises average about 20 basis points per loan. With these fees, the enterprises cover default losses; business expenses; and payments to contractors for processing, payment, and transfer services. According to a Fannie Mae official, lenders who sell residential mortgages to the enterprises are allowed to retain, on average, about 30 basis points of outstanding principal balance for servicing the mortgage loans. Therefore, the interest rate spread between the interest rate paid by the borrower and received by the MBS investor is approximately 50 basis points. About $10 billion in 7(a) pool certificates is currently outstanding. SBA 7(a) lenders sell their loans to pool assemblers who form pools by combining loans from various lenders and then sell certificates backed by these pools. Colson Services, SBA’s fiscal and transfer agent, monitors and handles the paperwork and data management system for all 7(a) guaranteed portions sold on the secondary market and serves as a central registry for all sales and resales of these portions. Lenders pay Colson 12.5 basis points of the certificates’ value for the firm’s secondary market services on guaranteed portions under its management. This cost, relative to Ginnie Mae’s entire 6 basis point fee, suggests that large volumes of activity generate economies of scale in the provision of functions such as processing, payment, and transfer services. SBA does not limit the amount of servicing fees that lenders can retain on guaranteed portions of 7(a) loans they sell. Lenders’ servicing fees generally range from 100 to 300 basis points. According to Colson officials, lenders who collect servicing fees in the lower portion of this range are more likely to sell their loans at a premium. 
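The basis-point arithmetic in the two preceding paragraphs can be checked with a short sketch (one basis point equals 0.01 percentage point). The function name is ours; the figures are the report's approximate averages:

```python
# Illustrative check of the interest rate spreads described above.
# One basis point (bp) equals 0.01 percentage point.

def borrower_investor_spread_bp(guarantee_fee_bp, servicing_bp):
    """Spread between the rate the borrower pays and the rate the
    MBS investor receives: guarantee fee plus retained servicing."""
    return guarantee_fee_bp + servicing_bp

# Ginnie Mae: 6 bp guarantee fee + 44 bp retained servicing.
print(borrower_investor_spread_bp(6, 44))   # 50
# Fannie Mae / Freddie Mac: ~20 bp guarantee fee + ~30 bp servicing.
print(borrower_investor_spread_bp(20, 30))  # 50
```

In both cases the spread works out to about 50 basis points, which is why the report treats the two guarantee structures as roughly equivalent from the borrower's perspective.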
As lenders use servicing fees, in part, to compensate for the credit risk they incur from the unguaranteed portions of 7(a) loans, these servicing fees are not comparable to servicing fees retained by residential mortgage lenders. A financial market is more liquid if market participants can buy, sell, and resell large amounts of holdings without affecting the prices of traded securities. In 1997, the dollar amount of resold Ginnie Mae guaranteed MBS was more than twice that of newly issued Ginnie Mae MBS, while in the 7(a) certificate market resales accounted for just over a third as many sales as newly issued certificates. Based on this evidence and discussions with SBA officials, we have concluded that Ginnie Mae guaranteed MBS and MBS issued by the enterprises and private-label conduits are more liquid than 7(a) pool certificates and securities. The homogeneity of single-family mortgage loans and the availability of a large historical database of information on them allow MBS investors to better estimate cash flows and risks than investors in 7(a) loans. Also, federal insurance and the timely payment guarantee eliminate credit risk to the investor in Ginnie Mae guaranteed MBS. While SBA’s timely payment guarantee on pool certificates aids liquidity, the problems inherent in estimating prepayment risk on 7(a) loans because of their heterogeneity hinder liquidity. The relative lack of available information on resold pool certificates, compared to newly issued certificates, also limits liquidity on resales. While we lack resale statistics for private-label MBS, enterprise officials have told us that there is a lower level of liquidity in the nonconforming secondary mortgage market than in the conforming market. Credit risk in both the nonconforming secondary market and that for unguaranteed 7(a) pool securities can hinder liquidity. 
However, because of the added difficulty in estimating credit risk on heterogeneous loans, this factor is likely greater for investors in unguaranteed 7(a) pool securities. In addition, most 7(a) pool securities have been private placements, which by definition are less liquid investments. By linking borrowers and lenders in local markets to national capital markets, secondary markets benefit lenders, borrowers, and investors. The share of loans in a primary market that is sold in a secondary market depends on the benefits that particular secondary market generates. The benefit to individual lenders of selling loans in a secondary market depends, in part, on demand for that lender's loans and the availability and costs of the lender’s alternative funding sources. Other considerations include whether holding loans on the balance sheet or selling them in the secondary market brings higher returns on invested capital and/or lowers the lender’s risks. Secondary markets also allow nondepository lenders, which lack customer deposits to fund loans held in portfolio, to compete in primary loan origination markets. In 1997, about $2.7 billion in guaranteed portions and about $290 million in unguaranteed portions of 7(a) loans were sold in the two respective secondary markets, representing about 45 and 11 percent, respectively, of originations that year. Lenders participating in these markets can reduce funding costs, and they can pass along some of their savings in the form of more favorable loan terms to borrowers. However, both 7(a) secondary markets lack certain attributes that permit reliable statistical risk analysis. The most important factors relate to primary market characteristics of 7(a) loans. With recent growth in the 7(a) guaranteed market and in the number of loans in each loan pool, SBA is considering proposals to expand information disclosed to investors in 7(a) pool certificates. 
SBA has recently promulgated a rule, effective April 12, 1999, regarding sale of unguaranteed portions of 7(a) loans on the secondary market. SBA is concerned that such secondary market sales could reduce lender incentives to follow prudent lending standards and thereby increase risk to SBA. SBA is also concerned about the concentration of credit risk to the agency that could result from an active unguaranteed secondary market, which could increase the share of 7(a) loans accounted for by a small number of large lenders. The guaranteed and unguaranteed secondary markets in 7(a) loans are smaller and less active than residential mortgage loan secondary markets, and a smaller share of loans from the primary markets is sold in the 7(a) secondary markets. Differences in the shares of loans sold in these secondary markets reflect certain factors in the primary and secondary markets. Notable factors affecting these secondary market outcomes include the relative (1) preponderance of fixed interest rates in the residential mortgage market, (2) homogeneity of loan characteristics among loans contained in each loan pool in that market, and (3) ability of residential mortgage market investors to evaluate the risks associated with each loan pool. These factors affect lenders’ incentives to sell loans to mitigate interest rate risk, aid poolers in assessing loan characteristics, and assist investors in estimating their risks. Differences in the 7(a) markets lower the benefits provided by the 7(a) secondary markets compared to the secondary mortgage market and therefore the incentives to participate in these markets. SBA will continue to face a number of challenging issues in the administration of the two secondary markets for 7(a) loans. For example, uncertainty surrounds the future development of the unguaranteed secondary market for 7(a) pool securities. 
This secondary market could (1) continue to be small largely as a result of loan heterogeneity, (2) allow nondepository institutions to grow with an associated possible increase in competition in the primary 7(a) market, or (3) increase concentration risk to the 7(a) guarantee program. SBA’s Associate Deputy Administrator for Capital Access provided written comments on a draft of this report, which are summarized below and reprinted in appendix IV. SBA also provided technical comments that were incorporated into the report where appropriate. Ginnie Mae also provided technical comments that were incorporated into the report where appropriate. SBA’s comment letter stated that the report fairly represents that the activity level in the secondary market for either the guaranteed or unguaranteed portion of Section 7(a) loans is a function of lender liquidity and/or lender structure. It also pointed out that since our 1998 report on SBA oversight was issued, SBA has performed oversight reviews of all of its preferred lenders and that the Farm Credit Administration, under an agreement with SBA, has completed the on-site portions of SBLC safety and soundness reviews. SBA noted that its headquarters is in the final editing stages of a lender oversight system to be used by both headquarters and field office staff for reviewing 7(a) lenders. SBA also stated that it has established a risk management committee that uses computerized data to manage the portfolio. We have not evaluated these initiatives, but they appear to be the type of actions that could mitigate credit risk to SBA resulting from lender concentration. 
We are sending copies of this report to Senator Christopher Bond, Chairman, and Senator John Kerry, Ranking Minority Member, Senate Committee on Small Business; Representative James Talent, Chairman, and Representative Nydia Velazquez, Ranking Minority Member, House Committee on Small Business; Representative Danny Davis, Ranking Minority Member, Government Programs and Oversight Subcommittee, House Committee on Small Business; The Honorable Aida Alvarez, Administrator, Small Business Administration; and other interested parties. Copies will also be made available to others upon request. Please call me or Bill Shear, Assistant Director, at (202) 512-8678 if you or your staff have any questions concerning the report. Other major contributors to this report are listed in appendix V. This appendix discusses aspects of the unguaranteed 7(a) secondary market, including its size and development, and how securities backed by unguaranteed portions of 7(a) loans are issued. It also discusses the disclosure requirements that pertain to issuance of the securities. Finally, it discusses regulatory and market mechanisms in the 7(a) secondary market that help ensure the safety and soundness of the SBA 7(a) program. The secondary market for unguaranteed portions of SBA 7(a) loans is newer and smaller than that for the guaranteed portions. SBA first authorized the sale of unguaranteed portions of 7(a) loans on the secondary market in 1992, 8 years after pooled guaranteed portions were authorized to be sold. Recognizing that nondepository institution lenders lack customer deposits to fund their 7(a) lending, SBA initially allowed only those lenders to securitize their unguaranteed portions. In 1997, about $290 million in unguaranteed portions of SBA loans were sold on the secondary market compared to $2.7 billion dollars in guaranteed portions. 
As of December 31, 1998, 20 pools of unguaranteed portions totaling about $1.25 billion had been sold since the sales were authorized in 1992. Generally, securitizations of small business loans without federal guarantees have been limited. According to biannual reports issued jointly by the Board of Governors of the Federal Reserve System and SEC, based on bank call reports, the total of small business loans—loans of less than $1 million—held by domestically chartered commercial banks was about $370 billion as of June 30, 1998. These reports also stated that less than $3 billion in nonguaranteed small business loans had been securitized as rated offerings through the first half of 1998. This total includes about $1.2 billion in securitized unguaranteed portions of SBA loans. As discussed elsewhere in these reports, the securitization of small business loans is slowed by characteristics of those loans that inhibit analysis by rating agencies and investors. The loans are not homogeneous, underwriting standards vary across originators, and information on historical loss rates is typically limited. To the extent that it is cost effective, one or more credit enhancements can be included in each securitization transaction to compensate for these characteristics. Credit enhancements are payment support features that cover defaults and losses up to a specific amount on loans backing a security, thereby reducing investor need for costly loan-specific information. In other words, credit enhancements act to increase the likelihood that investors will receive interest and principal payments even in the event that full payment is not received on the underlying loans. However, the higher the level of credit enhancement needed to sell the securities, the lower the net proceeds from the sale of the securities and the weaker the incentive for lenders to securitize their loans. 
Securitizations of unguaranteed portions of 7(a) loans may be limited, in part, by the relatively small number of lenders with sufficient loan volume to create pools to back the securities. According to Moody’s Investors Service, pool size typically ranges from 250 to 2,000 loans. (Although SBA sets no minimum number of loans for these pools, the more loans there are in a pool, the less risky the securities backed by that pool tend to be. Less risk translates to lower credit enhancement requirements and therefore less cost for enhancement, resulting in more profit for the issuer.) SBA rules currently allow securitizing only pools of unguaranteed portions of 7(a) loans originated by a single lender; however, SBA’s final rule, effective April 12, 1999, provides for case-by-case consideration of multiple-lender securitizations of unguaranteed portions as well. Also, prior to April 2, 1997, only nondepository institution lenders had been authorized to securitize those portions of the loans. Of the nine issuers to date, six are SBLCs. Several factors could lead to increased issuance of securities backed by the unguaranteed portions of SBA loans, as follows: (1) larger portfolio size of some lenders due to the substantial growth of the SBA program in recent years; (2) favorable performance of SBA securitizations to date; (3) recent inclusion of depository lenders among those authorized to sell securities backed by unguaranteed portions; (4) pending participation by loan conduits, which will enable the creation of multilender pools to support securitizations of unguaranteed portions, as in the secondary market for guaranteed portions. 
SBA will consider such multilender securitizations on a case-by-case basis under the final rule effective April 12, 1999. A fifth factor is ongoing improvement in the understanding and underwriting of small business loans, such as credit scoring, which may help determine and manage credit risks and improve the design of securitizations based on small business loans. A lender’s unguaranteed portions of 7(a) loans are securitized through the pooling and sale of those portions to a bankruptcy-remote special purpose vehicle (SPV). Securities backed by unguaranteed 7(a) loan portions are issued and sold to investors in either a private placement or public offering. Investors receive an ownership interest in the right to receive the principal of the pooled unguaranteed portions with interest. The stream of interest and principal payments is divided, according to the structure of the securitization, into various classes, which give securities holders differing priorities to, and allocable interests in, such payment streams. These offerings are assigned ratings by securities rating agencies. The most risk-averse investors look for an investment-grade rating to be reasonably sure of getting reliable cash flows. Investors willing to take on more risk can invest in lower-rated offerings, which offer higher expected returns. According to SBA officials, investors in SBA unguaranteed portions are generally institutional investors such as banks, pension funds, and credit unions that typically are required to restrict their investments to those of investment grade. All 20 securitizations of unguaranteed portions as of December 31, 1998, were investment-grade rated. Securitizations offered for public sale must be registered with SEC and meet its requirements for disclosure of information relating to the securities. 
The Securities Act of 1933 (the Securities Act) and the Securities Exchange Act of 1934 (the Exchange Act) require securities issuers to disclose information to help investors assess the risks of a particular publicly traded security. The Securities Act specifies registration and disclosure requirements, and the Exchange Act requires continuing disclosure after a security is issued. The required disclosures are made in a prospectus or offering statement that is distributed in connection with the offer and sale of a security. In a private placement, the issuer can avoid the costly registration and reporting process required of a public offering. Administrative and judicial decisions provide the criteria for determining whether a transaction does not involve a public offering. In addition, in order to minimize the uncertainty about reliance on the private offering exemption, SEC has a safe harbor rule exempting transactions that meet its requirements. In general, private placements are less liquid than publicly traded securities. Of the nine issuers of securities backed by the unguaranteed portion of 7(a) loans as of December 31, 1998, one, The Money Store, issued publicly traded, registered securities, while the other eight have sold their securities through private placements. The Money Store accounted for 40 percent of the total 7(a) securities transactions as of December 31, 1998, and about two-thirds of the $1.25 billion total for all securitizations as of that date. Securities rating agencies play an important role in determining how securities should be structured and priced to appeal to investors. To understand how securities rating agencies approach the rating of SBA loan-backed securitizations, we reviewed reports on this subject published by Moody’s Investors Service and Standard and Poor’s. 
According to the reports, the agencies estimate the ability of a transaction to pay interest and principal fully and in a timely manner under varying levels of stress. The agencies analyze historical performance data, including SBA loss performance studies of loans from common origination periods and portfolio data from specific lenders. If a loss curve cannot be developed for a specific originator due to lack of sufficient historical data, the SBA aggregate loss curve may be used to project losses for recently originated pools. Generally speaking, the more limited the data, the less precise the loan pool performance estimates can be, which requires a higher level of credit enhancements. Such factors as the age of the securitized loans in the portfolio, referred to as seasoning; the degree of industrial and geographic diversity of the loans in the pool; and the number of loans in the pool also play a role in loss performance analyses. In reviewing the originator’s and servicer’s operations to gain insight into the policies and procedures they use to originate, underwrite, and service the SBA loans, a rating agency might look at such areas as management and financial strength, credit origination and approval, servicing and collection practices, back-up servicing, workout and liquidation policies, environmental issues, and data processing and reporting. The Moody’s report states that the ultimate credit quality of a security depends not only on the riskiness of the underlying loans but also on the manner in which the transaction is structured to channel the benefits of payments from borrowers to investors. As mentioned earlier, SBA 7(a) securities are usually structured in classes that provide differing streams of interest and principal payments, reflecting securities holders’ differing priorities to, and allocable interests in, such payment streams. 
A typical two-class structure would have senior and subordinate classes, with the subordinate class typically providing protection against principal and interest shortfalls for the senior class after the exhaustion of funds set aside to provide such protection. The funds that are set aside to provide the protection come from excess spread from the sale of both the guaranteed and unguaranteed portions. All securitizations of the unguaranteed portions of 7(a) loans done before June 30, 1998, have used subordination and excess spread from the sale of both the guaranteed and unguaranteed portions to enhance the securities. Although credit rating agencies gave subordinated classes lower investment grade ratings than the senior classes, subordinated classes offer higher returns to compensate for the added risks. In order for securitization to be feasible, the interest received from the loans in the pool must exceed the sum of interest paid to security holders and the costs of organizing the securitization. The excess spread from the guaranteed portions and the spread generated from the sale of the unguaranteed portions have been used to enhance the credit for transactions where the unguaranteed portions are securitized. When the lender receives loan payments, it remits the portion of the payment on the unguaranteed portion, along with the excess spread (minus fees) to a collections account established by a trustee for the benefit of the investors. The trust uses these funds to make necessary payments, such as interest and principal on the securities, spread account deposits, and servicing fees. Should a default occur in the pool, the cash flows that would have come from the defaulted loan would be paid from the excess fund account until it is depleted. 
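The loss-absorption ordering described above (spread account first, subordinate class next, senior class only after both are exhausted) can be sketched in a few lines. The function and dollar amounts below are hypothetical illustrations, not a model of any actual transaction:

```python
# Minimal sketch of subordination and excess-spread loss absorption.
# All balances and the simple ordering are hypothetical; real
# transactions use more elaborate payment waterfalls.

def apply_default(spread_account, subordinate_balance, shortfall):
    """Absorb a payment shortfall: the spread account pays first,
    then the subordinate class, and any remainder falls to the
    senior class as a loss."""
    from_spread = min(spread_account, shortfall)
    shortfall -= from_spread
    from_sub = min(subordinate_balance, shortfall)
    shortfall -= from_sub
    return (spread_account - from_spread,
            subordinate_balance - from_sub,
            shortfall)  # remaining shortfall hits the senior class

# A $100,000 shortfall against a $60,000 spread account and a
# $500,000 subordinate class leaves the senior class untouched.
spread, sub, senior_loss = apply_default(60_000, 500_000, 100_000)
print(spread, sub, senior_loss)  # 0 460000 0
```

The senior class takes a loss only when both cushions are depleted, which is why rating agencies can assign it a higher rating than the underlying loans alone would support.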
Because unguaranteed portions lack the SBA guaranty, SBA involvement in the unguaranteed 7(a) secondary market is limited to ensuring that the safety and soundness of the 7(a) program are protected before it approves a securitization. SBA does not set minimum pool sizes or dictate the range of loan terms for loans in a pool of unguaranteed 7(a) loan portions as it does for pools of guaranteed portions. SBA establishes requirements for lenders who wish to sell their unguaranteed portions on the secondary market. In this section, we discuss existing and proposed requirements for securitization of 7(a) loans, which are intended to help ensure the safety and soundness of the 7(a) program. Before a lender can securitize a pool of unguaranteed portions of SBA 7(a) loans it originated, it must obtain SBA’s written consent. To obtain this consent, the lender must satisfactorily show that it is retaining an economic risk in the unguaranteed portions—such as keeping a certain percentage of the unguaranteed portion or of the securitized pool backed by these portions. This risk-sharing requirement is intended to provide an economic incentive for lenders to maintain prudent lending practices. The lender must also meet other criteria in SBA rules for securitizing these portions. To effect a securitization of unguaranteed portions that converts individual loans into several types of marketable securities, a lender must sell them to a legal entity, known as a special purpose vehicle, which issues securities that represent ownership in these portions. SBA rules require that the lender continue servicing a loan after the pledge or transfer is made. SBA rules also require that the lender, or a custodian agreeable to SBA, hold the loans. 
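As an illustration of such a risk-retention requirement, SBA's final rule (discussed later in this appendix) sets the retained subordinated interest at the greater of twice the securitizer's 10-year loss rate or 2 percent of the unguaranteed principal being securitized. A minimal sketch, with hypothetical figures:

```python
# Sketch of the final rule's subordinated-interest retention test:
# the securitizer retains the greater of (a) two times its loss rate
# on 7(a) loans disbursed over the preceding 10 years or (b) 2 percent
# of the unguaranteed principal securitized. Figures are hypothetical.

def required_retention(ten_year_loss_rate, unguaranteed_principal):
    retained_fraction = max(2 * ten_year_loss_rate, 0.02)
    return retained_fraction * unguaranteed_principal

# A lender with a 1.5% loss rate securitizing $20 million retains 3%
# of the principal, since 2 x 1.5% exceeds the 2% floor.
print(round(required_retention(0.015, 20_000_000)))  # 600000
```

Tying the retained stake to the lender's own historical losses means that lenders with weaker underwriting must keep more skin in the game, which is the economic incentive the rule is designed to create.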
According to officials at Colson Services Corp., the fiscal and transfer agent for SBA that collects payments on guaranteed portions from lenders and distributes the proceeds to investors in guaranteed pools, its role in the secondary market for unguaranteed portions is limited to holding the notes for SBA. As mentioned earlier, SBA initially allowed only its nondepository lenders to securitize their unguaranteed portions. This reflected SBA’s recognition that nondepositories do not have customer deposits to fund their 7(a) lending. However, the Small Business Program Improvement Act of 1996, enacted October 1, 1996, directed SBA to promulgate a final rule that applied uniformly to both depository and nondepository lenders, setting forth the terms and conditions and other safeguards to protect the safety and soundness of the program, or cease permitting the sale of the unguaranteed portion of 7(a) loans after March 31, 1997. After proposing a rule in February 1997, SBA promulgated an interim final rule on April 2, 1997, that extended the program to include depository lenders and set forth some terms and conditions while it continued its review of securitization issues. On February 10, 1999, the agency promulgated a final rule that became effective on April 12, 1999. The rulemaking process included two proposed rules, two public hearings, and an interim rule as the agency took the time to consider views and comments of securitization and accounting experts, representatives of financial regulatory agencies, and industry representatives in drafting a final rule. 
A final rule, promulgated February 10, 1999, and effective April 12, 1999, generally requires that a securitizer (1) have sufficient capital to meet the definition of “well-capitalized” used by bank regulators (for a depository institution), or maintain capital equal to at least 10 percent of its assets, excluding the guaranteed portion of its 7(a) loans and including any remaining balance in its portfolio or in any securitization pool (for a nondepository institution); (2) retain for 6 years a subordinated interest in the securities, the amount of which is the greater of two times the securitizer’s loss rate on its 7(a) loans disbursed over the preceding 10-year period or 2 percent of the principal balance of the unguaranteed portions of the loans in the securitization outstanding at the time of the securitization; and (3) face probation for one quarter, and then suspension from preferred lender status for at least 3 months, if the securitizer’s default rate crosses certain thresholds and fails to improve to SBA’s standards. SBA also will not approve additional securitization requests from that securitizer during the suspension period. This appendix discusses aspects of the guaranteed 7(a) secondary market, including its size and development and how certificates backed by guaranteed portions of 7(a) loans are issued. It also discusses the disclosure requirements that pertain to issuance of the certificates. The guaranteed secondary market was created in 1972, when the first guaranteed portions of individual loans were sold. In 1984, Congress authorized issuance of pool certificates backed by pools of the guaranteed portions. Lenders sell their loans to pool assemblers who form pools by combining the loans of several 7(a) lenders. Overall, about 88 percent of all loans sold in the secondary market in 1997 were pooled loans. SBA prescribes certain characteristics that every pool of 7(a) guaranteed portions must meet. 
Each pool must have at least four loans with a minimum aggregate principal balance of at least $1 million. No single loan can account for more than 25 percent of the pool. Although all loans in a pool need not have the same interest rate, they must be either all fixed or all variable rate loans. If the pool has variable rate loans, all loans must have the same rate adjustment dates. The pool’s interest rate is based on the loan with the lowest net interest rate, and the range of these rates cannot be greater than 2 percent. The maturity date designation for the entire pool is based on the loan with the longest remaining term to maturity. The remaining term to maturity for the shortest loan in the pool must be at least 70 percent of that for the longest. New loans cannot be added to a pool to replace others that prepay or default. In calendar year 1997, 427 variable rate pools and 5 fixed rate pools were formed, averaging 25 and 9 loans per pool, respectively. Pool certificates are issued on each pool in denominations of at least $25,000. Each pool certificate has a unique number, called a Committee on Uniform Securities Identification Procedures (CUSIP) number, for identification purposes. Pool certificates are backed by the full faith and credit of the U.S. government and have a timely payment guarantee from SBA. SBA does not charge for its timely payment guarantee, which ensures that investors will be paid on scheduled dates regardless of whether payments from borrowers were on time. This timely payment guarantee applies only to pooled guaranteed portions of 7(a) loans, and not to individually purchased loans. As with other government guaranteed securities, these securities are exempt from SEC registration and reporting requirements. Pool assemblers acquire the guaranteed portions of SBA 7(a) loans from lenders, create the pools, and issue pool certificates through Colson Services Corp., SBA’s fiscal and transfer agent. SBA must approve all pool assembler applicants. 
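The pool-composition rules above lend themselves to a simple eligibility check. The function and loan records below are our own illustration of those rules; the field names are hypothetical, and the sketch covers only the numeric tests, not the fixed-versus-variable-rate and rate-adjustment-date requirements:

```python
# Hedged sketch of the numeric pool-composition rules described above:
# at least 4 loans, aggregate principal of at least $1 million, no loan
# over 25% of the pool, net rates within a 2-percentage-point band, and
# the shortest remaining maturity at least 70% of the longest.

def pool_is_eligible(loans):
    if len(loans) < 4:
        return False
    total = sum(l["principal"] for l in loans)
    rates = [l["net_rate"] for l in loans]
    terms = [l["months_to_maturity"] for l in loans]
    return (total >= 1_000_000
            and max(l["principal"] for l in loans) <= 0.25 * total
            and max(rates) - min(rates) <= 2.0
            and min(terms) >= 0.70 * max(terms))

pool = [
    {"principal": 250_000, "net_rate": 9.5, "months_to_maturity": 84},
    {"principal": 250_000, "net_rate": 9.0, "months_to_maturity": 72},
    {"principal": 250_000, "net_rate": 10.0, "months_to_maturity": 80},
    {"principal": 250_000, "net_rate": 9.75, "months_to_maturity": 78},
]
print(pool_is_eligible(pool))  # True
```

Dropping any one loan from this example pool would fail both the four-loan minimum and the $1 million aggregate floor.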
SBA criteria require applicants to be in good standing with SBA, any state or federal regulatory bodies that govern their activities, and the National Association of Securities Dealers, if members. They must meet certain net worth requirements and must have the financial capability to assemble acceptable and eligible loans in sufficient quantity to meet the requirements for issuing pool certificates. Federal- or state-chartered banks and savings and loan associations, insurance companies, credit unions, SBLCs, and broker-dealers can all become pool assemblers as long as they meet these requirements. Colson Services Corp., based in New York City, is SBA’s fiscal and transfer agent for secondary market transactions involving both individual 7(a) loans and pooled guaranteed portions. For each transaction, Colson issues a certificate and sets the beginning balance, interest rate, maturity, payment schedule, and issue date once it has determined that the issuer or seller has provided the necessary documents to support the transaction. Colson delivers the certificates to the registered holders (investors or their designee). Colson maintains a registry of registered holders and the current outstanding principal balance of each certificate. Borrowers pay lenders, who take out their fees and other portions of the payments due them and forward the remainder to Colson. Colson then makes principal and interest payments to the registered holders. It also sends statements to registered holders on the status of each pool backing their certificates. When a pooled loan prepays or defaults, Colson forwards each registered holder its pro rata share of the prepayment or SBA’s guaranty purchase. Colson’s fee is one-eighth of 1 percent of the outstanding balance per year for its services, which it collects by retaining a portion of the lender’s payments. 
SBA requires the seller to disclose certain information to the buyer before all initial sales and subsequent sales (transfers) of guaranteed pool certificates and individual loan certificates. The seller must disclose a yield calculation and the prepayment rate assumptions on which the yield calculation is based; the scheduled maturity date; the price to be paid by the buyer, both in dollars and as a percentage of the par or principal loan amount; the dollar amount of premium or discount associated with the sale price; and the interest rate (the base rate and the differential for variable rate loans). The seller must also disclose investment characteristics, such as the fact that (1) SBA guarantees timely payment of principal and interest on pool certificates, but not on individual loan certificates; (2) SBA will purchase the guaranteed portion of individual loans after 60 days of default by the borrower; (3) SBA does not guarantee premiums paid for certificates; and (4) the loan or pool may be prepaid prior to the maturity date. Through its disclosure requirements, SBA seeks to provide investors with an annual constant prepayment rate (CPR) based on the seller’s analysis of the prepayment histories of SBA guaranteed loans with similar maturities and with information on the certificates’ terms, conditions, and yields. Colson provides a summary report of CPRs of pools with similar maturities, which is attached to each guaranteed pooled certificate issued. Investors can compare the CPRs represented by their seller with that reported for similar sales. Colson updates the summary information each month on a rolling 6-month basis. SBA officials believe this system keeps the CPR information current and useful to investors and other market participants. SBA guaranteed certificates are generally marketed to institutional investors, such as pension funds and insurance companies. 
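The CPR disclosures described above give investors an annual prepayment assumption; converting a CPR to the equivalent monthly prepayment fraction (the single monthly mortality, or SMM) uses the standard market convention shown below. This formula is the general fixed-income convention, not an SBA-specific requirement:

```python
# Standard market conversion between an annual constant prepayment
# rate (CPR) and the equivalent single monthly mortality (SMM).

def smm_from_cpr(cpr):
    """Monthly prepayment fraction implied by an annual CPR."""
    return 1 - (1 - cpr) ** (1 / 12)

# At a 12% CPR, roughly 1.06% of remaining principal prepays each month.
print(round(smm_from_cpr(0.12), 4))  # 0.0106
```

An investor comparing a seller's quoted CPR against Colson's summary report for pools with similar maturities can use this conversion to translate the annual rate into the expected month-by-month runoff of principal.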
SBA has authorized Colson to make available to subscribers a data tape containing the payment history of every SBA 7(a) loan sold since 1985, the period during which Colson has functioned as fiscal and transfer agent. To maintain borrower confidentiality, Colson eliminates identifying loan numbers and zip codes and provides a dummy number for each loan. Individuals or organizations must meet certain requirements before they are permitted to act as brokers or dealers in initial sales or transfers of guaranteed certificates on either individual loans or pooled loans. They must be regulated by a state or federal financial regulatory agency or SBA, or be a member of the National Association of Securities Dealers. SBA regulations require lenders to retain responsibility for all loan-servicing activities, including those for loans sold in the secondary market. SBA regulations allow lenders to earn fee income for servicing their small business loan portfolios when the guaranteed portions have been sold in the secondary market. By retaining servicing responsibilities, lenders can also maintain long-term relationships with their customers. A lender services its loans by continuing to collect principal and interest payments from borrowers and managing the collateral. The lender must forward monthly payments from borrowers to Colson along with a complete accounting of the funds. The Securities Act of 1933 (the Securities Act) and Securities Exchange Act of 1934 (the Exchange Act) require securities issuers to disclose information to help investors assess the risks of a particular publicly traded security. The Securities Act specifies registration and disclosure requirements. The required disclosures are contained in a prospectus or offering statement that is distributed in connection with the offer and sale of a security. The Exchange Act requires continuing disclosure after a security is issued. 
Certain publicly traded securities are exempt from the registration and reporting requirements of the Securities Act as well as from the continuing reporting requirements of the Exchange Act. For example, securities issued or guaranteed by the United States, its agencies, and corporate instrumentalities are exempt. Offerings of exempt securities, however, are subject to the antifraud provisions of the federal securities laws, which provide generally that offering materials shall not contain an untrue statement of a material fact in connection with the offer or sale of a security. This exemption includes securities guaranteed by SBA and Ginnie Mae as well as securities issued by most government-sponsored enterprises, such as Fannie Mae and Freddie Mac. The sale of the guaranteed portions of 7(a) loans in the secondary market also is exempt from these provisions in the Securities Act and the Exchange Act because of SBA’s unconditional guarantee. SBA provides investors an unconditional guarantee to pay principal and interest, including interest accrued to the date SBA honors its guarantee, on the guaranteed portion of each 7(a) loan that goes into default. Pooled certificates also contain a timely payment guarantee from SBA to make scheduled payments to investors in the event of default. SBA, Ginnie Mae, Fannie Mae, and Freddie Mac all issue or guarantee exempt securities that are sold in different types of offerings. For example, SBA pooled certificates typically are backed by up to 25 SBA-guaranteed loan portions and marketed by pool assemblers to a small number of institutional investors. Investors in SBA-guaranteed securities receive required disclosures on the terms, conditions, and yield of the pool that they are purchasing. 
For example, with regard to prepayment risk information, the fiscal and transfer agent tracks and provides a summary report of constant prepayment rates (CPR) of pools with similar maturities, which is to be attached to each guaranteed pooled certificate issued. Ginnie Mae guaranteed MBS are backed by larger mortgage pools and sold in public offerings to a large investor market. The offering materials for Ginnie Mae securities include the issuer, the principal amount of loans in the pool, whether the loans backing the pools are fixed- or adjustable-rate, the interest rate, and the maturity date. Fannie Mae and Freddie Mac MBS are typically issued in public offerings in which investors receive offering materials that contain detailed information about the loan pools. According to enterprise officials, the information provided by Fannie Mae and Freddie Mac is similar to that provided by MBS issuers who are not exempt from SEC registration and reporting requirements. With regard to the securitization of the unguaranteed portion of 7(a) loans, an SBA lender may issue a security that is backed by the cash flows from the unguaranteed portions. These securities are not covered by the same exemption as the guaranteed portion securitizations because an SBA guarantee is not present. Accordingly, when these securities are publicly offered and traded, they are subject to SEC registration and reporting requirements. Therefore, issuers are required to comply with registration and reporting requirements in the federal securities laws unless they rely on another exemption from registration, such as the private placement exemption. In a private placement, the issuer can avoid the costly registration and reporting process if the transaction by the issuer does not involve a public offering. Administrative and judicial decisions provide the criteria for determining whether a transaction does not involve a public offering. 
In addition, in order to minimize the uncertainty about the reliance on the private offering exemption, SEC has a safe harbor rule that provides more objective standards. If the rule is properly followed, the issuer is assured the availability of the exemption. Many corporate securities issuers use the private placement market. One of the characteristics of a private placement, however, is that the investor cannot easily resell the security; that is, the security is less liquid than a publicly traded security. Of the nine issuers to date of securities backed by the unguaranteed portion of 7(a) loans, one has issued publicly traded, registered securities, while the other eight have sold their securities through private placements. Rosemary Healy, Senior Attorney
Pursuant to a congressional request, GAO provided information on the secondary markets for 7(a) small business loans guaranteed by the Small Business Administration (SBA), focusing on: (1) the benefits and risks of secondary loan markets to participants; (2) primary benefits and risks to participants in the guaranteed 7(a) secondary market and the unguaranteed 7(a) secondary market; and (3) a comparison of the guaranteed 7(a) secondary market with the secondary market for federally guaranteed residential mortgages, and the unguaranteed 7(a) secondary market with the secondary market for residential mortgages without a federal guarantee. GAO noted that: (1) the proportion of loans that are sold in a secondary market depends on the benefits generated by the secondary market and how the benefits and risks are distributed among market participants; (2) by linking borrowers and lenders to national capital markets, secondary markets benefit lenders, borrowers, and investors; (3) these markets: (a) tap additional sources of funds; (b) reduce dependence on availability of local funds; (c) help to lower interest rates paid by borrowers; and (d) help lenders manage risks; (4) they provide lenders a funding alternative to deposits and, by enhancing market liquidity, they can reduce regional imbalances in loanable funds; (5) secondary loan markets can benefit borrowers by increasing the overall availability of credit in the primary market and by lowering the interest rates borrowers pay on loans; (6) the secondary markets in 7(a) loans provide lenders a funding source that otherwise would not be available; (7) in calendar year 1997, 1,540 SBA lenders sold 12,164 SBA 7(a) loans in the guaranteed secondary market, generating $2.7 billion in sales of guaranteed portions; (8) about $290 million in sales of unguaranteed portions were made that year by a smaller number of lenders; (9) these were generally Small Business Lending Companies, which lack a deposit base, or banks that had 
not developed a sufficient deposit base as a funding source for their loans; (10) lenders participating in these markets can reduce funding costs, and investors in 7(a) pool certificates and securities can get greater liquidity and lower risk than they would from directly investing in individual loans; (11) in the guaranteed 7(a) market, investors face prepayment risk, and in the unguaranteed 7(a) secondary market, investors and lenders share prepayment and credit risk; (12) both 7(a) secondary markets can help lenders make more loans, which could contribute to a concentration of SBA's credit risk among a few lenders that originate a large percentage of 7(a) loans; (13) compared to the secondary markets for 7(a) loans, the secondary markets for residential mortgages operate with greater incentives for lenders to sell the loans they originate; (14) in 1997, about 45 percent of the guaranteed portions of 7(a) loans originated that year were pooled and sold on the secondary market compared to virtually all federally insured single-family residential mortgages; and (15) about 11 percent of the unguaranteed portions of 7(a) loans originated in 1997 were pooled and sold on the secondary market compared to about 32 percent of nonconforming residential mortgages.
Overview of the disaster recovery process. According to the Department of Homeland Security’s National Response Framework, once immediate lifesaving activities are complete after a major disaster, the focus shifts to assisting individuals, households, critical infrastructure, and businesses in meeting basic needs and returning to self-sufficiency. Even as the immediate imperatives for response to an incident are being addressed, the need to begin recovery operations emerges. The emphasis on response gradually gives way to recovery operations. During the recovery phase, actions are taken to help individuals, communities, and the nation return to normal. The National Response Framework characterizes disaster recovery as having two phases: short-term recovery and long-term recovery. Short-term recovery is immediate and an extension of the response phase in which basic services and functions are restored. It includes actions such as providing essential public health and safety services, restoring interrupted utility and other essential services, reestablishing transportation routes, and providing food and shelter for those displaced by the incident. Although called short-term, some of these activities may last for weeks. Long-term recovery may involve some of the same actions as short- term recovery but may continue for a number of months or years, depending on the severity and extent of the damage sustained. It involves restoring both the individual and the community, including the complete redevelopment of damaged areas. Some examples of long- term recovery include providing permanent disaster-resistant housing units to replace those destroyed, initiating a low-interest façade loan program for the portion of the downtown area that sustained damage from the disaster, and initiating a buyout of flood-prone properties and designating them community open space. 
As the President has previously noted, state and local leaders have the primary role in planning for recovery efforts. Under the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), the federal government is authorized to provide assistance to those jurisdictions in carrying out their responsibilities to alleviate suffering and damage that result from disasters. In general, the Stafford Act assigns the federal government a supporting role: it assists state and local governments, which retain primary responsibility for recovery efforts. In major disasters where the event overwhelms the capacity of state and local governments, the federal government can offer more assistance to supplement the efforts and available resources of states, local governments, and disaster relief organizations in alleviating the damage, loss, hardship, or suffering caused by the disaster. After a major disaster, the federal government may provide unemployment assistance; food coupons to low-income households; and repair, restoration, and replacement of certain damaged facilities, among other things. For example, the city of New Orleans estimated this April that the federal government will provide over $15 billion for the rebuilding of the city through numerous disaster assistance programs, including FEMA’s Public Assistance Grant Program and Community Disaster Loan program, and the Department of Housing and Urban Development’s Community Development Block Grants program. Nevertheless, state and local governments have the main responsibility of applying for, receiving, and implementing federal assistance. Further, they make decisions about what priorities and projects the community will undertake for recovery. Impact of Hurricanes Gustav and Ike. Hurricanes Gustav and Ike made landfall in the Gulf Coast this month, resulting in federal major disaster declarations for 95 counties in Texas, Louisiana, and Alabama (see fig. 1). 
Gustav made landfall near Cocodrie, Louisiana, as a category 2 hurricane on September 1, 2008. Ike made landfall as a category 2 hurricane near Galveston, Texas, on September 13, 2008. These hurricanes have caused widespread damage to affected Gulf Coast states. For example, the state of Louisiana has confirmed 10 Gustav-related deaths. Recent press accounts have attributed the deaths of about 50 people in the United States to Hurricane Ike. Further, Hurricanes Gustav and Ike have significantly disrupted utility service as well as oil and natural gas production in the Gulf Coast. Specifically, Gustav caused power outages for over 1.1 million Louisiana and Mississippi customers, while over 2.2 million customers in Texas lost power after Ike made landfall. The hurricanes have also affected oil and natural gas production in the Gulf Coast. Most of the refineries in Gustav’s path were affected, resulting in a 100 percent reduction in crude oil production. Almost all refineries in Ike’s path shut down, reducing crude oil production in the area by 99.9 percent. Over half of the 39 major natural gas processing plants in the affected areas have ceased operations as a result of Hurricanes Gustav and Ike, reducing the total operating capacity of the region by 65 percent. Given the recent landfall of these hurricanes, comprehensive damage assessments from government agencies were not available at the time of this report’s issuance. Impact of the 2008 Midwest Floods. Heavy rainfall across much of the northern half of the Great Plains during early June 2008 resulted in river flooding. This flooding became increasingly severe as heavy rain continued into the second week of June and rising rivers threatened dams and levees and submerged large areas of farmland along with many cities and towns. As a result, the President issued federal major disaster declarations for counties in seven states: Illinois, Indiana, Iowa, Missouri, Minnesota, Nebraska, and Wisconsin (see fig. 2). 
The flooding resulted in widespread damage for some communities in these states. For example, the rivers in Cedar Rapids, Iowa, crested at over 30 feet, flooding 10 square miles of the city and displacing over 18,000 people and several city facilities, including the city hall, police department, and fire station. The flooding also affected agricultural production in these states. For example, the state of Indiana estimates the floods will result in a crop shortfall of $800 million in the coming year and $200 million in damaged farmlands. To identify insights from past disasters, we interviewed officials involved in disaster recovery in the United States and Japan. Domestically, we met with officials from state and local governments affected by the selected disasters, as well as representatives of nongovernmental organizations involved in long-term recovery. In Japan, we met with officials from the government of Japan, Hyogo Prefecture, and the city of Kobe. In addition, we interviewed over 40 experts—both domestic and international—on the subject of disaster recovery. We visited the key communities affected by five of the six disasters in our study to meet officials involved in the recovery effort and examine current conditions. While we did not visit communities affected by the Red River flood, we were able to gather the necessary information through interviews by telephone with key officials involved in the recovery as well as recovery experts knowledgeable about the disaster. Further, we obtained and reviewed legislation, ordinances, policies, and program documents that described steps taken to facilitate long-term recovery following each of our selected disasters. The scope of our work did not include independent evaluation or verification regarding the extent to which the communities’ recovery efforts were successful. We also drew on previous work we have conducted on recovery efforts in the aftermath of the 2005 Gulf Coast hurricanes. 
We have issued findings and recommendations on several aspects of the Gulf Coast recovery, including protecting federal disaster programs from fraud, waste, and abuse; providing tax incentives to assist recovery; and determining the role of the nonprofit sector in providing assistance to that region. See figure 3 for the locations of the six disasters that we selected for this review. We reviewed lessons from past disasters and collected information about the impact of Hurricanes Ike and Gustav and the 2008 Midwest floods from June 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. After a major disaster, a recovery plan can provide state and local governments with a valuable tool to document and communicate recovery goals, decisions, and priorities. Such plans offer communities a roadmap as they begin the process of short- and long-term recovery. The process taken to develop these plans also allows state and local governments to involve the community in identifying recovery goals and priorities. After past disasters, the federal government has both funded and provided technical assistance on how to create such plans. In our review of recovery plans that state and local governments created after major disasters, we identified three key characteristics. Specifically, these plans (1) identified clear goals for recovery, (2) included detailed information to facilitate implementation, and (3) were established in a timely manner. A recovery plan containing clear goals can provide direction and specific objectives for communities to focus on and strive for. 
Clear recovery goals can also help state and local governments prioritize projects, allocate resources, and establish a basis for subsequent evaluations of the recovery progress. After the 1995 Kobe earthquake in Japan, the areas hardest hit by the disaster—Hyogo prefecture and the city of Kobe—identified specific recovery goals to include in their plans. Among these were the goals of rebuilding all damaged housing units in 3 years, removing all temporary housing within 5 years, and completing physical recovery in 10 years. According to later evaluations of Kobe’s recovery conducted by the city and recovery experts, these goals were critical for helping to coordinate the wide range of participants involved in the recovery. In addition, they helped to inform the national government’s subsequent decisions for funding recovery projects in these areas. These goals also allowed the government to communicate its recovery progress to the public. Each month, information on progress made towards achieving these goals was provided to the public on-line and to the media at press conferences. This communication helped to inform the public about the government’s recovery progress on a periodic basis. Further, these goals provided a basis for assessing the recovery progress a few years after the earthquake. Both Hyogo and Kobe convened panels of international and domestic experts on disaster recovery as well as community members to assess the progress made on these targets and other recovery issues. These evaluations enabled policymakers to measure the region’s progress towards recovery, identify needed changes to existing policies, and learn lessons for future disasters. Similar efforts to inform the public about the government’s recovery progress are being taken in the wake of the 2005 Gulf Coast hurricanes. In February 2008, FEMA and the Federal Coordinator of Gulf Coast Recovery launched their Transparency Initiative. 
This web-based information sharing effort provides detailed information about selected buildings and types of projects in the Gulf Coast receiving Public Assistance grants. For example, the web site provides information on whether specific New Orleans schools are open or closed and how much federal funding is available for each school site. To do this, FEMA and Federal Coordinator staff pulled together information from state and local governments as well as data on all Public Assistance grants for permanent infrastructure throughout the Gulf Coast. According to the Office of the Federal Coordinator, the initiative has been useful in providing information on federal funds available and the status of infrastructure projects in a clear and understandable way to the general public and a wide range of stakeholders. With the uncertainty that can exist after major disasters, the inclusion of detailed implementation information in recovery plans can help communities realize their recovery goals. Implementable recovery plans specify objectives and tasks, clarify roles and responsibilities, and identify potential funding sources. Approximately 3 months after the 1997 Red River flood, the city of Grand Forks approved a recovery plan with these characteristics that helped the city take action towards achieving recovery. First, the Grand Forks plan identified 5 broad recovery goals covering areas such as housing and community redevelopment, business redevelopment, and infrastructure rehabilitation. The plan details a number of supporting objectives and tasks to be implemented in order to achieve the stated goals. For example, one of the 5 goals included in the plan was to clean up, repair, and rehabilitate the city’s infrastructure and restore public services to pre-flood conditions. The plan outlined 5 objectives to accomplish that goal, including repairing and rehabilitating the city’s water distribution and treatment facilities. 
Some of the tasks specified in the plan to achieve that objective are repairing pumping stations, fixing water meters, and completing a 2-mile limit drainage master plan. Additionally, the plan identified a target completion date for each task so that the city could better manage the mix of short- and long-term activities necessary to recover. Second, the Grand Forks recovery plan clearly identified roles and responsibilities associated with the specific tasks, which helped with achieving broader recovery goals. To do this, the plan identified which personnel—drawn from city, state, and federal agencies—would be needed to carry out each task. For example, the plan called for collaboration of staff from the city’s urban development and engineering/building inspection departments, FEMA, and the Army Corps of Engineers to create an inventory of substantially damaged buildings in the downtown area. By clarifying the roles and responsibilities of those who would be involved in accomplishing specific tasks, the plan provided detailed information to facilitate implementation. Third, the Grand Forks plan also identified funding sources for each recovery task. For example, to fund the task of cleaning up and repairing street lights (which would help achieve the objective of cleaning, repairing, and rehabilitating the city’s streets), the plan referenced sources from FEMA’s Public Assistance Grant Program, the state of North Dakota, and the city’s general revenue fund. The plan contained a detailed financing matrix, organized by the broader recovery goals identified in the body of the plan, which identified various funding sources for each task (see fig. 4). The matrix also included a target completion date for each task. A city evaluation of the recovery plan found that the process of specifying goals and identifying funding sources allowed the city to conceive and formulate projects in collaboration with the city council and representatives from state and local governments. 
This helped Grand Forks meet its recovery needs as well as adhere to federal and state disaster assistance funding laws and regulations. The recovery plans created by the Hyogo and Kobe governments after the 1995 earthquake also helped to facilitate the funding of recovery projects. They served as the basis of discussions with the national government regarding recovery funding by clearly communicating local goals and needs. Towards this end, Hyogo and Kobe submitted their recovery plans to a centralized recovery organization that included officials from several national agencies, including the Ministry of Finance and the Ministry of Construction. Ministry staff worked with local officials to reconcile the needs identified in their recovery plans with national funding constraints and priorities. As a result of this process, local officials were able to adjust their recovery plans to reflect national budget and funding realities. Some state and local governments completed recovery plans just a few months after a major disaster. These plans helped to facilitate the ensuing recovery process by providing a clear framework early on. The regional governments affected by the Kobe earthquake promptly created recovery plans to help ensure that they could take advantage of central government funding as soon as possible. After the earthquake, there was a relatively short amount of time to submit proposals for the national budget in order to be considered for the coming year. Facing this deadline, local officials devised a two-phase strategy: first, quickly develop a plan identifying broad recovery goals that could serve as the basis for budget requests and meet the national budget deadline. After that initial planning phase, the governments collaborated with residents to develop detailed plans for specific communities. 
In the first phase, Kobe focused on creating a general plan to identify broad recovery goals, such as building quality housing, restoring transportation infrastructure, and building a safer city. This first plan was issued 2 months after the earthquake and contained 1,000 projects with a budget of $90 billion. It was designed to establish the framework for recovery actions and to provide the basis for obtaining central government funds. In the second phase, the city involved residents and local organizations, including businesses and community groups, to develop a more detailed plan for the recovery of specific neighborhoods. Work on this second plan began 6 months after the earthquake. The two-phase planning process enabled Kobe and Hyogo to meet their tight national budget submission deadline while allowing additional time for communities to develop specific recovery strategies. Given the lead role that state and local governments play in disaster recovery, their ability to act effectively directly affects how well communities recover after a major disaster. There are different types of capacity that can be enhanced to facilitate the recovery process. One such capacity is the ability of state and local governments to make use of various kinds of disaster assistance. The federal government—along with other recovery stakeholders, such as nongovernmental organizations—plays a key supporting role by providing financial assistance through a range of programs to help affected jurisdictions recover after a major disaster. However, state and local governments may need certain capacities to effectively make use of this federal assistance, including having financial resources and technical know-how. More specifically, state and local governments are often required to match a portion of the federal disaster assistance they receive. 
Further, affected jurisdictions may also need additional technical assistance on how to correctly and effectively process applications and maintain required paperwork. Following Hurricanes Ike and Gustav and the Midwest floods earlier this year, building up these capacities may improve affected jurisdictions’ ability to navigate federal disaster programs. After a major disaster, state and local governments may not have adequate financial capacity to perform many short- and long-term recovery activities, such as continuing government operations and paying for rebuilding projects. The widespread destruction caused by major disasters can impose significant unbudgeted expenses while at the same time decimating the local tax base. Further, federal disaster programs often require state and local governments to match a portion of the assistance they receive, which may pose an additional financial burden. In the past, affected jurisdictions have used loans from a variety of sources, including federal and state governments, to enhance their local financial capacity. For example, the Stafford Act authorizes FEMA to administer the Community Disaster Loan program, which can be used by local governments to provide essential postdisaster services. Additionally, affected localities have used special taxes to build local financial capacity after major disasters. Providing a loan to local governments is one way to build financial capacity after a disaster. Soon after the 1997 Red River flood, the state-owned Bank of North Dakota provided a line of credit totaling over $44 million to the city of Grand Forks. The city used this loan to meet FEMA matching requirements, provide cash flow for the city government to meet operating expenses, and fund recovery projects that commenced before the arrival of federal assistance. The city of New Orleans also sought state loans to help build financial capacity in the aftermath of the 2005 Gulf Coast hurricanes. 
The city is working with Louisiana to develop a construction fund to facilitate recovery projects. The fund would enable New Orleans to have more access to money to fund projects upfront and reduce the level of debt that the city would otherwise incur. Another way to augment local financial capacity is to raise revenue through temporary taxes that local governments can target according to their recovery needs. After the 1989 Loma Prieta earthquake, voters in Santa Cruz County took steps to provide additional financial capacity to affected localities. The county implemented a tax increment, called “Measure E,” about 1 year after the disaster, which increased the county sales tax by ½ cent for 6 years. The proceeds were targeted to damaged areas within the county based on an allocation approved by voters. Measure E generated approximately $12 million for the city of Santa Cruz, $15 million for the city of Watsonville, and $17 million for unincorporated areas of Santa Cruz County. According to officials from Watsonville and Santa Cruz, Measure E provided a critical source of extra funding for affected Santa Cruz County localities. For example, officials from Watsonville (whose general fund annual budget was about $17 million prior to the earthquake) used proceeds from Measure E to meet matching requirements for FEMA’s Public Assistance Grant Program. These officials also used Measure E to offset economic losses from the earthquake, as well as provide financing for various recovery projects, such as creating programs to repair damaged homes and hiring consultants that helped the community plan for long-term recovery. While raising local sales taxes may not be a feasible option for all communities, Santa Cruz officials recognized the willingness of county voters to support this strategy. 
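The revenue mechanics of a temporary sales tax increment such as Measure E are straightforward to sketch. The annual taxable-sales base below is a hypothetical assumption, chosen only to show how a ½-cent increment over 6 years can accumulate proceeds on the order of the roughly $44 million total that Measure E generated countywide.

```python
# Sketch of proceeds from a temporary sales tax increment like
# Measure E: a 1/2-cent (0.005) increase in the sales tax rate,
# levied for 6 years. The taxable-sales base is a hypothetical
# assumption, not a figure from this report.

INCREMENT_RATE = 0.005  # half a cent per dollar of taxable sales
DURATION_YEARS = 6


def increment_proceeds(annual_taxable_sales: float) -> float:
    """Total revenue raised over the life of the increment."""
    return annual_taxable_sales * INCREMENT_RATE * DURATION_YEARS


# An assumed countywide taxable-sales base of $1.5 billion per year
# would yield about $45 million over the 6-year life of the measure.
print(increment_proceeds(1_500_000_000.0))
```

The actual allocation of Measure E proceeds across jurisdictions was set by voters, not by this formula; the sketch only shows how the rate and duration determine the total pool of funds.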
Similarly, state and local governments in the Gulf Coast and Midwest states can look to develop strategies for increasing financial capacity in ways that are both practical and appropriate for their communities. State and local governments face the challenge of implementing the wide range of federal programs that provide assistance for recovery from major disasters. Some of these federal programs require a certain amount of technical know-how to navigate. For example, FEMA’s Public Assistance Grant Program has complicated paperwork requirements and multistage application processes that can place considerable demands on applicants. After the 2005 Gulf Coast hurricanes, FEMA and Mississippi state officials used federal funding to obtain an on-line accounting system that tracked and facilitated the sharing of operational documents, thereby reducing the burden on applicants of meeting Public Assistance Grant Program requirements. According to state and local officials, the state contracted with an accounting firm that worked hand-in-hand with applicants to regularly scan and transmit documentation on architectural and engineering estimates, contractor receipts, and related materials through this Web-based system. As a result, FEMA and the state had immediate access to key documents that helped them to make project approval decisions. Further, local officials reported that this information-sharing tool, along with contractor staff from an accounting firm, helped to relieve the documentation and resulting human capital burdens that state and local applicants to the Public Assistance Grant Program faced during project development. Business recovery is a key element of a community’s recovery after a major disaster. Small businesses are especially vulnerable to these events because they often lack resources to sustain physical losses and have little ability to adjust to market changes. Widespread failure of individual businesses may hinder a community’s recovery. 
Federal, state, and local governments have developed strategies to facilitate business recovery, including several targeted at small businesses. These strategies helped businesses adapt to postdisaster market conditions, helped reduce business relocation, and allowed businesses to borrow funds at lower interest rates than would have been otherwise available. Major disasters can change communities in ways that require businesses to adapt. For example, following Hurricane Andrew, large numbers of people left south Miami-Dade County. The closing of Homestead Air Force Base, which was permanently evacuated just hours before the hurricane struck, reduced the population of the area significantly. Moreover, the base closure removed families and individuals with reliable incomes and spending power. Following the departure of Air Force personnel and dependents, winter residents and retired people also left in great numbers, never to return. Today, the city of Homestead is an entirely different place as community demographics have changed dramatically. Businesses that did not adapt to this new reality did not survive. The extent to which business owners can recognize change and adapt to the postdisaster market for goods and services can help those firms attain long-term viability after a disaster. Recognizing this after the Northridge earthquake, Los Angeles officials assisted neighborhood businesses in adapting to short- and long-term changes, using a combination of federal, state, and local funds. The Northridge earthquake caused uneven damage throughout the Los Angeles area, leaving some neighborhoods largely intact while creating pockets of damaged, abandoned buildings. Businesses in these areas suffered physical damage and the loss of customers when area residents abandoned their homes. 
The Valley Economic Development Center (VEDC), a local non-profit, established an outreach and counseling program to provide direct technical assistance to affected businesses throughout the San Fernando Valley after the Northridge earthquake. With funding from the city of Los Angeles, the state of California, and the Small Business Administration, VEDC provided guidance on obtaining federal and local governmental financial assistance, as well as strategies for adjusting to changes in the business environment. Toward this end, VEDC staff went door-to-door in affected business districts, served as a clearinghouse for information on earthquake recovery, sponsored workshops, reached out to business owners, and collected detailed information about businesses. VEDC also hosted conferences that taught business owners how to strategically market goods and services given the changed demographics. Speakers at these conferences provided information about the economic and social impact of the earthquake. VEDC estimates that over 6,000 businesses were served by these efforts. Additionally, VEDC found that these services helped save almost 8,000 jobs in the San Fernando Valley. Continuing programs provided counseling and assistance with applying for financial assistance to hundreds of businesses for more than 5 years after the earthquake. The potential value of this type of technical assistance is illustrated by an example of a Northridge business that did not receive it. A well-established fish market outside of the San Fernando Valley reopened after the earthquake with the intention of resuming its formerly successful business of selling the same inventory that it sold before the disaster. However, as a result of the earthquake, the area’s customer base had changed significantly and the new population did not purchase the market’s merchandise. 
Despite spending his life savings to restore the business, the owner suffered considerable losses and eventually was forced to close the fish market after the lease expired. Since major disasters can bring significant change to business environments, communities may look for ways to help retain some existing businesses because widespread relocation can hinder recovery. In an effort to minimize relocations after the Red River flood, the city of Grand Forks created incentives to encourage businesses to remain in the community using funds from the Department of Housing and Urban Development’s Community Development Block Grant program and the Department of Commerce’s Economic Development Administration. Grand Forks developed a program that provided $1.75 million in loans to assist businesses that suffered physical damage in the flood. This program offered 15-year loans with no interest or payments required for the first 5 years of the loan. In addition, businesses which continued to operate within the city at the end of 3 years had 40 percent of the loan’s principal forgiven. A Grand Forks official said that over 70 percent of the businesses that received the loan stayed in the community for at least 3 years. This official also estimated that over 40 percent of the businesses would have closed without the loan program. The city of Santa Cruz also took steps to minimize the relocation of businesses from its downtown shopping district, which also helped to maintain a customer base for the community. Within weeks of the Loma Prieta earthquake, the city worked together with community groups to construct seven large aluminum and fabric pavilions where local businesses that suffered physical damage temporarily relocated. These pavilions, located in parking areas 1 block behind the main commercial area, were leased to businesses displaced by the earthquake. 
Over 40 retail stores, including bookstores, cafes, and hardware stores, operated out of the pavilions for up to 3 years while storefronts were rebuilt (see fig. 5). City officials stated that these pavilions helped to mitigate the impact of the earthquake on small businesses by enabling them to continue operations and thereby maintain their customer base. In contrast, officials near Santa Cruz in the city of Watsonville did not create such temporary locations after the Loma Prieta earthquake, and as a result, businesses moved out of the downtown area to a newly completed shopping center on the outskirts of the city. With the relocation of these businesses, some consumers stopped shopping in remaining stores in the downtown area. A senior Watsonville official told us that these business relocations continue to hamper the recovery of the downtown district almost two decades after the earthquake. The federal government has used tax incentives to stimulate business recovery after major disasters. These incentives provide businesses with financial resources for recovery that may otherwise not be available. Certain tax incentives are open-ended, meaning that any individual or business that meets specified federal requirements may claim the tax incentives. States allocate other tax incentives to selected businesses, projects, or local governments and ensure allocations do not exceed limits set for each state. For those tax incentives where the states have primary allocation responsibility, an opportunity exists for states to allocate the incentives in a manner consistent with their communities’ recovery goals. Midwest and other states may find value in considering the experiences of communities recovering from past disasters when developing their own approach to allocating these incentives. The Congress created tax incentives after the 2005 hurricanes through the Gulf Opportunity Zone Act of 2005 (GO Zone Act) in part to promote business recovery. 
Following those hurricanes, affected state governments were responsible for allocating four tax incentives, including a $14.9 billion tax-exempt private activity bond authority to assist business recovery. These bonds allowed businesses to borrow funds at lower interest rates than would have otherwise been available because investors purchasing the bonds are not required to pay taxes on the interest they earn on the bonds. The Gulf Coast states exercising this authority are using the tax-exempt private activity bonds for a wide range of purposes to support different businesses, including manufacturing facilities, utilities, medical offices, mortgage companies, hotels, and retail facilities. Under the GO Zone Act, authorized states have established processes and selected which projects were to receive these bond allocations up to each state’s allocation authority limit. These states generally used a first-come, first-served basis for allocating the rights to issue tax-exempt private activity bonds under the GO Zone Act and did not consistently target the bond authority to assist recovery in the most damaged areas at the beginning of the program. Officials in Louisiana and Mississippi involved in allocating this authority acknowledged that the first-come, first-served approach made it difficult for applicants in some of the most damaged areas to make use of the bond provision immediately following the 2005 hurricanes. Counties and parishes in the most damaged coastal areas of Louisiana and Mississippi faced challenges dealing with the immediate aftermath of the hurricanes and could not focus on applying for this authority. Louisiana recently set aside a portion of its remaining allocation authority for the most damaged parishes. This July, legislation was introduced in Congress modeled after the GO Zone Act, which, among other tax incentives, would provide private activity bond allocation authority to certain Midwest states to help the victims of this year’s floods. 
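The interest-rate advantage of tax-exempt bonds described above follows from simple after-tax arithmetic: an investor compares a tax-exempt coupon against a taxable coupon net of taxes. The sketch below uses a hypothetical rate and a hypothetical marginal tax bracket, not figures from the GO Zone program.

```python
def taxable_equivalent_yield(tax_exempt_rate: float,
                             marginal_tax_rate: float) -> float:
    """Pretax yield a taxable bond must offer so that its after-tax
    return matches a tax-exempt bond's yield."""
    return tax_exempt_rate / (1 - marginal_tax_rate)

# Hypothetical: a 4.5% tax-exempt bond is worth the same to an
# investor in a 30% bracket as a taxable bond yielding about 6.43%,
# which is why issuers of tax-exempt bonds can offer lower coupons
# and borrowers can obtain cheaper financing.
print(round(taxable_equivalent_yield(0.045, 0.30), 4))  # 0.0643
```

The gap between the two yields is, in effect, the federal subsidy that flows to the borrowing business through the bond allocation.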
Under the proposed legislation, similar to the GO Zone Act, affected states would also have the authority to allocate additional low-income housing tax credits for rental housing and issue tax credit bonds for temporary debt relief, among other provisions. The Gulf Coast states’ first-come, first-served allocation process meant, according to some officials we interviewed, that some projects that would have been viable without tax-exempt private activity bond financing received tax-exempt private activity bond allocations. Such allocations may not have fully supported the long-term recovery goals of that region. This may be particularly relevant to Midwest states given that the proposed legislation contains provisions related to tax-exempt private activity bonds similar to those authorized by the GO Zone Act of 2005. The influx of federal financial assistance available to victims after a major disaster provides increased opportunities for fraud, waste, and abuse. Disaster victims are at risk, as well as the public funds supporting government disaster programs. Specifically, many disaster victims hire contractors to repair or rebuild their homes using financial assistance from the government. Residents are potential targets for fraud by unscrupulous contractors. In addition, government programs are also vulnerable: the need to quickly provide assistance to disaster victims puts assistance programs at risk of fraudulent applicants trying to obtain benefits that they are not entitled to receive. We identified two actions that state and local governments can take after major disasters to combat fraud, waste, and abuse. Communities are often faced with the problem of contractor fraud after major disasters as large numbers of residents look to hire private firms to repair or rebuild their homes and businesses. 
For example, after Hurricane Andrew in 1992, over 7,000 homeowners filed formal complaints of contractor fraud with Miami-Dade County’s Construction Fraud Task Force from August 1993 through March 1995. An official from the Miami-Dade Office of the State Attorney reported that the office had successfully prosecuted more than 300 felony cases and over 290 misdemeanor cases, resulting in the restitution of more than $2.6 million to homeowners by October 1996. Other complaints that were not criminal in nature resulted in substantial administrative fines and additional restitution. More recently, FEMA and Midwest states anticipate that fraud will also be a concern after this year’s floods and have issued warnings to residents about the need to be vigilant for potentially fraudulent contractors. To help address this issue, FEMA has issued tips and guidelines to the public about hiring contractors. To help protect its residents from contractor fraud after the Red River flood, the city of Grand Forks established a required credentialing program for contractors. This included a “one-stop shop” that served as a mandatory clearinghouse for any contractor who wanted to do business with recovering residents. The clearinghouse was staffed by representatives from a range of city and state offices, including the North Dakota Secretary of State, the North Dakota Attorney General, the North Dakota Workers Compensation Bureau, the North Dakota Bureau of Criminal Investigations, and the Grand Forks Department of Administration and Licensing. These staff carried out a variety of functions, including checking that contractors had appropriate licenses, insurance, and no criminal records, in addition to collecting application fees and filing bonding information. After passing these checks and completing all the required applications, contractors were issued photo identification cards, which they were required to carry at all times while working within the city limits. 
To inform its citizens about this program, Grand Forks officials conducted press briefings urging residents to check for these photo identifications and to hire only credentialed contractors. In about 2 months, the city issued approximately 500 new contractor licenses and 2,000 contractor identification cards through the one-stop shop. During that same period, officials arrested more than 20 individuals who had outstanding warrants. City and state officials credited this approach with playing a key role in limiting contractor fraud in Grand Forks during the recovery from the Red River flood. In the wake of this year’s flooding, the city of Cedar Rapids, Iowa, has created a similar contractor credentialing program modeled after Grand Forks’ One-Stop-Shop program, in an effort to minimize instances of contractor fraud. Cedar Rapids’ program requires contractors to visit a local mall where representatives from the police department and the community development and code enforcement divisions are assembled. There, city officials check contractors’ licenses and insurance policies and conduct criminal background checks. Similar to Grand Forks’ program, contractors who pass these checks are issued photo identification cards. Those who do not obtain identification before working in the area can incur a fine of $100 or face up to 30 days of jail time. As of August 2008, over 900 local and out-of-town contracting companies and 6,200 individual contractors have been credentialed through this program. Twelve people have been arrested as a result of outstanding warrants that were identified through criminal background checks. Our prior work on FEMA’s Individuals and Households Program payments and the Department of Homeland Security’s purchase card program shows that fraud, waste, and abuse related to disaster assistance in the wake of the 2005 Gulf Coast hurricanes were significant. 
We have previously estimated improper and potentially fraudulent payments related to the Individuals and Households Program application process to be approximately $1 billion of the first $6 billion provided. In addition, FEMA provided nearly $20 million in duplicate payments to individuals who registered and received assistance twice by using the same Social Security number and address. Similarly, the Hurricane Katrina Fraud Task Force—composed of the Department of Justice’s Criminal Division and Offices of the United States Attorneys; several other federal agencies, including the Federal Bureau of Investigation, Secret Service, and Securities and Exchange Commission; and various representatives of state and local law enforcement—has collaborated to prosecute instances of fraud related to the hurricane. According to the Office of the Federal Coordinator of Gulf Coast Recovery, the efforts of the task force have resulted in over 890 fraud-related indictments to date. Because of the role state governments play in distributing and allocating this federal assistance, these known vulnerabilities call for states to establish effective controls to minimize opportunities for individuals to defraud the government. With the need to provide assistance quickly and expedite purchases, programs without effective fraud prevention controls can end up losing millions or potentially billions of dollars to fraud, waste, and abuse. We have previously testified on the need for fraud prevention controls, fraud detection, monitoring adherence to controls throughout the entire program life, collection of improper payments, and aggressive prosecution of individuals committing fraud. These controls are crucial whether dealing with programs to provide housing and other needs assistance or other recovery efforts. 
By creating such a fraud protection framework—especially the adoption of fraud prevention controls—government programs should not have to make a choice between the speedy delivery of disaster recovery assistance and effective fraud protection. While receiving millions of dollars in federal assistance, state and local governments bear the main responsibility for helping communities cope with the destruction left in the wake of major disasters. Now that the wind and storm surge from Hurricanes Ike and Gustav have passed and the Midwest flood waters have subsided, state and local governments face a myriad of decisions regarding the short- and long-term recovery of their communities. We have seen that actions taken shortly after a major disaster and during the early stages of the recovery process can have a significant impact on the success of a community’s long-term recovery. Accordingly, this is a critical time for communities affected by these major disasters. Insights drawn from state and local governments that have experienced previous major disasters may provide a valuable opportunity for officials to anticipate challenges and adopt appropriate strategies and approaches early on in the recovery process. There is no one right way for how state and local governments should manage recovery from a major disaster, nor is there a recipe of techniques that fits all situations. While many of the practices we describe in this report were tailored to the specific needs and conditions of a particular disaster, taken together, they can provide state and local officials with a set of tools and approaches to consider as they move forward in the process of recovering from major disasters. We provided a draft of this report to the Federal Coordinator of Gulf Coast Recovery in the Department of Homeland Security. 
In addition, we provided drafts of the relevant sections of this report to officials involved in the particular practices we describe, as well as experts in disaster recovery. They generally agreed with the contents of this report. We have incorporated their technical comments as appropriate. We are sending copies of this report to other interested congressional committees, the Secretary of Homeland Security, the FEMA Administrator, and state and local officials affected by Hurricanes Ike and Gustav as well as the Midwest floods. We will make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6806 or czerwinskis@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Gulf Coast Rebuilding: Observations on Federal Financial Implications. GAO-07-1079T. Washington, D.C.: August 2, 2007. Preliminary Information on Rebuilding Efforts in the Gulf Coast. GAO-07-809R. Washington, D.C.: June 29, 2007. Gulf Coast Rebuilding: Preliminary Observations on Progress to Date and Challenges for the Future. GAO-07-574T. Washington, D.C.: April 12, 2007. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Hurricane Katrina: Providing Oversight of the Nation’s Preparedness, Response, and Recovery Activities. GAO-05-1053T. Washington, D.C.: September 28, 2005. 
Department of Agriculture, Farm Service Agency: 2005 Section 32 Hurricane Disaster Programs; 2006 Livestock Assistance Grant Program. GAO-07-715R. Washington, D.C.: April 16, 2007. Department of Agriculture, Commodity Credit Corporation: 2006 Emergency Agricultural Disaster Assistance Programs. GAO-07-511R. Washington, D.C.: April 16, 2007. Small Business Contracting: Observations from Reviews of Contracting and Advocacy Activities of Federal Agencies. GAO-07-1255T. Washington, D.C.: September 26, 2007. Hurricane Katrina: Agency Contracting Data Should Be More Complete Regarding Subcontracting Opportunities for Small Business. GAO-07-698T. Washington, D.C.: April 12, 2007. Hurricane Katrina: Agency Contracting Data Should Be More Complete Regarding Subcontracting Opportunities for Small Businesses. GAO-07-205. Washington, D.C.: March 1, 2007. Hurricane Katrina: Improving Federal Contracting Practices in Disaster Recovery Operations. GAO-06-714T. Washington, D.C.: May 4, 2006. Hurricane Katrina: Army Corps of Engineers Contract for Mississippi Classrooms. GAO-06-454. Washington, D.C.: May 1, 2006. Hurricane Katrina: Planning for and Management of Federal Disaster Recovery Contracts. GAO-06-622T. Washington, D.C.: April 10, 2006. Hurricanes Katrina and Rita: Preliminary Observations on Contracting for Response and Recovery Efforts. GAO-06-246T. Washington, D.C.: November 8, 2005. Hurricanes Katrina and Rita: Contracting for Response and Recovery Efforts. GAO-06-235T. Washington, D.C.: November 2, 2005. Hurricane Katrina: Ineffective FEMA Oversight of Housing Maintenance Contracts in Mississippi Resulted in Millions of Dollars of Waste and Potential Fraud. GAO-08-106. Washington, D.C.: November 16, 2007. Hurricanes Katrina and Rita Disaster Relief: Continued Findings of Fraud, Waste, and Abuse. GAO-07-300. Washington, D.C.: March 15, 2007. Hurricanes Katrina and Rita Disaster Relief: Prevention Is the Key to Minimizing Fraud, Waste, and Abuse in Recovery Efforts. 
GAO-07-418T. Washington, D.C.: January 29, 2007. Response to a post hearing question related to GAO’s December 6, 2006 testimony on continued findings of fraud, waste, and abuse associated with Hurricanes Katrina and Rita relief efforts. GAO-07-363R. Washington, D.C.: January 12, 2007. Hurricanes Katrina and Rita Disaster Relief: Continued Findings of Fraud, Waste, and Abuse. GAO-07-252T. Washington, D.C.: December 6, 2006. Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-1117. Washington, D.C.: September 28, 2006. Hurricanes Katrina and Rita: Unprecedented Challenges Exposed the Individuals and Households Program to Fraud and Abuse; Actions Needed to Reduce Such Problems in Future. GAO-06-1013. Washington, D.C.: September 27, 2006. Disaster Relief: Governmentwide Framework Needed to Collect and Consolidate Information to Report on Billions in Federal Funding for the 2005 Gulf Coast Hurricanes. GAO-06-834. Washington, D.C.: September 5, 2006. Individual Disaster Assistance Programs: Framework for Fraud Prevention, Detection, and Prosecution. GAO-06-954T. Washington, D.C.: July 12, 2006. Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-655. Washington, D.C.: June 16, 2006. Hurricanes Katrina and Rita Disaster Relief: Improper and Potentially Fraudulent Individual Assistance Payments Estimated to Be Between $600 Million and $1.4 Billion. GAO-06-844T. Washington, D.C.: June 14, 2006. Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-403T. Washington, D.C.: February 13, 2006. Disaster Housing: Implementation of FEMA’s Alternative Housing Pilot Program Provides Lessons for Improving Future Competitions. GAO-07-1143R. Washington, D.C.: August 31, 2007. 
Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. Washington, D.C.: February 28, 2007. Hurricane Katrina: Continuing Debris Removal and Disposal Issues. GAO-08-985R. Washington, D.C.: August 25, 2008. Hurricane Katrina: Trends in the Operating Results of Five Hospitals in New Orleans before and after Hurricane Katrina. GAO-08-681R. Washington, D.C.: July 17, 2008. Hurricane Katrina: EPA’s Current and Future Environmental Protection Efforts Could Be Enhanced by Addressing Issues and Challenges Faced on the Gulf Coast. GAO-07-651. Washington, D.C.: June 25, 2007. Hurricane Katrina: Allocation and Use of $2 Billion for Medicaid and Other Health Care Needs. GAO-07-67. Washington, D.C.: February 28, 2007. Hurricanes Katrina and Rita: Federal Actions Could Enhance Preparedness of Certain State-Administered Federal Support Programs. GAO-07-219. Washington, D.C.: February 7, 2007. Hurricane Katrina: Status of Hospital Inpatient and Emergency Departments in the Greater New Orleans Area. GAO-06-1003. Washington, D.C.: September 29, 2006. Child Welfare: Federal Action Needed to Ensure States Have Plans to Safeguard Children in the Child Welfare System Displaced by Disasters. GAO-06-944. Washington, D.C.: July 28, 2006. Lessons Learned for Protecting and Educating Children after the Gulf Coast Hurricanes. GAO-06-680R. Washington, D.C.: May 11, 2006. Hurricane Katrina: Status of the Health Care System in New Orleans and Difficult Decisions Related to Efforts to Rebuild It Approximately 6 Months After Hurricane Katrina. GAO-06-576R. Washington, D.C.: March 28, 2006. Army Corps of Engineers: Known Performance Issues with New Orleans Drainage Canal Pumps Have Been Addressed, but Guidance on Future Contracts Is Needed. GAO-08-288. Washington, D.C.: December 31, 2007. U.S. Army Corps of Engineers’ Procurement of Pumping Systems for the New Orleans Drainage Canals. GAO-07-908R. Washington, D.C.: May 23, 2007. 
Hurricane Katrina: Strategic Planning Needed to Guide Future Enhancements Beyond Interim Levee Repairs. GAO-06-934. Washington, D.C.: September 6, 2006. Small Business Administration: Response to the Gulf Coast Hurricanes Highlights Need for Enhanced Disaster Preparedness. GAO-07-484T. Washington, D.C.: February 14, 2007. Small Business Administration: Additional Steps Needed to Enhance Agency Preparedness for Future Disasters. GAO-07-114. Washington, D.C.: February 14, 2007. Small Business Administration: Actions Needed to Provide More Timely Disaster Assistance. GAO-06-860. Washington, D.C.: July 28, 2006. Gulf Opportunity Zone: States Are Allocating Federal Tax Incentives to Finance Low-Income Housing and a Wide Range of Private Facilities. GAO-08-913. Washington, D.C.: July 16, 2008. Tax Compliance: Some Hurricanes Katrina and Rita Disaster Assistance Recipients Have Unpaid Federal Taxes. GAO-08-101R. Washington, D.C.: November 16, 2007. Disaster Assistance: Guidance Needed for FEMA’s ‘Fast Track’ Housing Assistance Process. RCED-98-1. Washington, D.C.: October 17, 1997. Disaster Assistance: Improvements Needed in Determining Eligibility for Public Assistance. RCED-96-113. Washington, D.C.: May 23, 1996. Emergency Relief: Status of the Replacement of the Cypress Viaduct. RCED-96-136. Washington, D.C.: May 6, 1996. Disaster Assistance: Information on Expenditures and Proposals to Improve Effectiveness and Reduce Future Costs. T-RCED-95-140. Washington, D.C.: March 16, 1995. GAO Work on Disaster Assistance. RCED-94-293R. Washington, D.C.: August 31, 1994. Los Angeles Earthquake: Opinions of Officials on Federal Impediments to Rebuilding. RCED-94-193. Washington, D.C.: June 17, 1994. Earthquake Recovery: Staffing and Other Improvements Made Following Loma Prieta Earthquake. RCED-92-141. Washington, D.C.: July 30, 1992. Transportation Infrastructure: The Nation’s Highway Bridges Remain at Risk From Earthquakes. RCED-92-59. Washington, D.C.: January 23, 1992. 
Loma Prieta Earthquake: Collapse of the Bay Bridge and the Cypress Viaduct. RCED-90-177. Washington, D.C.: June 19, 1990. Disaster Assistance: Program Changes Expedited Delivery of Individual and Family Grants. RCED-89-73. Washington, D.C.: April 4, 1989. National Flood Insurance Program: Financial Challenges Underscore Need for Improved Oversight of Mitigation Programs and Key Contracts. GAO-08-437. Washington, D.C.: June 16, 2008. Natural Catastrophe Insurance: Analysis of a Proposed Combined Federal Flood and Wind Insurance Program. GAO-08-504. Washington, D.C.: April 25, 2008. National Flood Insurance Program: Greater Transparency and Oversight of Wind and Flood Damage Determinations Are Needed. GAO-08-28. Washington, D.C.: December 28, 2007. Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance. GAO-08-7. Washington, D.C.: November 26, 2007. Federal Emergency Management Agency: Ongoing Challenges Facing the National Flood Insurance Program. GAO-08-118T. Washington, D.C.: October 2, 2007. National Flood Insurance Program: FEMA’s Management and Oversight of Payments for Insurance Company Services Should Be Improved. GAO-07-1078. Washington, D.C.: September 5, 2007. National Flood Insurance Program: Preliminary Views on FEMA’s Ability to Ensure Accurate Payments on Hurricane-Damaged Properties. GAO-07-991T. Washington, D.C.: June 12, 2007. National Flood Insurance Program: New Processes Aided Hurricane Katrina Claims Handling, but FEMA’s Oversight Should Be Improved. GAO-07-169. Washington, D.C.: December 15, 2006. Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006. Federal Emergency Management Agency: Challenges Facing the National Flood Insurance Program. GAO-06-174T. Washington, D.C.: October 18, 2005. Catastrophe Risk: U.S. and European Approaches to Insure Natural Catastrophe and Terrorism Risks. GAO-05-199. 
Washington, D.C.: February 28, 2005. Catastrophe Insurance Risks: The Role of Risk-Linked Securities. GAO-03-195T. Washington, D.C.: October 8, 2002. Catastrophe Insurance Risks: The Role of Risk-Linked Securities and Factors Affecting Their Use. GAO-02-941. Washington, D.C.: September 24, 2002. Federal Disaster Insurance. GGD-95-20R. Washington, D.C.: November 7, 1994. Federal Disaster Insurance: Goals Are Good, But Insurance Programs Would Expose the Federal Government to Large Potential Losses. T-GGD-94-153. Washington, D.C.: May 26, 1994. In addition to the individual named above, Peter Del Toro, Assistant Director; Patrick Breiding; Michael Brostek; Keya Chateauneuf; Thomas Gilbert; Shirley Hwang; Gregory Kutz; Donna Miller; John Mingus; MaryLynn Sergent; and Diana Zinkl made key contributions to this report.
This month, Hurricanes Ike and Gustav struck the Gulf Coast, producing widespread damage and leading to federal major disaster declarations. Earlier this year, heavy flooding resulted in similar declarations in seven Midwest states. In response, federal agencies have provided millions of dollars in assistance to help with short- and long-term recovery. State and local governments bear the primary responsibility for recovery and have a great stake in its success. Experiences from past disasters may help them better prepare for the challenges of managing and implementing the complexities of disaster recovery. GAO was asked to identify insights from past disasters and share them with state and local officials undertaking recovery activities. GAO reviewed six past disasters: the Loma Prieta earthquake in northern California (1989), Hurricane Andrew in south Florida (1992), the Northridge earthquake in Los Angeles, California (1994), the Kobe earthquake in Japan (1995), the Grand Forks/Red River flood in North Dakota and Minnesota (1997), and Hurricanes Katrina and Rita in the Gulf Coast (2005). GAO interviewed officials involved in the recovery from these disasters and experts on disaster recovery. GAO also reviewed relevant legislation, policies, and its previous work.
While the federal government provides significant financial assistance after major disasters, state and local governments play the lead role in disaster recovery. As affected jurisdictions recover from the recent hurricanes and floods, experiences from past disasters can provide insights into potential good practices. Drawing on experiences from six major disasters that occurred from 1989 to 2005, GAO identified the following selected insights:
(1) Create a clear, implementable, and timely recovery plan. Effective recovery plans provide a road map for recovery. For example, within 6 months of the 1995 earthquake in Japan, the city of Kobe created a recovery plan that identified detailed goals that facilitated coordination among recovery stakeholders. The plan also helped Kobe prioritize and fund recovery projects, in addition to establishing a basis for subsequent governmental evaluations of the recovery's progress.
(2) Build state and local capacity for recovery. State and local governments need certain capacities to make effective use of federal assistance, including sufficient financial resources and technical know-how. State and local governments are often required to match a portion of the federal disaster assistance they receive. Loans provided one way for localities to enhance their financial capacity. For example, after the Red River flood, the state-owned Bank of North Dakota extended the city of Grand Forks a $44 million loan, which the city used to match funding from federal disaster programs and begin recovery projects.
(3) Implement strategies for business recovery. Business recovery is a key element of a community's recovery. Small businesses can be especially vulnerable to major disasters because they often lack resources to sustain financial losses. Federal, state, and local governments developed strategies to help businesses remain in the community, adapt to changed market conditions, and borrow funds at lower interest rates. For example, after the Loma Prieta earthquake, the city of Santa Cruz erected large pavilions near the main shopping street. These structures enabled more than 40 local businesses to operate while their storefronts were repaired. As a result, shoppers continued to frequent the downtown area, thereby maintaining a customer base for impacted businesses.
(4) Adopt a comprehensive approach toward combating fraud, waste, and abuse. The influx of financial assistance after a major disaster provides increased opportunities for fraud, waste, and abuse. Looking for ways to combat such activities before, during, and after a disaster can help states and localities protect residents from contractor fraud as well as safeguard the financial assistance they allocate to victims. For example, to reduce contractor fraud after the Red River flood, the city of Grand Forks established a credentialing program that issued photo identification to contractors who passed licensing and criminal checks.
The safe storage of plutonium has become increasingly important for the Department of Energy (DOE) since it ceased producing nuclear weapons in 1989. Although DOE no longer manufactures plutonium for use in nuclear weapons, the plutonium it produced in the past by irradiating uranium in nuclear reactors poses hazards to workers’ health and safety. The majority of DOE’s plutonium inventory (excluding reactor fuel, spent nuclear fuel, and special isotopes) is stored at five sites that formerly developed or produced nuclear weapons components or materials and a sixth facility where those weapons are now dismantled. Prior to 1989, DOE usually stored plutonium only temporarily because the Department continually recycled it for use in nuclear weapons. In 1994, both DOE and the Defense Nuclear Facilities Safety Board identified problems with how the Department stored its plutonium. In an effort to remediate these problems, DOE developed and began implementing a plan to stabilize and package its plutonium that was not in nuclear weapons components. Plutonium in nuclear weapons components was excluded because it was considered to be relatively safe and stable compared to other forms of plutonium. Although it recently decided to dispose of the United States’ excess plutonium inventory, DOE is many years away from implementing this decision and must safely store these materials in the interim. Plutonium, a radioactive element, exists in several forms, including metals, oxides, residues, and solutions. Plutonium metals are stable if packaged correctly. The remainder of DOE’s plutonium—oxides, residues, and solutions—is in forms that are less stable. Plutonium oxides are fine powders produced when plutonium metals react with oxygen—during processing of plutonium for weapons or other uses, or during storage. Plutonium residues are the by-products of plutonium processing and generally contain plutonium in concentrations of less than 10 percent. 
These residues include plutonium mixed with other materials, such as impure plutonium metals and oxides, ash, contaminated glass and metals, and other items. Plutonium solutions are acidic and corrosive, making their containers vulnerable to leakage. Most of DOE’s plutonium is stored as metals because during the production era, plutonium in other forms was recycled and purified into metals to be used in pits for nuclear warheads. A plutonium pit is a nuclear weapons component, made up of a plutonium metal sphere encased in a nonradioactive metal shell, which can be compressed by detonating high explosives inside a weapon to create a nuclear explosion. If not safely contained and managed, plutonium can be dangerous to human health, even in small (microgram) quantities. Inhaling a large dose of plutonium particles can cause lung injuries and death, while exposure to a small dose creates a long-term risk of lung, liver, and bone cancer. When the container or packaging (and the metal shell for pits) fails to fully contain the plutonium, the potential for exposure exists. Leakage from corroded containers or inadvertent accumulations of plutonium dust in piping or duct work pose health and safety hazards, especially in aging, poorly maintained, or obsolete facilities. When DOE stopped producing nuclear weapons in 1989, much of its plutonium was either not in a suitable form, such as plutonium in solutions, or was not packaged for long-term storage. DOE’s plutonium inventory is stored primarily at six sites. Five of these sites formerly developed or produced nuclear materials or weapons: the Hanford Site, in Washington; Lawrence Livermore National Laboratory, in California; Los Alamos National Laboratory, in New Mexico; the Rocky Flats Environmental Technology Site, in Colorado; and the Savannah River Site, in South Carolina. 
The remaining site, the Pantex Plant, in Texas, is predominantly a nuclear weapons dismantlement site, where the majority of DOE’s plutonium pits are stored. Pantex does not store plutonium that is not in pits. (See fig. 1.1.) The former weapons production sites have different amounts and forms of plutonium not in pits. For example, the Rocky Flats Environmental Technology Site, with about 12.7 metric tons of this plutonium, has the largest inventory and many of the more unstable forms, including residues; the amounts and forms held at the other four sites are shown in table 1.1. Even though the United States no longer manufactures new nuclear weapons, some of DOE’s plutonium is still needed to support the U.S. nuclear weapons stockpile. The plutonium pits in DOE’s custody that are needed for national security purposes are stored primarily at the Pantex Plant. As part of the U.S. nuclear strategic reserves, these pits will be retained for an indeterminate amount of time, in case the plutonium is ever needed for use in nuclear weapons. In 1994, both DOE and the Defense Nuclear Facilities Safety Board noted safety problems with DOE’s storage of plutonium not in pits. DOE subsequently developed an implementation plan to address these safety problems by having much of this plutonium stabilized and packaged for safe long-term storage by May 2002. In March 1994, the Secretary of Energy requested that DOE’s Office of Environment, Safety and Health conduct a comprehensive assessment to identify the risks of storing plutonium in DOE facilities and to determine which risks were the most dangerous and urgent. The assessment, which considered both plutonium not in pits and plutonium in pits, identified such vulnerabilities as the degradation of plutonium materials and packaging and weaknesses in facilities and administrative controls.
These vulnerabilities are important because they could cause inadvertent releases of plutonium, which could expose workers. In April 1994, the Defense Nuclear Facilities Safety Board issued a report describing problems with plutonium storage safety at four of the Department’s sites with large inventories of plutonium. Subsequently, in May 1994, the Board recommended that the Department take action to safely store its plutonium. In this recommendation, the Board expressed concern that the cessation of nuclear weapons production had left plutonium in an unsafe state that should be remediated. For example, when packaging the plutonium not in pits, some sites used plastic inner liners, which could react with the plutonium to form a buildup of hydrogen gas that could bulge and even rupture the outer containers or cause the plutonium to spontaneously ignite. The Board also identified specific materials, in the form of plutonium residues, that it believed to be higher-risk because of their unstable nature, uncertainty about what the plutonium was mixed with, or the inappropriate packaging of the materials. According to a Board staff member, the Board excluded plutonium in pits from its recommendation because it believed that in the near term, storage problems were not as severe for pits as for the other forms of plutonium. As required by statute, the Secretary of Energy prepared an implementation plan responding to the Defense Nuclear Facilities Safety Board’s 1994 recommendation. In that plan, DOE established milestones for stabilizing and packaging its plutonium not in pits, including metals, oxides, residues, and solutions. Stabilizing plutonium not in pits includes such activities as brushing loose oxides from the plutonium metals and heating plutonium oxides to a high temperature to (1) remove any moisture that could cause the buildup of gases that could burst the containers and (2) make the oxides into larger particles to reduce the potential for dispersal. 
Plutonium residues are typically stabilized either by converting them into plutonium oxides through various processes or by blending them with other materials for disposal at the Waste Isolation Pilot Plant when this facility becomes available. Plutonium solutions are not appropriate for storage and have to be processed into a solid form before the plutonium can be stored. DOE requires that stabilized plutonium metals and oxides that are not in pits be packaged in approved, sealed double containers to isolate the plutonium from the outside environment and to prevent its release. In April 1997, we reported that DOE estimated that its plutonium management activities, including stabilization and storage, at eight sites across the complex would cost approximately $7.9 billion, in constant 1996 dollars, from fiscal year 1995 through fiscal year 2002. However, DOE does not specifically break out from that total its costs for stabilizing, packaging, and storing its plutonium. In January 1997, DOE formally decided how it would dispose of its plutonium that is excess to national security requirements. The Department plans to convert excess plutonium into forms that are difficult to reuse in nuclear weapons and are suitable for permanent disposal and to store the plutonium until the conversion can be completed. To convert its excess plutonium to other forms, DOE intends to pursue a hybrid strategy: (1) burning the plutonium as fuel in power reactors and (2) immobilizing it in glass or ceramic material. As described in our April 1997 report, DOE’s estimated cost to implement its hybrid strategy would be approximately $2 billion, in constant 1996 dollars. This strategy, however, is subject to technical, institutional, and cost uncertainties. For example, DOE has not yet determined where the disposition facilities will be located or which technology will be used for immobilization.
DOE is currently assessing the possible environmental impacts at several likely sites where plutonium disposition activities may take place and plans to make a final decision in late 1998 or early 1999. While DOE’s January 1997 record of decision on disposition strategies focuses on converting the nation’s excess plutonium to safer forms for disposal, DOE must safely store its excess plutonium until disposition facilities are built and available for converting the plutonium. In April 1997, we reported that DOE anticipates completing its conversion activities by 2023. The Chairman of the Subcommittee on Energy and Power, House Committee on Commerce, asked us to review DOE’s efforts to stabilize, package, and store its plutonium, including problems the Department has encountered or anticipates in accomplishing these activities, specifically for (1) plutonium that is not in the form of nuclear weapons components, or pits, and (2) plutonium in the form of pits. To review DOE’s management of its plutonium that is not in pits (excluding reactor fuel, spent nuclear fuel, and special isotopes), we obtained and analyzed DOE’s 1994 plutonium vulnerability assessment, its plutonium storage standards, and its implementation plan for stabilizing and packaging the plutonium. We identified progress in meeting milestones in the plan by interviewing officials and gathering and analyzing data from the Defense Nuclear Facilities Safety Board, DOE headquarters, and the DOE sites that maintain the majority of DOE’s plutonium not in pits. These sites are the Hanford Site, near Richland, Washington; Lawrence Livermore National Laboratory, in Livermore, California; Los Alamos National Laboratory, in Los Alamos, New Mexico; the Rocky Flats Environmental Technology Site, near Denver, Colorado; and the Savannah River Site, near Aiken, South Carolina.
To review DOE’s management of its plutonium pits, we reviewed and analyzed DOE’s 1994 plutonium vulnerability assessment and reviewed and analyzed the subsequent Pantex Corrective Action Plan. We also interviewed officials and gathered and analyzed data from the Defense Nuclear Facilities Safety Board, DOE headquarters, DOE’s Albuquerque Operations Office, Los Alamos National Laboratory, and the Pantex Plant, near Amarillo, Texas. The Department of Energy provided written comments on a draft of this report. These comments are presented and evaluated at the end of chapters 2 and 3. The full text of the Department’s comments is provided in appendix I. We conducted our review from May 1997 through February 1998 in accordance with generally accepted government auditing standards. DOE’s activities to stabilize, package, and store its plutonium not in pits are primarily guided by two DOE standards governing plutonium storage and the Department’s implementation plan, which commits the Department to stabilize and package its plutonium metals and oxides for long-term storage by May 2002. While the five DOE sites with the majority of the plutonium not in pits have made progress in stabilizing their plutonium, all have had delays in meeting implementation plan milestones, including some critical ones for higher-risk plutonium, and the sites anticipate more delays. Various problems contribute to these delays in meeting milestones, including (1) changes from the technologies originally chosen by Rocky Flats to stabilize plutonium to meet a security requirement; (2) a suspension of plutonium stabilization operations due to safety problems at Hanford; (3) competing priorities for funding, staff, and equipment at Los Alamos; and (4) delays in obtaining a system for stabilizing and packaging plutonium at three sites. Missing these milestones will result in some sites’ not having all of their plutonium metals and oxides stabilized and packaged by May 2002. 
Given the inherent dangers of plutonium, such delays result in a continuing risk to workers’ health and safety and increased costs. Although DOE is planning to dispose of its excess plutonium, it has yet to develop final disposition criteria. As a result, it is unknown whether current activities to stabilize and package plutonium for long-term storage will be compatible with the activities required for the disposition of this plutonium. DOE’s activities to stabilize, package, and store its plutonium not in pits are based primarily on three DOE documents: (1) Criteria for Preparing and Packaging Plutonium Metals and Oxides for Long-Term Storage, dated September 1996 (DOE Standard 3013); (2) Defense Nuclear Facilities Safety Board Recommendation 94-1 Implementation Plan, dated February 1995; and (3) Criteria for Interim Safe Storage of Plutonium-Bearing Solid Materials, dated November 1995. DOE Standard 3013 establishes safety criteria for packaging plutonium metals and stabilized plutonium oxides for long-term storage. This standard prescribes the form the plutonium must be in and processes for stabilization. For example, Standard 3013 requires that plutonium oxides be stabilized by heating them in air to a very high temperature—approximately 950 degrees Celsius or higher—for at least 2 hours. The standard also contains requirements for plutonium packaging and for inspection, surveillance, documentation, and quality assurance and control. According to DOE Standard 3013, plutonium that is stabilized and packaged to meet this requirement should be safe for storage for at least 50 years. DOE’s implementation plan established milestones to address the Defense Nuclear Facilities Safety Board’s 1994 recommendation to the Secretary of Energy for the safe storage of the Department’s nuclear materials, including plutonium not in pits.
In its implementation plan, DOE agreed to have all of its plutonium metals and oxides stabilized and packaged to meet DOE Standard 3013 by May 2002. Officials at the five sites we visited stated that, until their plutonium metals and oxides meet Standard 3013, they are meeting DOE’s criteria for interim storage. Issued in November 1995, the interim storage criteria—for storage from 5 to 20 years—define an acceptable interim state for plutonium residues until they are converted to oxides and meet Standard 3013 or are shipped to the Waste Isolation Pilot Plant. To provide flexibility to address the broad range of materials and differences among facilities, the interim storage criteria are very general in nature and allow for a variety of approaches. However, the criteria are less stringent than Standard 3013 and do not provide the level of storage safety afforded by the standard. According to DOE site officials and a Defense Nuclear Facilities Safety Board staff member, until the plutonium metals and oxides meet Standard 3013, there is a continuing risk to workers’ health and safety. The five sites we reviewed have made progress in stabilizing their plutonium. According to DOE officials, plutonium stabilization activities have focused on getting the plutonium into safer forms or packaging to reduce the risk to workers’ health and safety. For example, Rocky Flats has drained plutonium solutions from 15 tanks and processed many of these solutions into solid forms, thus reducing the risk. In addition, Rocky Flats and Savannah River have repackaged all of their plutonium that was in direct contact with plastic—a condition that is dangerous because the plastic can react with the plutonium to form a buildup of gas that can cause the containers to rupture and possibly ignite spontaneously if exposed to air.
Given the numerous past and anticipated future delays at the various sites, however, it seems unlikely that DOE will meet its May 2002 date for stabilizing, packaging, and storing its plutonium that is not in pits. DOE established 98 milestones for its plutonium stabilization and packaging activities at the five sites we visited, ranging from 9 milestones at Lawrence Livermore to 37 at Rocky Flats. Half of these (49 of 98) mark activities that have been completed at the five sites. These milestones focused on two primary areas: (1) preliminary activities required for subsequent stabilization activities, such as preparing environmental impact statements, and (2) stabilizing higher-risk plutonium, such as plutonium in contact with plastic. Of the remaining 49 milestones, 59 percent have already been delayed or are at risk of delay. These remaining milestones include activities for completing the stabilization and packaging to ready plutonium metals and oxides for long-term storage. All five sites have identified milestones that are at risk of delay, and over 40 percent of these delays are expected to be for 1 year or more from the original due dates in the implementation plan. Notwithstanding these potential delays, DOE officials at three of the sites believe they will meet the May 2002 commitment date, but officials at two of the sites told us they will not. Officials at Rocky Flats, Savannah River, and Lawrence Livermore stated that they plan to have their plutonium metals and oxides stabilized and packaged for long-term storage by May 2002. On the other hand, officials at Hanford and Los Alamos told us that they currently anticipate missing the May 2002 date, although these delays to the Department’s commitment date have not been approved by DOE headquarters. Hanford officials estimate that their completion date will slip by 7 months because of the suspension of the site’s plutonium stabilization activities at one facility there.
According to Los Alamos officials, their site is planning to delay completing its activities for up to 3 years beyond May 2002. Table 2.1 shows, for the five DOE sites, the status of the implementation plan milestones for stabilizing and packaging plutonium not in pits. As shown in table 2.1, all five sites have identified milestones that are at risk of delay, but these milestones and the sites’ plans for them vary. For example, although Los Alamos has identified only two milestones at risk of delay, one of these milestones is the ultimate completion of its stabilization and packaging activities. Los Alamos is anticipating up to a 3-year delay beyond May 2002 because of its competing priorities for funding, staff, and equipment. On the other hand, while Lawrence Livermore has seven remaining milestones—five of which are at risk of delay—officials from this site told us that because they have a very small inventory of plutonium to stabilize and repackage, they anticipate meeting the May 2002 date. Although Rocky Flats officials told us that they plan to meet the May 2002 date, we believe the site may have difficulty meeting this commitment because of the many delays it has already experienced and the additional milestones it anticipates missing in the future. Site officials explained that there may be alternatives to stabilizing plutonium on-site—including shipping some to other sites for stabilization. They also believe that, in readying the metals and oxides for storage, they can achieve higher efficiencies from their new plutonium stabilization and packaging system than they originally expected. However, many obstacles would have to be overcome to allow the shipment of unstabilized plutonium to other sites, including determining the receiving sites’ future storage capabilities and obtaining approval for shipments.
Also, the site’s new stabilization and packaging system has not yet been installed or fully tested, and any possible efficiencies in the new system have not been proven. Furthermore, Rocky Flats possesses the most plutonium among the five sites and many of the more unstable residues and solutions, but only limited capability to process these materials. Delays that have occurred or are anticipated in meeting implementation plan milestones are attributable to several factors. For example, unanticipated changes from the technologies originally chosen to stabilize some of the plutonium residues have impeded progress at Rocky Flats, as has the suspension of plutonium stabilization activities at Hanford. In another case, as described, Los Alamos officials cited competing priorities for funding, staff, and equipment as an impediment. Furthermore, three sites are experiencing delays in obtaining a system for stabilizing and packaging their plutonium. These delays result in a continued risk to workers’ health and safety and increased costs to DOE and taxpayers. According to DOE officials, unanticipated changes from the technologies originally chosen to stabilize two types of Rocky Flats’ plutonium residues have contributed to delays in meeting two of its milestones. Originally, Rocky Flats officials thought that all of the site’s residues would be exempted from meeting a DOE security requirement specifying the level of plutonium content acceptable so that the materials will not have to be guarded at the Waste Isolation Pilot Plant. In July 1996, DOE headquarters officials informed Rocky Flats that it had to either comply with this requirement or qualify for a variance. Shortly thereafter, Rocky Flats requested but was subsequently denied a variance for some of its plutonium residues. In particular, Rocky Flats had originally planned to have one type of plutonium residue (graphite fines) stabilized by May 1997. 
However, since the process it had originally chosen would not meet the security requirement, Rocky Flats selected a different process for stabilizing graphite fines—switching from heating them at a high temperature (calcination) to immobilizing them in molten glass (vitrification). To accommodate this change, the site plans to spend an additional $300,000 and will not have its graphite fines stabilized until September 1998—a delay of 16 months from the original milestone. In addition, Rocky Flats had originally planned to have the majority of its plutonium salt residues stabilized by May 1997 using an available technology. According to a DOE official, as with the situation with graphite fines, Rocky Flats thought these salts would be exempted from the security requirement specifying the allowable plutonium content. However, for some of these salt residues, the site did not receive a variance, and since the process it had originally chosen would not comply with this requirement, a different technology—a distillation process to separate the salts from the plutonium—was chosen. To accommodate this change, the site plans to spend an additional $14.5 million and does not expect to complete the work for this milestone until January 1999—a 20-month delay from the original date in the implementation plan. Since December 1996, the Hanford Site’s stabilization activities have been suspended owing to the shutdown of one of its facilities for safety infractions. The DOE contractor managing this facility failed to comply with operating regulations concerning the safe handling of nuclear materials—leading to the suspension of plutonium stabilization operations at this facility. In order to resume operations, the facility must pass a review by DOE. Hanford officials expect to resume stabilization activities at the plant in March 1998, at the earliest. 
In addition to the suspension of stabilization activities, Hanford expects delays in installing its new plutonium stabilization and packaging system because of budget cutbacks. To make up for these delays, Hanford officials told us that when this new system becomes operational, they plan to go from a 5-day-per-week, three-shift-per-day work schedule to a 7-day-per-week, three-shift-per-day schedule. This increase would last about 2 years—beginning late in 2000, when the site’s plutonium stabilization and packaging system is planned to become fully operational, and continuing into December 2002, when Hanford officials plan to have all of the site’s plutonium metals and oxides stabilized and packaged for long-term storage. Hanford officials were unable to estimate the likely costs of the approximately 2-year expanded work schedule, and given the site’s budget constraints, they were unsure whether funds for this work schedule would be available. In commenting on a draft of this report, the Department stated that questions remain about how plutonium stabilization work will be prioritized by the site. The Department believes that if the risk is determined to be high enough, funds will be provided. According to Los Alamos officials, competing priorities for site funding, staff, and equipment have caused delays there. These officials stated that the site may not have its plutonium stabilized and packaged for long-term storage by May 2002 and plans to delay its completion date by up to 3 years—possibly until 2005. According to site officials, an assessment the site conducted in mid-1997 shows a marginal increase in risk due to the delay. According to site officials, the site’s stabilization program lost momentum because of budget reallocations in fiscal year 1997, and they expect additional funding reallocations for fiscal year 1998.
In commenting on our draft report, the Department clarified that as DOE reduces the overall size of its weapons complex, missions and programs considered still vital to national defense are being relocated and consolidated at the Department’s remaining operational sites. Los Alamos has become the new site for some of these relocated missions and programs. Plutonium stabilization activities must compete with these defense missions and programs for financial resources, personnel, and facilities at the site, and this competition will likely continue in the future as Los Alamos continues to expand its weapons-related mission. However, DOE further commented, “Remediation efforts will continue at Los Alamos, and the Department is reviewing proposals to hire additional personnel and add additional equipment to continue this work in an effective and efficient manner.” Four of the five sites we visited—Rocky Flats, Hanford, Lawrence Livermore, and Savannah River—plan to procure and install a new plutonium stabilization and packaging system for their metals and oxides to meet DOE’s long-term storage standard. The sites will have variations of this system, with costs ranging from nearly $1.9 million for a manual packaging system at Lawrence Livermore to $28.9 million for the prototype automated version of the stabilization and packaging system at Rocky Flats. Three sites have identified milestones that are at risk because of delays in procuring this new system. Rocky Flats and Hanford anticipate delays ranging from 6 to 18 months in having their stabilization and packaging systems operational—contributing to difficulties in meeting the May 2002 date. The third site that is experiencing delays in using this system is Lawrence Livermore; however, this site is purchasing a manual packaging unit, has only a small quantity of plutonium to package, and anticipates meeting the May 2002 commitment date. 
DOE’s plutonium stabilization and packaging activities are focused on getting the Department’s plutonium that is not in pits into safe long-term storage. If this plutonium is not stabilized and stored properly for the long term, it could become airborne, exposing workers to it. As described earlier, plutonium can be dangerous to human health, even in small quantities, and site officials acknowledge that any delays in stabilizing, packaging, and storing the plutonium result in continuing the existing level of risk to workers’ health and safety by delaying the risk reduction that is achieved by those activities. Delays also result in increased costs. For example, according to a Hanford official, continuing plutonium stabilization and packaging operations at the site would cost $20 million per year, at current costs. While Savannah River anticipates meeting the May 2002 date, it expects an intermediate delay that will result in the continued operation of one of its processing facilities for an extra year, at a cost of $16 million. Delays also prevent DOE from achieving cost reductions from deactivating sites or facilities, as safeguards and security must be provided as long as plutonium or other nuclear materials remain there. As noted earlier in chapter 1, the Department plans to convert the nation’s excess plutonium through two technologies—burning the plutonium in reactors and immobilizing it in glass or ceramics—to make it difficult to reuse in nuclear weapons and suitable for permanent disposal. Until DOE has developed and built facilities for both of these options, it plans to store the excess plutonium at several DOE sites. Although DOE announced its decision to dispose of the excess plutonium, it has not finalized the criteria the plutonium must meet to be acceptable for disposition. 
According to a DOE official, at the time the decision was announced, in January 1997, the two disposition technologies were not mature enough for disposition criteria to be developed. Since then, DOE produced a draft of the disposition criteria in July 1997, and final criteria are expected in June 1998. Without final disposition criteria available, the sites are proceeding to stabilize and package their plutonium that is not in pits according to the existing storage standards—especially DOE Standard 3013. However, DOE Standard 3013 for long-term storage and the draft criteria for disposition vary in some significant ways, which could result in additional activities or processing steps and increased costs. For example, according to DOE, the draft disposition criteria would require the sites to provide historical information on how the plutonium was processed, what impurities are likely to be included with it, and what the physical condition of the plutonium is. However, under Standard 3013, the sites are not currently required to retain this information with the plutonium. If the necessary information were not available, the draft criteria for disposition would require the sites to sample their plutonium to gather this information. Sampling of the plutonium is not required by Standard 3013 and, as described in the draft disposition criteria, would require additional and potentially expensive equipment and activities by the sites prior to shipping the plutonium to the disposition facilities. The additional equipment and activities would add to the cost and time required for disposing of the plutonium. 
According to officials from both the Office of Environmental Management and the Office of Fissile Materials Disposition—the DOE headquarters organizations responsible for stabilization, packaging, and storage activities and for disposition activities, respectively—there has been some coordination between the two organizations to attempt to resolve differences between DOE Standard 3013 and the draft disposition criteria. However, in a December 8, 1997, letter to the Secretary of Energy, the Chairman of the Defense Nuclear Facilities Safety Board cited problems with DOE headquarters’ integration of stabilization and disposition and concluded that these problems had contributed to delays in meeting implementation plan milestones and unacceptable postponement of stabilizing materials, along with significantly greater budget requirements. Specifically, the letter noted that there was no organization with crosscutting authority and resources within the Department to integrate stabilization and disposition activities across the DOE complex. To remedy this problem, the Board suggested that DOE designate a lead officer with primary responsibility for the program as a whole. According to a Board staff member, DOE has not responded to the December 8, 1997, letter. In addition to the Board’s concerns, several site officials told us that they are concerned about whether plutonium that is stabilized and packaged to meet the standard for long-term storage will be compatible with DOE’s final disposition criteria. Several site officials also stated that the DOE headquarters organizations responsible for these two activities need to work out differences between the long-term storage and disposition requirements to preclude additional activities or processing steps, which would add to the cost and time required. One contractor official told us that if the bridge between stabilization and disposition were fully understood, complications with disposition could be avoided. 
DOE is taking important steps to reduce the dangers of plutonium that is not in pits by beginning to stabilize and package it for long-term storage. For example, the sites have stabilized the majority of the higher-risk residues to reduce the risk to workers’ health and safety. However, given its history of delays and the anticipated future delays in meeting many of its milestones, DOE is unlikely to meet its commitment to stabilize, package, and store its plutonium metals and oxides by May 2002. Delaying these activities will result in continuing health and safety risks to workers and increased costs at DOE facilities. As stabilization, packaging, and storage activities progress to meet DOE Standard 3013 for long-term storage, the Department is also moving toward the disposition of excess plutonium. The headquarters organizations responsible for these two sets of activities—the Office of Environmental Management and the Office of Fissile Materials Disposition—have coordinated to some extent, but the Defense Nuclear Facilities Safety Board has recently cited problems with the integration of these activities across the DOE complex. Furthermore, several site officials have suggested that the two organizations need to work out any differences between the final disposition criteria (due out in June 1998) and DOE Standard 3013 to avoid unnecessary rework and costs. In overall comments concerning its stabilization program, the Department stated that the program has now moved into a phase that requires extensive integration among deactivation programs, disposition programs, and active weapons programs. 
Furthermore, the implementation of several policy decisions—including policies regarding stewardship of the nuclear weapons stockpile, the disposition of “weapons-usable” fissile materials, and accelerated cleanup—has required the Department to reevaluate many of its stabilization plans, “to define a technically and managerially sound path forward.” According to the Department, activities have been initiated to produce a fully integrated and optimized revision to the implementation plan for plutonium stabilization, complexwide. The Department is proposing a two-path approach to formally revise its commitment in the implementation plan: (1) as soon as possible, forward known changes and decision paths to the Defense Nuclear Facilities Safety Board and (2) by the end of December 1998, submit an integrated revision of the implementation plan to DOE management and the Board for approval. We agree with the Department’s commitment to define a technically and managerially sound path in revising its implementation plan. Furthermore, as reflected in our conclusions, we support the Department’s stated intent to integrate its plutonium management across the complex. However, based on its comments, the Department appears to be totally reassessing its existing implementation plan in light of the opportunities for this integration and departmental policy decisions, such as those regarding stewardship of the stockpile and accelerated cleanup. Until the Department’s complexwide plan is complete—scheduled for the end of December 1998—we cannot speculate on the impact in terms of costs; timeframes for completing plutonium stabilization, packaging, and storage activities; or the risk to the workers. DOE further commented that the statement “The Department is unlikely to meet its May 2002 target date . . .” does not convey the fact that a large percentage of the stabilization work will be done by May 2002, even if that milestone for final repackaging of the plutonium is missed at some sites. 
While we agree that much of the stabilization work could be done by May 2002, we cannot project with any degree of certainty the actual extent to which it will be completed. Furthermore, while stabilization is a critical step in this process, the risk reduction to workers anticipated by the Defense Nuclear Facilities Safety Board’s 1994 recommendation and the Department’s implementation plan will not be fully achieved until the plutonium is packaged for safe long-term storage. While some sites are currently projecting that they will have all of their stabilization and packaging activities completed by May 2002, others are anticipating delays. Therefore, DOE as a whole is unlikely to meet the May 2002 target date. In addition to the overall comments cited above, the Department provided a number of more detailed or technical comments, and the report has been revised as appropriate to reflect these specific comments. The Department’s comments and our responses are presented in appendix I. Since the end of the Cold War, DOE has retired and dismantled large numbers of nuclear weapons and curtailed recycling the plutonium into new nuclear weapons. As a result, the Department has had to store the plutonium pits for prolonged periods of time. However, because extended storage had never been required, DOE had no containers specifically designed for that purpose. Since 1989, DOE has stored pits in a type of container known as the AL-R8, which was designed to transport pits. However, since that time, both DOE and the Defense Nuclear Facilities Safety Board have indicated that pits should not be stored in these containers for an extended period. These containers are unsuitable for extended storage because moisture absorbed by their cushioning liner could accelerate some pits’ corrosion, increasing the possibility that a pit will crack. Should that occur, the container may not contain the plutonium, thus risking workers’ exposure to it. 
To remedy this safety problem, DOE spent nearly $50 million over 5 years to develop a replacement container, but because each container will cost about $8,000, the Department plans to use the new container to repackage only about 5 percent of its pits. Currently, DOE has no formal plan or schedules to repackage the remaining 95 percent of its pits. However, DOE is evaluating options for another replacement container and intends to choose a design and have a repackaging plan by April 1998. As of February 1998, only a preliminary draft of the plan was available—much of it only in outline format—so we were unable to determine if it will adequately address the outstanding issues in storing pits. In the meantime, about 10,000 pits at DOE’s Pantex Plant have been stored in the AL-R8 containers, posing a risk to workers’ health and safety, and DOE has only preliminary estimates of what it will cost to resolve this problem. Moreover, as DOE continues to dismantle weapons, the number of pits stored in these containers continues to grow. Although the Defense Nuclear Facilities Safety Board and DOE laboratories have criticized the limited monitoring program for the pits stored for an extended period in AL-R8 containers at Pantex, the Department has decided not to implement the aggressive monitoring program recommended by the laboratories to maintain safety. Since the end of the Cold War and the dissolution of the Soviet Union, the United States has entered into international agreements and established national policy to retire and dismantle thousands of nuclear weapons. As it removed pits from these weapons, DOE no longer recycled the plutonium for use in manufacturing new weapons, but, for the first time, had to store these pits for a prolonged period. However, DOE had no containers specifically designed for that purpose. 
As a result, in 1989, when DOE started storing increasing numbers of pits, the Department decided to store them in existing AL-R8 containers, which were designed for transporting the pits. According to the DOE official responsible for overseeing the storage of pits, in 1989 the Department may have assumed that because the AL-R8 containers had been certified to transport pits and met requirements to withstand various accident scenarios, they could also be used to store the pits. The basis for this assumption, however, is unclear, and DOE officials were unable to provide any analysis supporting the 1989 decision. An AL-R8 container consists of an outer steel drum with a clamped (but unsealed) lid. Inside this steel drum, the pit is secured on a metal frame and surrounded by a fibrous cushioning liner. Normally, pits are placed into AL-R8 containers after they have been removed from retired nuclear weapons during the dismantlement process at Pantex. See figure 3.1 for an illustration of an AL-R8 container. In 1990, the AL-R8 container was decertified for transportation because it could not meet updated shipping requirements, such as crush and leak tests. Within 1 year, DOE’s Albuquerque Operations Office sent a letter to Pantex and Rocky Flats directing that the AL-R8 not be used to transport pits off-site but allowing the continued use of the container for storing them. However, DOE was unable to provide documentation or related analysis explaining the basis of this decision. According to DOE officials, in 1992 the Department decided that the AL-R8 containers were the best it had available at that time for storing pits. However, again, DOE had no technical analysis to determine whether these containers were adequate for storing pits for an extended period of time. Using the AL-R8 container for storing pits poses a risk to workers’ health and safety. 
A DOE study and, more recently, DOE laboratory officials have expressed concerns about the continued use of the AL-R8 container to store pits. DOE’s 1994 vulnerability assessment noted that “being unsealed, the AL-R8 container does not keep out airborne contaminants and would not totally contain plutonium released from a failed pit.” In 1995, DOE’s Los Alamos and Lawrence Livermore national laboratories—the two laboratories that had designed the pits—jointly recommended that all pits be removed from the AL-R8s as soon as possible because of potential problems with corrosion resulting from moisture and chloride absorbed by the containers’ cushioning liner. According to the laboratories, the moisture and chloride can accelerate the pits’ aging process, which could lead to a pit’s cracking and the release of plutonium, thereby potentially exposing workers at Pantex. DOE and laboratory officials have also expressed concern over the aging of the pits and the extended period that some have been stored in the AL-R8 containers. Some of DOE’s pits are over 36 years old, and some have been stored in these containers for over 8 years. In late 1992, after the AL-R8 was decertified for transportation, DOE began a project to replace the container and, in 1993, clarified that this replacement container—known as the AT400A—had to be designed for both the transportation and storage of pits. Although it subsequently invested a great deal of time and nearly $50 million in this effort, DOE recently decided to use the AT400A to repackage only about 5 percent of its pits. At this time, DOE has no formal plan or schedules for repackaging approximately 95 percent of the pits. However, according to DOE officials, while a formal decision has not yet been made, the Department is developing a plan, which it intends to issue in April 1998. 
DOE officials believe that a “retrofit” of the AL-R8 is the most likely option and that it will be several more years before all the pits currently stored in the AL-R8 containers can be repackaged. In late 1992, after the AL-R8 failed to meet new transportation standards, the Department undertook a project to design a replacement container, called the AT400A. In addition to being used for transporting pits, DOE decided that the container had to also be able to store them for at least 20 years. However, DOE has not been successful in developing a cost-effective container that provides safe long-term storage and can also be used to transport pits. According to some DOE officials, to be cost-effective, a transportation container must be reusable. In contrast, a storage container, as illustrated by the problems with the AL-R8, needs to be sealed to keep out moisture and to keep the plutonium contained in the event that the pit would crack. Nonetheless, DOE attempted to design and develop a container that could be used for both purposes. After investing a great deal of time and nearly $50 million to design and develop the replacement container, DOE found that it is not cost-effective for extensive use in either capacity. According to DOE officials, at a cost of about $8,000 per container (largely due to transportation requirements), the AT400A is not cost-effective for use as a storage container (the containers alone for 10,000 pits would cost about $80 million). Furthermore, according to DOE officials, the AT400A is not cost-effective for multiple shipments between sites because it is designed to be welded shut for storage purposes and therefore is not reusable. The AT400A container consists of an outer stainless steel container that surrounds an inner, sealed container, within which a pit is secured by a metal fixture. 
Unlike the case with the AL-R8 container, the pit inside an AT400A is in a sealed environment and is not directly in contact with the cushioning material that could absorb moisture. Figure 3.2 shows an AT400A container. In addition to problems with developing a cost-effective dual-purpose design, DOE did not provide effective oversight or coordinate the work of its laboratories and Pantex in developing the AT400A container. DOE tasked three of its national laboratories to work on various aspects of the project: Sandia National Laboratory developed the container and the system to weld it shut, while Los Alamos and Lawrence Livermore national laboratories jointly developed the fixture to hold the pit inside the container. However, DOE did not ensure that the work of the laboratories was adequately coordinated and did not adequately involve Pantex safety experts in the design and development process. As a result, according to DOE and Pantex officials, after the design phase was complete, Pantex safety experts had to compensate for design flaws, including a defective safety system and a weld placed directly over the pit that could have allowed the welder to burn through the container into the pit. To resolve these major design problems, Pantex needed to commit additional time and expense. DOE project officials acknowledge that the Department did not adequately manage the development of the AT400A container and that design flaws occurred because of a lack of good coordination and communication among the four sites. DOE’s 1994 plutonium vulnerability assessment first identified problems with using the AL-R8 container for storing pits, but the Department has yet to resolve these problems. 
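The cost comparison underlying DOE’s decision to limit use of the AT400A can be checked with simple arithmetic. The sketch below uses only the report’s approximate figures (about $8,000 per container and roughly 10,000 pits stored at Pantex) to reproduce the roughly $80 million estimate for containers alone:

```python
# Back-of-envelope check of the AT400A storage-cost figure cited in this
# report: about $8,000 per container for roughly 10,000 pits stored at Pantex.
COST_PER_AT400A = 8_000      # dollars per container (report's approximate figure)
PITS_IN_STORAGE = 10_000     # approximate number of pits stored at Pantex

total_container_cost = COST_PER_AT400A * PITS_IN_STORAGE
print(f"Containers alone: ${total_container_cost / 1e6:.0f} million")  # $80 million
```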
In June 1995, DOE developed a corrective action plan, and even though the AT400A container was then only under development, the Department regarded it as the container that would correct all the problems with the AL-R8 and developed schedules to repackage all the pits at the Pantex Plant into the new container by 2006. However, after determining that the cost to use the new container to repackage all of the pits was prohibitive, DOE decided to use the AT400A for only about 5 percent of the pits—those it considered to be at higher risk of cracking. Thus, DOE has essentially abandoned its initial plan and, as of January 1998, had not developed a formal plan and schedules to repackage the remaining 95 percent of the plutonium pits stored in AL-R8 containers. According to DOE officials, the Department is developing a plan for repackaging these pits, which it intends to issue in April 1998, with repackaging to begin in late 1998. However, in a preliminary draft of the plan provided by officials in February 1998, many sections were in only a cursory outline form, so we were unable to determine if the plan will be adequate to ensure the problems in storing pits will be addressed. For example, at that time, the draft did not contain schedules or cost estimates for selecting a design, procuring the containers, or repackaging the pits. Furthermore, this draft included a listing of the numerous entities involved with repackaging and storage within various organizations of the Department and its contractors; however, it did not define how these entities will interact or how their efforts will be coordinated, nor did it clearly delineate program responsibility and accountability for overseeing the various facets of the pit repackaging and storage program to ensure its success. DOE is developing a repackaging alternative that officials believe will be more cost-effective and will allow quicker repackaging than using the AT400A. 
As they describe it, this alternative will probably involve a retrofit of the original AL-R8 container by removing the pit from it, sealing the pit inside an inner container, and placing that inner container back into the AL-R8 outer container. DOE is currently reviewing alternative designs developed by Lawrence Livermore, Sandia, and Pantex and plans to have a decision by April 1998. The Department’s preliminary estimates of the costs to repackage 12,000 pits into retrofitted AL-R8 containers range from $35.5 million to $59.4 million. These estimates are based on the cost to purchase the containers (ranging from $20.5 million to $40.4 million), as well as start-up costs (from $1.2 million to $1.6 million) and operating costs (from $13.8 million to $17.4 million) for repackaging the pits. DOE officials estimate that, at the earliest, the repackaging could begin around the end of 1998. Given the number of pits to be repackaged and competing demands on equipment and facilities at Pantex, they estimate that it may take from 4 to 7 years to complete repackaging once the process begins. Thus, the potential exists for the unsuitable AL-R8 containers to be used for storing pits for up to 16 years. DOE has yet to make several critical decisions concerning pit storage in the future. First, according to DOE officials, pits that are being retained as strategic reserves for possible future use in nuclear weapons will require longer storage than pits that are excess to national security needs and that will eventually be disposed of. Currently, DOE officials expect the AL-R8 retrofit to safely store pits for approximately 20 to 25 years. However, the Department has not decided if it will store the strategic reserve pits in the AT400A container or the retrofit of the AL-R8 container or if it will develop another container for lengthier storage. According to DOE officials, the Department is evaluating this issue, and they expect a decision by April 1998. 
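The component figures in the Department’s preliminary retrofit estimates can be summed to confirm the quoted totals. The sketch below uses only the ranges stated in this report, with all figures in millions of dollars:

```python
# Components of DOE's preliminary cost estimates (millions of dollars) for
# repackaging 12,000 pits into retrofitted AL-R8 containers, as quoted above.
containers = (20.5, 40.4)   # container purchase, low/high estimate
start_up   = (1.2, 1.6)     # start-up costs, low/high estimate
operating  = (13.8, 17.4)   # operating costs, low/high estimate

low  = containers[0] + start_up[0] + operating[0]
high = containers[1] + start_up[1] + operating[1]
# Matches the report's quoted range of $35.5 million to $59.4 million.
print(f"Total range: ${low:.1f} million to ${high:.1f} million")
```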
Second, because the threat of corrosion increases the longer that pits remain in the existing AL-R8 containers, DOE laboratories have recommended that the containers not be used for storing pits and that the Department implement an aggressive monitoring program to help ensure the pits are safely stored until they are repackaged. Specifically, in August 1995, Los Alamos and Lawrence Livermore national laboratories recommended that if DOE continues to store pits in AL-R8 containers for longer than 10 years, it should implement an aggressive monitoring program to examine 20 percent of the pits each year. With about 10,000 pits in storage now, monitoring 2,000 pits per year is a sizable increase over the 30 pits per year that DOE now formally monitors. According to DOE and Pantex officials, implementing the monitoring program recommended by the laboratories would likely require constructing additional facilities, procuring additional equipment, and hiring and training additional staff. Although they had not conducted analyses of the costs or benefits of the enhanced monitoring program and were unable to provide a cost estimate, DOE officials told us that they believed the cost of implementing this program would be “significant and perhaps prohibitive.” They also thought the program would increase workers’ exposure to radiation from frequent handling and moving of the pits. Because the officials hope to have the pits repackaged before this type of aggressive monitoring becomes necessary, they have decided not to implement such a program. Nonetheless, as explained, some pits have already been stored in AL-R8 containers for over 8 years, and it will be several more years before all the pits can be repackaged. Although the Department has decided against the enhanced monitoring of its pits while they remain in the existing AL-R8 containers, DOE officials point out that they plan to conduct a visual examination and to check for contamination as each pit is repackaged. 
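The gap between the laboratories’ recommended monitoring program and DOE’s current practice can be quantified from the figures above. The sketch below uses this report’s approximate numbers:

```python
# Scale of the monitoring program recommended by Los Alamos and Lawrence
# Livermore (examine 20 percent of stored pits each year) versus DOE's
# current formal monitoring rate, using the report's approximate figures.
PITS_IN_STORAGE = 10_000          # approximate pits stored at Pantex
RECOMMENDED_FRACTION = 0.20       # laboratories' recommendation: 20% per year
CURRENT_RATE = 30                 # pits DOE formally monitors per year

recommended_rate = int(PITS_IN_STORAGE * RECOMMENDED_FRACTION)
print(f"Recommended: {recommended_rate} pits per year")            # 2000 pits per year
print(f"Roughly {recommended_rate // CURRENT_RATE}x the current formal rate")
```

At the recommended rate, DOE would examine roughly 66 times as many pits each year as it now formally monitors.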
In its November 1997 report, the Defense Nuclear Facilities Safety Board also criticized DOE’s monitoring program of pits stored in AL-R8 containers at Pantex. In its report, the Board concluded that DOE’s current program to monitor the condition of these pits was insufficient because the number of pits currently monitored each year (approximately 30) was small compared to the thousands of pits stored there. The Board also noted that the variety of pits would require additional monitoring work to gather an adequate amount of data for an informed judgment about each type of pit. According to a Board staff member, monitoring the safety of the pits is most critical while they remain in the existing AL-R8 containers—once the pits have been repackaged into containers more suitable for extended storage, monitoring will be less important. To date, DOE has neither responded to the Board’s report nor addressed the Board’s conclusion that the current monitoring program is insufficient to determine the condition of the pits stored at Pantex. Since 1989, DOE has stored its pits in containers that are not suitable for extended storage. The Department has not effectively managed its problems in storing pits, developed a cost-effective replacement container to repackage the pits, or performed adequate monitoring to ensure the pits are safe. DOE currently lacks a plan and schedules to repackage 95 percent of its pits, and workers’ health and safety have been placed at risk; the problem will continue to grow as DOE continues to retire and dismantle nuclear weapons and place additional pits into AL-R8 containers. Responsibility for addressing the issue of safely storing pits has been decentralized, with the involvement of various DOE organizations and contractor-managed laboratories and sites. 
While DOE officials have told us they are developing a plan for repackaging pits, there is currently only a preliminary draft, and it is too early to determine if the plan will adequately address the outstanding issues. However, at this time, certain key elements are not addressed, including comprehensive cost estimates and program budgeting; a clear delineation of program responsibility and accountability; and schedules for repackaging and storage and a system for tracking progress in meeting these schedules. Finally, the Department has not thoroughly analyzed or resolved the concerns raised by its own laboratories and the Defense Nuclear Facilities Safety Board about monitoring the safety of pits while they remain in unsuitable AL-R8 containers. Although DOE did not conduct analyses and therefore had no estimate of the costs and benefits of an enhanced monitoring program, the Department nonetheless decided not to implement such a program. However, even under optimal circumstances, it will be many years before DOE can repackage all of its plutonium pits into safer containers, and therefore pits will continue to be stored in the unsuitable AL-R8 containers well past the time recommended by the laboratories to begin aggressive monitoring. Furthermore, the history of delays in DOE’s program for repackaging pits lends added significance to the need for ensuring their safety while they continue to be stored in AL-R8 containers. We recommend that the Secretary of Energy ensure the timely and cost-effective resolution of the wide range of issues surrounding pit storage, including ensuring that the plan being developed by the Department addresses such key items as a clear definition of responsibility and accountability for program activities; realistic cost estimates and a program budget; and detailed schedules for designing and developing replacement containers and repackaging the pits, as well as a means to track progress against these schedules. 
In addition, given the length of time pits will be stored in unsuitable containers, we recommend that the Secretary, in cooperation with the DOE laboratories and the Defense Nuclear Facilities Safety Board, conduct a thorough safety analysis of the recommended enhanced pit monitoring program as well as other possible monitoring options to identify the most appropriate and cost-effective approach to ensure the specified safety concerns about the prolonged storage of pits in the unsuitable containers are resolved. In its comments on our draft report, DOE concurred with all but one part of one of our recommendations. The Department concurred with our recommendation for the timely and cost-effective resolution of the issues surrounding pit storage and agreed to include the recommended key items in its Integrated Pit Storage Program Plan, which it expects to issue in April 1998. In addition, the Department concurred with the portion of our recommendation calling for the Secretary to work closely with the DOE laboratories and the Defense Nuclear Facilities Safety Board to identify the most appropriate and cost-effective approach to address their concerns about the prolonged storage of pits in unsuitable containers. The Department stated that it has worked with the laboratories and the Board in the past to address concerns about storage activities at Pantex and will continue to do so. In contrast, DOE raised concerns about our recommendation that the Department conduct a safety analysis of the enhanced pit monitoring program as well as other possible monitoring options, stating the Department has “approved safety analyses for operations at the Pantex Plant, which provide coverage for pit storage activities.” The Department further requested that we clarify our basis for this recommendation. 
Our review of DOE’s safety analyses for Pantex’s operations revealed that these analyses were conducted before the DOE laboratories and the Defense Nuclear Facilities Safety Board identified the safety problems of pits in prolonged storage in AL-R8 containers and the resultant need for increased monitoring. Therefore, these specific issues were not addressed in DOE’s safety analyses. While DOE’s analyses considered the AL-R8s as the baseline containers for storing pits, they did not include a detailed evaluation showing that these containers were safe for extended storage. Therefore, we continue to recommend, in light of the prolonged storage of pits in the AL-R8 containers and the fact that safety concerns about these pits were not addressed in DOE’s safety analyses, that the Secretary conduct a thorough safety analysis of the Department’s pit monitoring options, including the enhanced monitoring program recommended by the laboratories, to ensure that the specific concerns raised are resolved. In addition, the Department raised a general concern that our report “does not present complete and accurate information about many important DOE initiatives to meet the challenges for managing plutonium in an environmentally safe and reliable manner which protects workers as well as the general population.” We disagree. Our report describes initiatives that the Department raised in its comments—the disposition program for excess plutonium and that program’s implications for plutonium storage; the revisions to Los Alamos National Laboratory’s plutonium stabilization program; and the development of a plan for repackaging the pits out of the AL-R8 containers, expected to be issued in April 1998. On the basis of DOE’s comments, we updated information on these initiatives and added information on additional pit surveillance activities to the report’s discussion of pit monitoring issues. 
Furthermore, the Department’s comments on our report discussed an initiative to revise its implementation plan for plutonium stabilization to integrate nuclear materials management activities complexwide. This initiative was not included in our report because Department officials did not mention it in our meetings with them in February; the Department’s comments on this report were the first indication that such an initiative was formally under way. In its comments, the Department noted that the final disposition plans for surplus plutonium and ongoing nonproliferation initiatives (i.e., bilateral and trilateral inspection agreements) are examples of the types of issues that have made it difficult to develop storage containers for pits. While we recognize that there are many outside factors that have affected and will continue to affect DOE’s management of its pits, we do not believe that these factors should have prevented the Department from resolving its pit storage problems. We note that despite the factors cited, the Department invested 5 years and nearly $50 million to develop a replacement container for the AL-R8, although this replacement container was ultimately determined to be too expensive. The following are GAO’s comments on the Department of Energy’s letter dated March 18, 1998. 1. To address the Department’s comment concerning the transfer of national defense missions to Los Alamos, we added the following to our report: “In commenting on our draft report, the Department clarified that as DOE reduces the overall size of its weapons complex, missions and programs considered still vital to national defense are being relocated and consolidated at the Department’s remaining operational sites. Los Alamos has become the new site for some of these relocated missions and programs.” The remainder of this comment provides information on the Department’s redefinition of the scope of plutonium remediation efforts at Los Alamos. 
However, this information generally supports rather than contradicts the information contained in our report that competing priorities between national defense work and other activities have caused delays in Los Alamos’ ability to meet its implementation plan milestone to have its plutonium stabilized and packaged for long-term storage. Therefore, we made no additional changes to the report. 2. The Department has neither defined “short-term storage” nor provided evidence that the AL-R8 containers are safe for any length of storage. However, to address this comment, we limited our use of the term “unsuitable” to discussions of the use of AL-R8 containers for extended or prolonged storage of pits. We also added footnote 6 in the “Executive Summary,” which states that, “According to DOE and laboratory officials, some pits are more susceptible to corrosion than others, depending on the metal used to encase the pit.” 3. To respond to this comment concerning the Department’s formal pit monitoring efforts and other inspections of its pits, we revised the report to read that 30 pits are “formally” monitored per year. Furthermore, on the basis of this comment and additional information provided by the Department, we added footnotes to the report that provide additional information on visual inspections of pits transferred from Rocky Flats to Pantex. In its comments, DOE states that, in addition to the formal monitoring effort, “several more are handled regularly in other routine activities” and that these activities “require visual checks and radiation swipes which would detect the concerns referred to in this report.” However, since these statements were not supported by the information provided by the Department, we did not revise the report. The information provided supported only that additional pits have been visually inspected on specific occasions, but did not support a systematic program of visual inspection. 
The additional visual inspections cited were due to extraordinary events (such as the pits’ transfer to Pantex from Rocky Flats); they were not presented as a regular occurrence or as a planned addition to the formal monitoring program. We note that visual inspections are much less extensive than the testing and analysis performed as part of the formal monitoring program, and the officials did not provide information that these inspections would be able to detect the problems cited. Therefore, we do not believe that the visual inspections that have been conducted take the place of formal monitoring or negate our recommendation that the Department carefully analyze the need for enhanced monitoring to resolve safety concerns raised by the laboratories and the Defense Nuclear Facilities Safety Board. 4. The Department’s development of a repackaging plan for the pits at Pantex is discussed in detail in chapter 3 of our report. This discussion includes the selection of a replacement container, the time frame for repackaging, and the Department’s preliminary cost estimates. However, we revised the report to include the range “from 4 to 7 years” for repackaging the pits. Furthermore, on the basis of additional information provided by the Department, we revised the report to reflect DOE’s most recent preliminary cost estimates for repackaging 12,000 pits and included a footnote to reflect that these estimates are in fiscal year 1998 dollars and exclude Pantex’s overhead costs. 5. We address this comment regarding external factors influencing pit storage activities under the heading “Agency Comments and Our Evaluation” at the end of chapter 3. 6. Although we requested documentation or other information to support the comment that “longer than previously expected processing times . . . have also contributed to delays in meeting implementation plan milestones,” the additional information provided by the Department did not do so. Therefore, no revision was made to the report. 
7. See comment 1 above for information added to the report to address DOE’s comment concerning the relocation of programs from other sites in the DOE weapons complex to Los Alamos. We also revised the report to reflect the Department’s position that remediation efforts will continue at Los Alamos, and that the Department is reviewing proposals for additional personnel and equipment. 8. We did not revise the report to reflect DOE’s comment on the distinction between the 1995 recommendation by the laboratories and the 1997 “Pit Storage Specification.” As discussed in our report, the 1995 recommendation by the laboratories concerns the need for increased monitoring of the pits while they remain in the existing AL-R8 containers. The 1997 “Pit Storage Specification” will apply to pits as they are repackaged out of the AL-R8 containers. According to DOE and laboratory officials, pits currently packaged in AL-R8 containers cannot comply with this specification. Furthermore, the 1997 specification does not negate the laboratories’ 1995 recommendation for increased pit monitoring. Rather, the specification states, “Increased sampling may be required if . . . aspects of this specification are not met”—which is exactly the case while the pits remain in AL-R8 containers. 9. The development of the AT400A container is discussed in our report. No changes were made to the report because the Department did not provide support for the suggestion that the work on the AT400A was the reason the AL-R8 was not viewed as an extended storage container. 10. 
On the basis of additional information provided by the Department, we revised the footnote to describe the 1997 recertification of the FL containers for transportation, revised the number of FL containers in service at this time, and included two additional reasons the Department provided for not using these containers for long-term storage: they are very expensive, at approximately $10,000 per container, and they were designed for transporting pits, not storing them. 11. We did not revise the footnote concerning the pit that cracked because the Department did not provide additional support for the statement that this occurred “due to extreme conditions experienced during the disassembly process, which far exceed storage conditions.” 12. These general comments concerning the integration of stabilization activities with other departmental activities and the revision of the implementation plan are addressed under the heading “Agency Comments and Our Evaluation” at the end of chapter 2. 13. This comment concerning the stabilization work that may be done by May 2002 is addressed in the “Agency Comments and Our Evaluation” section of chapter 2. 14. We added a footnote to the report quoting DOE’s point that “It must be acknowledged that even after stabilization and packaging, some small level of risk remains associated with handling and storage of plutonium materials.” In addition, we revised the report to clarify that delays result in continuing the existing level of risk to workers’ health and safety by delaying the risk reduction that is achieved by stabilization and packaging activities. 15. We revised the report to clarify that the interim storage criteria do not provide the level of safety afforded by DOE Standard 3013 and explicitly attributed the comments about the continuing risk to workers’ health and safety to DOE site officials and a Defense Nuclear Facilities Safety Board staff member. 16. 
We did not revise the report because the Department did not provide support for the statement that the “safety significance” of the delay at Hanford is “manageable.” 17. To reflect this information about future funding at Hanford, we added the following to the report: “In commenting on a draft of this report, the Department stated that questions remain about how plutonium stabilization work will be prioritized by the site. The Department believes that if the risk is determined to be high enough, funds will be provided.” 18. We added a footnote to the report that reads, “Some low-risk residues with low plutonium content do not have to be converted through either technology as they can be disposed of in the Waste Isolation Pilot Plant when it becomes available.” In addition, we revised a later footnote to reflect this information. 19. We revised the report to read, “DOE is currently assessing the possible environmental impacts of several likely sites where plutonium disposition activities may take place and plans to have a final decision in late 1998 or early 1999.” We further revised the report to indicate that there are technical, institutional, and cost uncertainties and that the uncertainties cited are examples, not an all-inclusive list. 20. The Department comments that a more recent draft of the disposition criteria, dated December 1997, has been issued. However, we were not able to include the details of this draft because it was not available at the time of our review. 21. We revised the report to remove the issue of stabilization temperatures because the Office of Fissile Materials Disposition is planning to address this issue through an additional processing step. However, there are still issues to be resolved regarding differences between the draft disposition criteria and the current standard for plutonium storage. 
Therefore, on the basis of DOE’s comments, we included examples of the differences between the information that would be required by the draft disposition criteria and the information currently required by DOE Standard 3013 for long-term storage. Finally, we did not revise the report to address the effort by the Office of Environmental Management to develop a new standard, as this initiative is still in its preliminary stages. 22. The statements about delays in meeting implementation plan milestones are not our conclusions, but the comments of the Defense Nuclear Facilities Safety Board (and are cited as such in the report). Therefore, we did not revise the report. 23. On the basis of further discussions with DOE officials, we revised the report to read, “Workers at Department of Energy (DOE) facilities must be protected from plutonium because exposure to small quantities is dangerous to human health, and if not safely contained and managed, plutonium can be unstable and can even spontaneously ignite under certain conditions.” 24. In further discussions with DOE officials, they stated that these comments were informational and that the Department did not require any change to the report. 25. We revised the report on the basis of a further discussion with DOE officials concerning the dangers of plutonium. During this discussion, the officials agreed that the Department’s concerns would be addressed if we deleted the word “extremely” from the report. 26. We revised the report as suggested, changing the referenced date to “1997.” Lisa P. Gardner Ronald J. Guthrie Christopher M. Pacheco Victor S. Rezendes John H. Skeen, III Pamela J. Timmerman 
Pursuant to a congressional request, GAO reviewed the status of the Department of Energy's (DOE) efforts to stabilize, package and store its plutonium, focusing on: (1) plutonium that is not in the form of nuclear weapons components, or pits; and (2) plutonium in pits. GAO noted that: (1) although DOE has made some progress in stabilizing its plutonium, it is unlikely to meet its May 2002 target date to have its plutonium that is not in its pits stabilized, packaged, and stored; (2) DOE sites with the majority of this plutonium have experienced many delays and anticipate more in meeting their implementation plan milestones; (3) various problems contribute to these delays, including: (a) changes from the technologies originally chosen to stabilize plutonium residues at Rocky Flats to meet a security requirement; (b) a suspension of plutonium stabilization operations because of safety infractions at Hanford; (c) competing priorities for funding, staff, and equipment at Los Alamos; and (d) delays in obtaining a system for stabilizing and packaging plutonium at three sites; (4) given the inherent dangers of plutonium, such delays result in continuing the existing level of risk to workers' health and safety by delaying the risk reduction that is achieved by stabilization and packaging activities; (5) moreover, because DOE has not yet finalized the criteria the plutonium must meet to be acceptable for the disposition technologies, it is unclear if current activities to stabilize, package, and store the plutonium will be compatible with the means of converting it for disposal; (6) in addition to its delays in stabilizing and packaging its plutonium that is not in pits, DOE is currently storing approximately 10,000 pits in containers that both DOE and the Defense Nuclear Facilities Safety Board believe are not suitable for extended storage, thus risking workers' exposure to plutonium; (7) DOE is preparing a plan, which it intends to issue in April 1998, to develop new 
containers and repackage the remaining 95 percent of the pits; (8) without conducting an analysis of the costs or benefits of the laboratories' recommendation for increased monitoring, DOE decided not to change its existing monitoring program, which formally examines about 30 pits per year; and (9) DOE hopes that it can repackage the pits before enhanced monitoring is necessary.
The CCDF is a key federal funding source to states for providing child care subsidies to low-income parents so they can work, look for jobs, or participate in education and training activities. States also use CCDF funds to improve the quality and availability of child care for all families through activities such as providing training to providers or targeting funds to increase the supply of limited types of care, such as for infants. HHS’s Office of Child Care (OCC) administers the CCDF at the federal level and provides guidance and technical assistance to states on how to operate their subsidy programs. Federal law includes some broad eligibility requirements; for example, children must generally be under the age of 13 and reside with families whose income is at or below 85 percent of state median income (SMI). Within these broad federal requirements, states have substantial flexibility in establishing more restrictive eligibility criteria and other child care subsidy policies, and these policies affect how resources for child care subsidies are allocated in a state. For example, the CCDBG Act requires states to give priority to very low-income families and children with special needs for subsidy receipt, but states may define “very low-income” and “special needs child” and determine how to prioritize these groups. Some states may establish priority groups to serve such families before other types of families, and other states may use different methods, such as making it easier for these families to access care by waiving co-payments or paying higher reimbursements to providers for serving such families. States also may set any maximum family income eligibility limit at or below 85 percent of SMI for families of the same size. 
Other policies that may affect access to child care subsidies include whether income from all members in a household is counted toward the income limit states set for CCDF and the amount families have to pay for a portion of their child care as a condition of receiving a subsidy. Federal law and regulations establish state reporting requirements for administering CCDF funds. Under these requirements, states must report their CCDF expenditures (e.g., direct and non-direct services, quality improvement activities, administrative expenses). States must also report data on the number of children served via CCDF subsidies, submitting these data to HHS monthly (or quarterly) as well as annually. HHS uses these data to estimate the average monthly number of children served via CCDF subsidies nationally and state-by-state (see table 1). CCDF is not an entitlement program, which means that states are not required to serve all eligible families who apply for CCDF subsidies; thus some eligible families who do apply for subsidies may not receive them. This can occur when a state does not have sufficient funds to serve all eligible applicants. Such states may wait list applicants or place limits on when families can apply for the program or which families receive subsidies. In early 2015, 30 states reported that they served all eligible applicants who applied for CCDF subsidies and 21 reported that they wait listed applicants or stopped taking applications when they could no longer serve new clients. Families who qualify for, but do not receive, CCDF subsidies could still receive public assistance with their child care through other federal or state programs such as Temporary Assistance for Needy Families (TANF) (accessed directly through the TANF program), Head Start, the Social Services Block Grant (SSBG), or a state’s pre-kindergarten program. In November 2014, Congress passed the Child Care and Development Block Grant Act of 2014. 
This Act included several new provisions for the program, including: (1) permitting children to remain in the program for at least 12 months as long as their family’s income does not exceed 85 percent of SMI; (2) at a state’s option, terminating assistance when a parent experiences a non-temporary job loss (or cessation of education or training), but only after continuing assistance for at least 3 months in order for parents to look for work; and (3) requiring states to implement a graduated phase-out of assistance if, at re-determination, the family’s income exceeds state eligibility limits, but is still under 85 percent of SMI. The estimated population eligible for and receiving child care subsidies varied among states. According to GAO’s analysis, out of the estimated 14.2 million children under age 13 nationwide who met federal work and income requirements for subsidies in an average month in 2011 and 2012, an estimated 8.6 million were eligible according to the eligibility policies in their states, and about 1.5 million received them. Children who received child care subsidies differed across a variety of characteristics when compared to the eligible population in their state. In particular, subsidy-recipient children were more often age 2 to 4, in very low-income families, Black, and in single-headed households than the overall population of eligible children in their state. The number of children in families receiving subsidies does not equate to the population of eligible children in families who are interested in pursuing subsidies or who need them, which can be difficult to predict. States have the flexibility to establish specific eligibility policies within broad federal eligibility requirements, and generally fewer families qualify for subsidies after state policies are applied and even fewer receive them. 
According to our analysis, an estimated 14.2 million children were in families who met federal CCDF eligibility requirements for child care subsidies in an average month over calendar years 2011 and 2012. When state eligibility policies were applied to this population, an estimated 8.6 million (about 61 percent of the population meeting federal requirements) were eligible. Moreover, of the 14.2 million children meeting federal requirements, 1.5 million (11 percent) received subsidies (see fig. 1). The extent to which children meeting federal requirements also met the eligibility policies specific to their states varied widely by state. Figure 1 shows, for example, that in three states (Iowa, Nebraska, and Maryland) under 40 percent of children meeting federal requirements were estimated to be eligible based on their state’s policies; and in three others (Maine, New Mexico, and the District of Columbia), all or nearly all such children were estimated to be eligible. States such as Kansas, New Hampshire, and Pennsylvania were closer to the national percentage, with about 60 percent of children who met federal requirements meeting their state eligibility policies. The dark portions of the state bars in figure 1 also show that fewer than 25 percent of the estimated children meeting federal eligibility requirements received subsidies in an average month in any state. Subsidy receipt ranged from a high of approximately 21 percent of estimated children meeting federal eligibility requirements in New Mexico to a low of approximately 5 percent in Nevada. States’ policies include a range of eligibility criteria related to income, employment, and educational activities, among other criteria, and it is the interplay of these various policies that influences the size of the eligible population in a given state. 
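The relationship among these national estimates is simple arithmetic: each stage of the eligibility funnel is a share of the 14.2 million children meeting federal requirements. The following sketch is illustrative only; it is not GAO's analysis code, and the figures are the national monthly-average estimates cited above.

```python
# Illustrative sketch, not GAO's analysis code. Figures are the national
# monthly-average estimates cited in the text for 2011-2012.

federally_eligible = 14.2e6  # children meeting federal CCDF requirements
state_eligible = 8.6e6       # subset also eligible under their state's policies
receiving = 1.5e6            # subset receiving subsidies

share_state_eligible = state_eligible / federally_eligible
share_receiving = receiving / federally_eligible

print(f"Eligible under state policies: {share_state_eligible:.0%}")  # about 61%
print(f"Received subsidies: {share_receiving:.0%}")                  # about 11%
```

Note that the two percentages are both computed against the federally eligible population; measured against the 8.6 million state-eligible children instead, subsidy receipt would be roughly 17 percent.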
Income: Each state sets income limits requiring families to have income below a certain threshold in order to be eligible for child care subsidies. Across states, the initial eligibility threshold for a three-person family ranged from 121 percent of the 2014 federal poverty guidelines to 298 percent of this level. Table 2 shows where states fall within that range. States that used wait lists to manage demand for subsidies tended to have higher income limits. Specifically, nearly two-thirds of states that used wait lists had income limits at or above 175 percent of the poverty guidelines. For states with the lowest income limits, wait lists were less common. Specifically, all but one state that limited eligibility to families with incomes below 150 percent of the federal poverty guidelines served all eligible families instead of using wait lists as of early 2015. Employment: Policies related to employment may include whether a state specifies a minimum number of hours worked or whether searching for a job is a qualifying activity. For example, based on our analysis of the CCDF Policies Database Book of Tables, 31 states did not consider searching for a job to be a qualifying activity for initial eligibility as of October 2014, while 9 states allowed it for up to a month, and 11 allowed it for more than a month. For those who qualify based on employment, some states specify that parents or guardians must work a minimum number of hours per week or per month. Among the states with such a requirement, some specified a minimum of 15 hours per week, and others specified up to 30 hours per week (see table 3). Education: Based on our analysis of the CCDF Policies Database Book of Tables, most states allow secondary or postsecondary education as qualifying activities. 
However, states may have additional requirements, such as requiring parents to work while pursuing education in order to qualify for subsidies, placing limits on how long education can be used as a qualifying activity, or requiring that student-parents maintain a certain grade average. In some cases, states have established multiple policies regarding education as a qualifying activity. For example, in Kansas, postsecondary students must both maintain a GPA of 2.0 and work a minimum of 15 hours per week to remain eligible for child care assistance. In Illinois, high school students generally must maintain a C average and, beginning in the 25th month of participation in the child care subsidy program, must also work 20 hours or more per week. According to our analysis, of the 8.6 million children estimated to be eligible for subsidies in an average month in years 2011-2012, about 1.5 million of them received subsidies. Our analysis of HHS and TRIM3 data shows that children whose families receive child care subsidies have different levels of family income and other characteristics compared to the population of children whose families are potentially eligible for subsidies based on the policies in their states. These differences include lower levels of family income, younger ages of children, and differences in family structure and racial composition. According to state child care officials, a difference such as lower family income may in part reflect state policies that target limited subsidies to this priority population. According to stakeholders, a difference such as lower levels of subsidy receipt among Hispanics may reflect barriers to accessing the program. Subsidy recipients are substantially poorer than the overall population of children eligible for subsidies, according to our analysis. 
Nationwide, children who lived in families with incomes below 100 percent of the federal poverty guidelines were over-represented among subsidy recipients by an estimate of nearly 15 percentage points when compared to eligible children who lived in families with incomes at this same level. In contrast, children in families with incomes between 100 percent and 149 percent of the poverty guidelines were under-represented among subsidy recipients by an estimated 3.4 percentage points when compared to eligible children of the same income level, and children in families with incomes at 150 percent of the poverty guidelines or greater were under-represented among subsidy recipients by an estimated 7.7 percentage points (see fig. 2). Figure 3 shows state-by-state whether subsidy recipients with incomes below 100 percent of the federal poverty guidelines were over- or under-represented when compared to children eligible for subsidies in their states. Statistically significant results were available for 38 out of 50 states that had reliable data. In 35 of these states, subsidy recipients were poorer when compared to eligible children and, in 3 states, subsidy recipients tended to have higher incomes than eligible children. The differences had a wide range. In one state, New Hampshire, an estimated 38 percentage point difference exists because an estimated 55.8 percent of subsidy recipients were below the poverty guidelines, compared to an estimated 17.5 percent of eligible children. In another, Indiana, an estimated 57 percent of subsidy recipients were below the poverty guidelines, compared to an estimated 76 percent of eligible children, resulting in a negative estimated 19 percentage point difference. These differences may reflect, in part, states targeting subsidies to the lowest income families when states are unable to serve all eligible families that apply. 
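Throughout this section, over- and under-representation is measured as the percentage-point difference between a group's share of subsidy recipients and its share of the eligible population. A minimal sketch of that calculation (illustrative only; the function name is ours, and the inputs are the New Hampshire and Indiana estimates cited above):

```python
# Illustrative only: the report's over-/under-representation measure is
# the percentage-point difference between a group's share of subsidy
# recipients and its share of the state's eligible population.

def representation_gap(recipient_share_pct: float, eligible_share_pct: float) -> float:
    """Positive values mean the group is over-represented among recipients."""
    return recipient_share_pct - eligible_share_pct

# New Hampshire: share of recipients vs. eligible children below poverty
print(representation_gap(55.8, 17.5))  # about +38 percentage points

# Indiana: share of recipients vs. eligible children below poverty
print(representation_gap(57.0, 76.0))  # about -19 percentage points
```

A positive gap (as in New Hampshire) indicates over-representation among recipients; a negative gap (as in Indiana) indicates under-representation.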
Several state officials participating in our group interviews, for example, told us that they target their programs to those most in need, including those with the lowest income. Moreover, in its Issue Brief on eligibility and receipt of CCDF subsidies, HHS suggests that states target subsidies to families with the lowest incomes. Subsidy recipients tended to be younger than the overall population of eligible children, according to our analysis. Nationwide, subsidy recipients age 2 to 4 years old were over-represented by an estimated 17 percentage points when compared to eligible children of the same age. Subsidy recipients under age 2 were also over-represented, but by an estimate of less than 2 percentage points. In contrast, older children were under-represented among subsidy recipients by an estimated 18 percentage points when compared to eligible children of the same age (see fig. 4). The national pattern of over-representation of children age 2 to 4 within subsidy recipients is consistent across all states with reliable data, according to our analysis. In all 48 states with reliable data, children age 2 to 4 were over-represented in the subsidy population, and in 36 states by more than an estimated 15 percentage points (see fig. 5). This over-representation may reflect a greater need for non-parental care among families with children age 2 to 4 because they are not yet in elementary school, which provides many hours of care each week. Subsidy recipients tended to live in single-parent households to a greater extent than the overall population of eligible children, according to our analysis. Nationwide, subsidy recipients who lived in single-parent households were over-represented by an estimated 14 percentage points when compared to eligible children in similar households. In contrast, subsidy recipients who lived in two-parent households were under-represented by an estimated 14 percentage points when compared to eligible children in similar households. 
An estimate of nearly 4 percent of subsidy recipients and eligible children were child-only units (see fig. 6). The national pattern of over-representation of subsidy recipients in single-parent families is consistent across most states, according to our analysis. For example, in 33 of the 50 states with reliable data, children in single-parent families were over-represented among subsidy recipients when compared to the population of eligible children, in one state they were under-represented, and in the remaining 16 states the results were not statistically significant. The differences have a wide range, but exceed an estimated 20 percentage points in 14 states. At one end of the range, in Wyoming, an estimated 43 percentage point difference exists because an estimated 99 percent of subsidy recipients lived in single-parent families, compared to an estimated 56 percent of eligible children (see fig. 7). One reason single-parent households may be over-represented is that our analysis showed that such households tended to have lower incomes than two-parent households and, as told to us by some state child care officials and reported by HHS, states often try to target their subsidy programs toward families with the lowest incomes. Nationwide, our analysis showed subsidy recipients were more frequently Black, and less frequently of other racial or ethnic groups, when compared to the population of children eligible for subsidies. Black children were over-represented among subsidy recipients by an estimated 17 percentage points when compared to eligible Black children. In contrast, Hispanic children were under-represented to a large degree among subsidy recipients (an estimated 15 percentage points when compared to eligible Hispanic children) and White children were slightly under-represented (an estimated 2 percentage points when compared to eligible White children) (see fig. 8). 
The national pattern of Black children being over-represented among subsidy recipients was true for most states. Statistically significant results were available for 31 of the 44 states that had reliable data, and in all of them the proportion of subsidy recipients that were Black was higher than the proportion of the overall population of eligible children that were Black. Moreover, in about half of these states (16 of the 31) a difference of an estimated 15 percentage points or more existed. In Tennessee, for example, an estimated 71 percent of subsidized children were Black, compared to an estimated 32 percent of all eligible children, a difference of an estimated 39 percentage points (see fig. 9). Nationally, Black children who received subsidies more often lived in single-parent households and in families with very low income than White and Hispanic children, according to our analysis of HHS and TRIM3 data. This may partially explain why Black children are over-represented among subsidy recipients. The national pattern of Hispanic children being under-represented among subsidy recipients was true in many states, according to our analysis. Statistically significant results were available for 23 out of 44 states that had reliable data, and Hispanic children were under-represented among subsidy recipients in all but 1 of these 23 states. In 11 of them, the difference was about an estimated 10 percentage points or higher. In the 1 state where Hispanic children were over-represented among subsidy recipients, New Mexico, Hispanic children made up an estimated 75 percent of subsidy recipients and an estimated 63 percent of eligible children (see fig. 10). The lower level of subsidy receipt among Hispanics in many states may reflect differing preferences for child care or barriers to accessing child care subsidies, or both. Some Hispanic families may be getting their child care needs met through other means, such as Head Start and universal pre-school programs.
Stakeholders said that families without immigration documentation may have concerns if they encounter application forms that request Social Security Numbers or live in states that give priority to families receiving TANF because this program can require verification of immigration status. Finally, navigating the complexity of eligibility requirements may be particularly difficult for families with limited English proficiency, which we and others have also highlighted in previous reports. The national pattern of White children being under-represented among subsidy recipients was true in some, but not all, states. In 26 of the 42 states that had reliable data, White children were neither over- nor under-represented; the differences were not statistically significant. Statistically significant results were available for the other 16 of the 42 states that had reliable data. In 10 of these 16 states, White children were under-represented among subsidy recipients when compared to the eligible children in their states, most commonly by more than an estimated 10 percentage points. In the other 6 states, the reverse was true, with White children being over-represented by more than an estimated 10 percentage points (see fig. 11). The number of children in families receiving subsidies does not equate to the population of eligible children in families who are interested in pursuing subsidies or who need them. Various officials and stakeholders told us it is difficult to accurately predict the extent to which families with eligible children are likely to apply for and receive subsidies. This is in part because several factors influence families’ child care decisions that can make it difficult or unappealing to pursue subsidies, and also because available indicators of need are imprecise. Factors affecting families’ decisions include information, program policies, and supply issues, according to stakeholders and officials we spoke with, in addition to some relevant studies we reviewed.
Information: Lack of awareness of the subsidies, misinformation about eligibility criteria, and perceptions that subsidies are in limited supply could prevent families from applying.

Program policies and procedures: Some families may be unable or unwilling to manage the administrative burden associated with applying for the subsidy, which could include in-person meetings, dealing with multiple benefit systems such as TANF, and other bureaucratic procedures. Moreover, a family’s eligibility status may have changed after a potentially lengthy approval process. In addition, the amount of the co-payment or the low amount of the subsidy may be a deterrent. HHS officials noted that these types of state policies can affect demand for child care subsidies because the more challenging it is to receive and maintain a subsidy, the less likely families are to want one.

Supply issues: The availability of appropriate care options is interrelated with subsidy demand in that some families may not apply if they know there are no viable child care options in their area that accept subsidies or that can care for children with special circumstances (e.g., behavioral or developmental needs, non-English speakers).

In addition to the complex interplay of factors influencing families’ child care decisions, there are various indicators officials use to help estimate how many families are likely to receive subsidies. At a national level, HHS monitors need for child care subsidies in part by calculating coverage rates. Coverage rates estimate the extent to which children who may meet eligibility criteria actually receive subsidies. HHS estimated this rate at 15 percent of children meeting federal eligibility requirements in fiscal year 2012 (the most current year available).
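The coverage-rate calculation described above reduces to a simple ratio. The sketch below is illustrative only: the function name and the recipient and eligibility counts are hypothetical figures chosen to reproduce the roughly 15 percent rate HHS estimated for fiscal year 2012, not HHS's actual data.

```python
def coverage_rate(recipients: int, eligible: int) -> float:
    """Share of eligible children who actually receive subsidies."""
    return recipients / eligible

# Hypothetical counts: 1.5 million average monthly recipients out of
# 10 million children meeting eligibility criteria yields 15 percent,
# matching the magnitude of HHS's fiscal year 2012 estimate.
rate = coverage_rate(1_500_000, 10_000_000)
print(f"{rate:.0%}")  # 15%
```

Because both the recipient count and the eligibility estimate are themselves modeled quantities, the resulting rate inherits their uncertainty, which is why the report treats it as one indicator of need rather than a precise measure.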
While HHS’s analyses provide some information about the need for child care subsidies, they rely on eligibility estimates, and the extent to which eligible families are actually interested in or would apply for subsidies is unknown. State child care officials we spoke with said that various other indicators can demonstrate whether families need child care subsidies, such as waiting lists, Census data, and surveys. Many state officials (representing 18 of 32 states in our group interviews) said they use their own program data to help assess need. This could include looking at funding allocations and the number of applicants, including those placed on wait lists. In addition, officials representing 10 states told us they used Census data, officials from 7 states said they conducted needs assessments, and officials from 6 states worked with other programs to determine the extent of program need. The latter might include sharing data with partner entities (like child care referral agencies and schools), either formally or informally. Based on their assessments and experience, numerous stakeholders, federal officials, and state officials we spoke with believe unmet need exists. In particular, officials in 26 out of the 32 states that participated in our group interviews told us that there are likely more potentially eligible families than these states can serve. These included officials from 6 of the 12 states we spoke with where all applicants determined eligible received subsidies. This is consistent with the observation that there are an unknown number of potentially eligible families who never apply. Several officials and stakeholders we spoke with cautioned that some of the methods used to estimate need are imperfect indicators. For example, representatives of one large state noted that Census figures and other demographic data can only provide rough estimates and do not reflect the full extent of need.
Officials in two other states said these types of data do not include information about key eligibility characteristics or are out of date. Several state officials and stakeholders we spoke with also expressed concerns about using wait lists to measure need for child care subsidies. They noted wait lists can be inaccurate and out of date. Another drawback to using wait lists to assess demand is that they only reflect the population of potentially eligible beneficiaries who actually follow through with their applications, and therefore such lists could underestimate need. On the other hand, wait lists can artificially inflate estimates of need in cases where families’ eligibility status changes or they remain on the wait list even after they have found other child care arrangements. While the nuances of family decisions and the lack of comprehensive indicators can make it difficult for program officials to more accurately predict subsidy need, the CCDBG Act of 2014 places related planning requirements on states. It requires states, as part of their state plans, to develop strategies to increase the supply of child care services for certain groups, such as children with disabilities, children in under-served areas, infants and toddlers, and children who receive care during nontraditional hours. In its 2015 guidance about how to do this, HHS encourages states to use data to assess need and identify supply shortages, for example by assessing where low-income families live and where high-quality child care providers are located. This would provide information about the gap between the existing supply of child care and the population that is likely to need access to this care. The guidance also suggests that states leverage existing data from market rate surveys, referral agencies, and other agencies that conduct needs assessments, such as the Maternal and Infant Early Childhood Home Visitation Program and Head Start.
HHS advises states to consider the unmet needs that are most pressing in a particular area and, accordingly, create an appropriate strategy for building supply. Wait lists are a common way that states manage their caseloads when more families qualify for subsidies than states can fund. Of the 33 states that had wait list policies at the end of 2014, 19 used wait lists at the start of 2015 and the others did not, according to the National Women’s Law Center (NWLC). The results of our group interviews showed that states manage wait lists in a variety of ways. For example:

Number of lists: Of the 19 states that used wait lists in 2015, 10 states had a single statewide wait list and the remaining 9 states had more than one list at a sub-state level, according to state child care officials who participated in our group interviews. With the exception of New York and California, all wait list states can report the number of children on their wait lists statewide. State child care officials from states with multiple wait lists also said that these states can have a mix of statewide and sub-state policies regarding how to develop and manage their lists.

Eligibility determination: Some states require full eligibility determinations prior to placing children on wait lists for subsidies and other states do not have such a requirement. Child care officials in 5 of the 19 states that used wait lists in 2015 told us, for example, that their states require full eligibility determinations prior to wait listing children, whereas officials in 11 other states said that they rely solely on either self-reported information or partial eligibility screenings prior to wait listing children.

Periodic review: States also varied in whether and how often they require agencies to review their wait lists. Many states require reviews at set intervals to ensure that those on the list continue to be eligible for and want subsidies, and others do not require these reviews.
According to state child care officials, reviews tend to consist of sending letters to families asking if they want to remain on the list. Of the 19 wait list states that participated in our group interviews, officials from 13 mentioned that their states require periodic reviews of wait lists and 6 others said that periodic reviews are either not required or left to the discretion of the local entities administering the subsidy programs in their states. HHS officials and state child care officials knowledgeable about wait lists told us that wait lists can be a valuable tool for managing caseloads and expenditures. Child care officials from 14 states that participated in our group interviews mentioned, for example, that one benefit of wait lists is that they track which children are eligible or likely to be eligible for a subsidy and who is next in line to receive one. Such tracking is important because states’ priority policies can be complex and, according to HHS guidance and a child care subsidy stakeholder, states often prioritize the types of children eligible for subsidies to ensure that they serve those most in need. Two officials told us, for example, that their wait lists help manage caseloads by ensuring that families with higher priorities for service (such as those most in need) are served before families with lower priorities. Child care officials in 10 states also mentioned that these lists provide insight into where additional funds may be needed in their states. Two officials mentioned, for example, that their lists help them determine whether funds should be reallocated from one county to another. Few families on wait lists may end up receiving subsidies, according to state child care officials and other child care stakeholders knowledgeable about wait lists. This may occur when wait listed families experience changes in circumstances that cause them to no longer want subsidies or that make them ineligible.
State officials from 15 wait list states who participated in our group interviews characterized the number of wait listed families that eventually received subsidies as small, or estimated that no more than 50 percent of wait listed families received subsidies. One official told us that historically about 17 to 20 percent of families on her state’s wait list apply for subsidies in response to notifications they receive that funds are available. At the other end of the range, one state official said that most wait listed families in her state eventually receive subsidies, and she attributed this to her state adequately funding child care subsidies. Wait lists can be challenging to administer. Because few families placed on wait lists may receive subsidies, it can be difficult to manage wait lists in a way that is neither overly resource- and time-intensive for the agency administering the subsidy program nor overly burdensome for families who want to remain on the list. State child care officials from 23 of the 32 states that participated in our group interviews, for example, reported various challenges, such as being able to easily contact wait listed families and ensuring that they continued to want and qualify for subsidies. In its technical assistance on wait lists, HHS echoes similar views, saying that wait lists become inefficient when they take significant resources to establish and maintain. Specific challenges include:

Keeping wait lists current and accurate: Two challenges that HHS officials, stakeholders, and state officials participating in our group interviews mentioned were ensuring that families on the wait list (1) could be easily contacted and (2) continued to qualify for and want child care subsidies. According to child care officials in 11 states, for example, without accurate information it is difficult to contact families to see if they want to remain on the list or want a subsidy should one become available.
One state official estimated, for example, that her state gets a roughly 50 percent response rate to letters sent to wait listed families during periodic reviews, excluding letters that are returned as undeliverable. According to HHS officials knowledgeable about subsidies, maintaining accurate contact information for wait listed families can be difficult because low-income families tend to move frequently. Child care officials in 13 states also told us that families often remain on the wait list despite no longer wanting subsidies or becoming ineligible for them. According to 6 state officials, families may no longer want subsidies because they made alternative child care arrangements once they were wait listed. Several state officials also said that families may become ineligible for subsidies while wait listed because their children become too old for subsidies, parents stop working, or the family’s income grows to exceed the state’s limit. One state child care official told us that it takes a lot of time and money to find a single family on his state’s wait list who will take a child care subsidy slot that opens up. He said that sometimes as many as 50 to 100 letters have to go out to wait listed families to get a single affirmative response.

Multiple wait lists in a state: HHS officials and stakeholders knowledgeable about wait lists told us that states with wait lists maintained at the sub-state level can face some unique challenges. For example, such individuals said if there are sub-state lists, families in these states can place their names on more than one list, and this duplication makes it difficult to efficiently manage the lists. Resources may be wasted when more than one agency in a state attempts to contact the same family to see if they want to remain on their lists.
Also, when local lists are rolled up to the state level, it may be difficult to eliminate the duplicate entries and thereby get an accurate count of the number of families statewide that are waiting for subsidies. Some state child care officials echoed the view that decentralized wait lists pose unique challenges. One official said that it is difficult to ensure that all counties collect the same information on families who they place on their wait lists. Two other state officials mentioned that families may experience different wait times across counties and that their states try to counterbalance this by moving funds from one county to another.

Insufficient technology to manage wait lists: Officials from seven states mentioned that limited technology hindered their ability to efficiently manage wait lists. One state official whose state maintains sub-state lists said that in most cases these lists are on paper and that her state could benefit from more centralized lists. Another state official said that, although her state has a web-based wait list system, various wait list management tasks still must be done manually because this new system does not always integrate well with their older system used to determine eligibility. The CCDBG Act of 2014 and the CCDF regulations require both HHS and states to develop websites that, among other things, must contain localized lists of eligible child care providers such that parents can enter a ZIP code and obtain information on the availability of specific providers in their area. An HHS official suggested that, as states develop data for these websites, they may want to consider greater use of technology to help manage wait lists. Although whether and how to use wait lists is at the discretion of state child care subsidy programs, HHS does provide states with technical assistance on how to develop and manage their lists, when requested.
In an effort to promote efficiently run lists, HHS’s written technical guidance encourages states to develop wait list management strategies that minimize the use of staffing resources. Strategies include conducting partial eligibility reviews for at least some families placed on the wait list as opposed to full eligibility determinations, periodically reviewing wait lists by contacting families to validate an ongoing need for subsidies, and requiring families to report address changes to maintain wait list eligibility. Officials from most of the 19 wait list states that participated in our group interviews said that they use at least one of these strategies. The CCDBG Act of 2014 also might influence states’ use of wait lists over the next few years. Officials from 15 states that participated in our group interviews said, for example, that they might need to establish new wait lists, their existing wait lists may grow, or families may remain on the lists for longer periods of time. Several also mentioned that the effect on their wait lists would depend on how certain provisions of the law affect their states and whether subsidy funding increases. For example, some state officials told us new requirements, such as families receiving subsidies for a minimum of 12 months, may result in families being subsidized for longer time periods. If funding for subsidies remains flat, these states may not be able to serve as many newly qualified applicants, and wait lists might therefore expand. Other provisions that state officials cited as potentially increasing the use or size of wait lists included those pertaining to improving the quality of child care and the gradual phase-out of subsidies for families once their income exceeds their state’s limit for receiving subsidies.
According to HHS written technical assistance and state child care officials and child care stakeholders, other caseload management techniques include prioritizing serving some types of families over other types of families, limiting applications by freezing intake, or changing eligibility policies so that fewer families qualify for subsidies.

Prioritizing certain eligible families over others: Based on our analysis of the CCDF Policies Database Book of Tables, most states (40 out of 51) develop priority policies that specify which eligible families they will serve before other types of eligible families. Common priority groups included: families receiving TANF (37 states, including 18 wait list states); children in protective services (28 states, including 14 wait list states); children with special needs (26 states, including 12 wait list states); and children in families with very low incomes (23 states, including 10 wait list states). As long as states provide subsidies to all families whom they determine to be eligible, priority groups do not result in some families going unserved. However, in the 19 wait list states, priority groups influence which families get subsidies because not all families determined eligible get served. Priorities can take two forms: guaranteeing a subsidy to all eligible applicants that meet the criteria of the prioritized group, or providing these families subsidies before other types of applicants only if funds are available. When children are guaranteed subsidies, they are exempt from wait lists. When children are prioritized over other children without a guarantee, they are placed higher up on wait lists than children with a lesser or no priority. Policies in wait list states tended to guarantee subsidies for TANF recipients and mostly prioritized other groups for subsidies without a guarantee.
Freezing intake: Freezing intake occurs when a state or locality stops taking applications for subsidies for either the entire program (a full program freeze) or for specific priority groups or income levels (a partial program freeze). In its annual reports on CCDF policies, the NWLC reported that three states froze intake in 2013, two in 2014, and three in 2015. In all of these cases, the states instituted partial program freezes, funding subsidies for some families (for example, families in priority groups) and freezing intake for other applicants. States or localities also may freeze intake in conjunction with wait lists, for example freezing intake for some priority groups and maintaining wait lists for others. In its written technical assistance, HHS describes the pros and cons of managing programs by instituting partial program freezes, instituting full program freezes, or managing the program according to wait listed priority groups. According to this technical assistance, partial freezes (which close the program to some priority groups) may be more effective than full freezes (which close the program to everyone) because they can make it easier to manage the flow of cases and funds. This technical assistance also says it may be preferable to maintain wait lists when closing the program to certain priority groups, as opposed to simply instituting a program freeze for these groups, because wait lists allow states to determine which priority groups to reopen and to make informed decisions about the number of families to select from the list. An official from one state who participated in our group interviews told us, however, that her state now freezes intake instead of using wait lists when funds run low because the state found that wait lists were too burdensome to administer.
She said that it was difficult to find families on the wait list to receive subsidies once funding became available because it was difficult to maintain an accurate list of families who continued to want or qualify for subsidies.

Modifying eligibility policies: According to child care stakeholders and state child care officials, states may modify their eligibility policies to manage the size of their subsidy caseloads. Modifications in eligibility policies can be statewide or at the sub-state level. Child care officials mentioned this caseload management strategy in states that used waiting lists (5 out of 19 states) and in states that served all eligible applicants (7 out of 12 states). Officials said that modifications can decrease caseloads, increase caseloads, or ensure that the program serves the neediest children. State officials reported modifying policies such as income limits, co-payment amounts that parents pay to providers, and requirements related to employment or education. Officials from a few states also told us their states typically do not modify eligibility policies to manage caseloads because these policies are based in statute and it is difficult to go through the legislative process to change them.

Child care officials from 22 states described ways in which their states leverage funds and other programs to meet the child care needs of low-income working families. Programs and funds mentioned included Head Start, Early Head Start, state-funded pre-kindergarten, TANF, the Social Services Block Grant (SSBG), and various state- or county-run programs. Two officials, for example, said that their states’ pre-kindergarten programs help meet the child care needs of 4-year-olds with low-income working parents.
One of these officials specifically mentioned that fewer 4-year-olds receive subsidies because of the pre-kindergarten program, and the other official said that, in her state, parents of 4-year-olds who apply for subsidies are referred to the pre-kindergarten program. Another official, however, described his child care subsidy program as the last resort for low-income working families. This official explained that by the time families apply to his program, they have already been turned down by other programs or those programs also have wait lists. Officials from 13 states mentioned additional programs their states or localities leverage to help meet the child care needs of low-income families. For example, one official mentioned that his state tries to reduce the number of families with school-aged children receiving child care subsidies by referring them to after-school programs. Officials from 3 states mentioned that some of their counties have local funding for assistance with child care expenses, and officials in 2 of these states specifically mentioned that these funds were county responses to CCDF waiting lists. We provided a draft of this report to the Department of Health and Human Services (HHS). The agency noted that the report provides valuable information to states and federal officials about access to subsidies, and breaks new ground by analyzing state-by-state data on families eligible for and those that actually receive Child Care and Development Fund (CCDF) services. HHS also expressed concerns about the overall funding level for CCDF and its impact on state decisions as to which eligible families to serve as well as the amount of the subsidy to provide. The agency also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or brownbarnesc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our review focused on: (1) what is known about the number and types of families eligible for child care subsidies and the extent to which they receive them; and (2) how states determine which eligible families receive subsidies when subsidy need exceeds supply. We used a variety of methods to address these objectives, including analyzing data from the Urban Institute’s Transfer Income Model, version 3 (TRIM3) on subsidy eligibility, the U.S. Department of Health and Human Services’ (HHS) summary tables of administrative data on subsidy receipt (HHS administrative data), and HHS public use sample data on Child Care and Development Fund (CCDF) recipients (HHS sample data). We interviewed HHS officials, researchers in the child care subsidy field, and other knowledgeable stakeholders, in addition to holding group interviews with child care officials from 32 states. We also identified and reviewed applicable studies that discussed eligibility and access issues associated with child care subsidies. We identified these studies by consulting with interviewees and reviewing key online research websites and repositories. Finally, we reviewed relevant federal laws, such as the Child Care and Development Block Grant Act of 2014, and summaries of state CCDF policies. We conducted our work from May 2015 to December 2016 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Further details on our key methodologies are discussed below. To determine what is known about families eligible for subsidies, we used data from TRIM3 to estimate the number of children nationwide and state- by-state that: (1) met federal requirements for CCDF eligibility and (2) were eligible to receive CCDF subsidies based on state policies. We then compared the characteristics of children eligible for subsidies to children receiving subsidies, using TRIM3 and HHS data (described below). TRIM3 is developed and maintained by staff at the Urban Institute with funding primarily from HHS. The TRIM3 model simulates major governmental tax, transfer, and health programs using federal data from the Current Population Survey containing detailed information on the demographic characteristics and economic circumstances of U.S. households. TRIM3 models eligibility for CCDF-funded subsidies on a monthly basis. In other words, each family is checked for eligibility in each month of the simulation year, and a family might be found eligible for CCDF-funded subsidies in some months of the year but not the entire year. The eligibility policies in the simulation reflect the variations in eligibility policies across states. To identify the number of children estimated to be eligible for child care subsidies, we examined TRIM3 data on children who met federal requirements and who also met state eligibility policies. 
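TRIM3's month-by-month eligibility check, and the average-month estimate built from it, can be illustrated with a minimal sketch. The eligibility test, field names, and family data below are hypothetical stand-ins for the model's actual rules and microdata, which are far more detailed.

```python
def eligible(family: dict, month: int) -> bool:
    """Hypothetical monthly eligibility test: a parent is working and
    family income is at or below the state's limit in that month."""
    return family["working"][month] and family["income"][month] <= family["state_limit"]

def average_monthly_eligible(families: list, months: int = 12) -> float:
    """Count eligible children in each month, then average over the
    simulation year, mirroring an average-month eligibility estimate."""
    total = sum(
        f["n_children"]
        for m in range(months)
        for f in families
        if eligible(f, m)
    )
    return total / months

# A family can be eligible in some months but not the entire year:
families = [
    {"n_children": 2, "state_limit": 3000,
     "working": [True] * 12,
     "income": [2500] * 6 + [3500] * 6},  # eligible January-June only
]
print(average_monthly_eligible(families))  # 1.0 (2 children for 6 of 12 months)
```

The point of the average-month construction is that a family eligible for only part of the year contributes proportionally, rather than being counted as eligible or ineligible for the whole year.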
In addition, to understand more about the characteristics of children estimated to be eligible for subsidies according to their state policies, we analyzed the following variables: age; race; Hispanic ethnicity; whether the head of household is single, not single, or the child is in a child-only unit; and family income-to-poverty ratios. To improve the reliability of our estimates, we used multiple calendar years of TRIM3 data to generate subsidy eligibility estimates for an average month. For our estimates of the number of children meeting federal requirements and estimates of children eligible under state policies, we used data from calendar years 2011 and 2012, which was the most recent at the time of our analysis. We obtained these data from the HHS publication, “ASPE Issue Brief: Estimates of Child Care Eligibility and Receipt for Fiscal Year 2012.” A relatively small number of children (216,000 out of 8.9 million in 2012) are defined as eligible under state policies in this brief, but are not considered eligible under federal parameters. This can occur because some children are considered child-only units under state policies, making them eligible, but are not considered child-only units under federal parameters, and because some states define the assistance unit differently than the federal requirements. In addition, estimates of children eligible under federal parameters do not consider state-allowable income disregards when determining whether a child’s family income is below 85 percent of state median income; in some states, income disregards can therefore lead to a slightly higher estimate of children eligible under state policies than children eligible under federal parameters. This was the case in the District of Columbia and New Mexico. 
For these two states, we subtracted a small number of children (170) from the estimated number of state-eligible children so we could use the percentage formula to estimate the margin of error of the percentage of children meeting federal parameters that are eligible under state rules. For our state-by-state estimates of the age and race and ethnicity of children eligible under state policies, we used data from calendar years 2010 through 2012 because the additional year increased the reliability of estimates for subgroups within states. We obtained these data from the Urban Institute’s publicly available TRIM3 baseline microdata files. Due to limitations in the 2010 TRIM3 data, however, we could only use 2011 and 2012 data for head of household and income-to-poverty ratios. For estimated percentages, we excluded from our analysis any estimates where the margin of error exceeded 15 percentage points. For count estimates, we excluded from our analysis any estimates where the relative margin of error exceeded 15 percent. In addition, we assessed the reliability of TRIM3 by (1) performing electronic testing of required data elements, (2) reviewing publicly available information about the data and systems that produced them, (3) reviewing additional information about the Child Care Module provided by HHS and the Urban Institute, (4) interviewing staff at the Urban Institute who developed and maintain the TRIM3 microsimulation, and (5) interviewing HHS agency officials knowledgeable about the data. We determined that none of the data limitations or modeling assumptions compromised the analysis for this report and that the data were sufficiently reliable for our purposes. We used HHS administrative data to gather information on children receiving subsidies. Data on subsidy recipients are derived from mandatory state reports that include case-level data (specifically, the ACF-801 that states submit monthly to HHS). 
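The margin-of-error screens described above reduce to simple arithmetic. The sketch below uses the standard normal-approximation formula for the margin of error of a percentage at a 95 percent confidence level; the sample sizes are illustrative, and the analysis itself may have used a more refined variance estimator.

```python
import math

Z_95 = 1.96  # critical value for a 95 percent confidence level

def percentage_margin_of_error(p: float, n: int) -> float:
    """Normal-approximation margin of error for a proportion p with sample size n."""
    return Z_95 * math.sqrt(p * (1 - p) / n)

def report_percentage(p: float, n: int) -> bool:
    """Report only if the margin of error is 15 percentage points or less."""
    return percentage_margin_of_error(p, n) <= 0.15

def report_count(estimate: float, margin: float) -> bool:
    """Report only if the relative margin of error is 15 percent or less."""
    return margin / estimate <= 0.15

# A 40 percent estimate from 50 sampled cases: MOE of about 13.6 points -> reportable.
print(report_percentage(0.40, 50))   # True
# The same estimate from only 20 cases: MOE of about 21.5 points -> suppressed.
print(report_percentage(0.40, 20))   # False
```

The same screen applied to counts compares the margin to the estimate itself, which is why small-state subgroup estimates were the ones most often suppressed.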
State ACF-801 reports are based on information families provide to caseworkers, who then input the data into existing state information technology systems. Using the ACF-801 reports, HHS provided us with summary tables of state-by-state data on the number and characteristics of children and families receiving subsidies in an average month (HHS administrative data). To conduct further analyses using different categories and combinations than the full administrative data offered, we also obtained sample case-level data through the Research Connections Public Use Sample Data Sets (HHS sample data). For states that submitted full population data in their ACF-801 reports, a random sample of families is selected for each month and only the children of those families are included in the child Public Use Sample Data Set. For the states that submitted sample data in their ACF-801 reports, all families and all children were included in the Public Use Sample Data Sets. To analyze the HHS subsidy receipt data, we selected variables that characterized the subsidy population and could be readily compared to data from the TRIM3 microsimulation: children’s age, children’s race and Hispanic ethnicity, whether the head of household is single, and the family income-to-poverty ratio. We used the HHS administrative data to analyze age, and we used the HHS sample data to analyze the remainder of the characteristics. For age and race and ethnicity, we combined the totals from fiscal years 2010, 2011, and 2012 in order to be comparable with our TRIM3 analysis. Due to data limitations in TRIM3, we could only analyze 2011 and 2012 for the income-to-poverty ratio and head of household analyses, so we used HHS sample data from fiscal years 2011 and 2012 for these characteristics as well. For each characteristic, we did not report state-level results for any states where the data on that characteristic were missing or invalid for 15 percent or more of the children. 
We also did not report any state-level estimates (from the HHS sample data) where the margin of error (for percentage estimates) exceeded 15 percentage points, or where the relative margin of error (for count estimates) exceeded 15 percent. Finally, we excluded territories from all HHS administrative data because TRIM3 only includes data on the 50 states and the District of Columbia. The territories removed from national analyses were: American Samoa, Guam, Northern Mariana Islands, Puerto Rico, and Virgin Islands. When reporting national totals, we included all states and the District of Columbia. We also reported HHS’s estimates of the extent to which children who may meet eligibility criteria actually received subsidies for fiscal year 2012 nationwide, known as a coverage rate. Coverage rates specifically measure children eligible based on federal CCDF requirements in comparison to those receiving or estimated to receive subsidies from the major federal funding sources that subsidize the cost of child care for low-income working families—CCDF, two sources of Temporary Assistance for Needy Families (TANF) funds spent directly on child care, and Social Services Block Grant (SSBG). However, HHS officials told us they did not believe data on SSBG, TANF funds spent directly on child care, and TANF maintenance of effort to be reliable at the state level, and therefore they estimate coverage rates only at the national level. HHS allows states to report the total number of children they subsidize using any funds on the ACF-801 because states may not have the ability to identify children served only by CCDF. In these cases, HHS must estimate the number served by CCDF using a state-reported pooling factor, which is the percentage of funds spent on child care subsidies from CCDF. HHS uses the pooling factor to weight the state-reported data to determine the number of children and families served solely by CCDF. 
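The pooling-factor weighting amounts to simple multiplication. A minimal sketch with hypothetical state figures (the actual HHS adjustment may involve additional weighting steps):

```python
# Hypothetical state report: total children subsidized from all pooled funds,
# and the state-reported pooling factor (the share of child care subsidy
# spending that came from CCDF).
total_children_served = 50_000
pooling_factor = 0.80  # 80 percent of pooled child care funds were CCDF

# Adjusted count of children attributable to CCDF alone.
ccdf_children = round(total_children_served * pooling_factor)
print(ccdf_children)  # 40000
```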
The department multiplies a state’s pooling factor by the total number served to develop adjusted counts of those served by CCDF. We assessed the reliability of HHS administrative data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and systems that produced them, and (3) interviewing agency and other officials knowledgeable about the data. We determined that the administrative data used to estimate participation and coverage rates were sufficiently reliable for the purposes of this report. For our analysis of race and ethnicity in both TRIM3 and HHS data, we constructed the following mutually exclusive categories: White, non-Hispanic; Black, non-Hispanic; Other or multi-racial, non-Hispanic; and Hispanic. In our state-level analysis, we present information on non-Hispanic White children, non-Hispanic Black children, and Hispanic children. For our analysis of family income-to-poverty status, we constructed a variable in the HHS sample data that compared monthly family income to the HHS poverty guidelines for the appropriate year. For our estimates of income-to-poverty status among eligible children, we used a variable from the TRIM3 Child Care module with monthly family income as a percentage of poverty. We did not include children in child-only units in the analysis of family income-to-poverty status, because any family income associated with those children would not affect their eligibility, and we did not include children whose family income exceeded four times the poverty guideline as we determined that these values were likely to be errors. We determined statistical significance for all comparisons between subsidy recipients and eligible children at a 95 percent confidence level. To review state CCDF policies and how they may vary across states, we used data from Urban Institute’s CCDF Policies Database, which is a cross-state, cross-time database of CCDF policy information funded by HHS. 
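The income-to-poverty comparison described above can be sketched as follows. The monthly guideline here is assumed to be one-twelfth of the annual HHS poverty guideline, and the guideline value shown (for a family of three in the 48 contiguous states in 2012) is included only for illustration:

```python
# Sketch of the income-to-poverty comparison (illustrative guideline value).
ANNUAL_GUIDELINE_FAMILY_OF_3 = 19_090          # 2012 HHS guideline, family of 3
monthly_guideline = ANNUAL_GUIDELINE_FAMILY_OF_3 / 12

def income_to_poverty_ratio(monthly_income):
    """Monthly family income as a multiple of the monthly poverty guideline.

    Returns None for ratios above 4, which the analysis treated as likely
    data errors and excluded."""
    ratio = monthly_income / monthly_guideline
    return None if ratio > 4 else ratio

print(round(income_to_poverty_ratio(1200), 2))   # 0.75 (below poverty)
print(income_to_poverty_ratio(10_000))           # None (treated as a likely error)
```

Children in child-only units would be filtered out before this ratio is computed, since their family income does not affect eligibility.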
Urban Institute publishes selected policies for all states annually in its Book of Tables, and we primarily relied upon the 2014 Book of Tables. The information contained in this database was collected primarily from caseworker manuals, which are documents that caseworkers use as they work with families and providers. The tables created from the database were reviewed by almost all state administrators and verified for accuracy, with two states not providing verification. As noted in various places in this report, in a few states, child care policies vary across sub-areas within a state. In those cases, the Urban Institute’s tables show information for the most populous area. Although some caseworker manuals may include policies derived from state legal requirements, we did not examine state statutes or regulations, nor did we independently verify the information in the Urban Institute’s tables. To understand how states manage demand for child care subsidies, we held interviews with child care administrators or their designees (collectively referred to as state child care officials in the report) from 32 states that had wait list policies in 2014 about how they assess need for subsidies in their states, prioritize who they serve, and manage wait lists (see table 4). Of these 32 states, 19 had active wait lists as of early 2015. The remaining 13 states served all applicants they determined eligible for the program. The results of these interviews are not generalizable to all states. We used two sources to compile a list of states with a wait list policy and to identify whether a state had families on the wait list in the most recent year. To identify states with a wait list policy in place as of October 1, 2014, we used the CCDF Book of Tables. This compilation was released in October 2015 and was the most current available. To identify whether states had families on a wait list, we used the National Women’s Law Center survey results for 2015. 
The results of the survey were current as of February 2015. These data represent the most current available compilation of states’ wait lists at the time we began our outreach. We held five group interviews—three with states that had families on the wait list and two with states that served all eligible applicants. Group interview participation ranged from three to seven states per session. Limiting the number of states per session helped ensure that all states would have a chance to share their experiences during the discussion. In addition, eight states were unable to participate during the scheduled group interview times. In order to obtain the most complete information possible, we held individual interviews with each of these states. Five were states that served all eligible applicants and three had an active wait list. To gather information and perspectives on child care eligibility and access issues, we conducted interviews with a variety of stakeholders. We assembled a group of stakeholders knowledgeable about wait lists and access to child care. We selected seven stakeholders from research and advocacy organizations, as well as the government, based on referrals from the HHS entrance conference and other interviews, as well as a review of relevant literature. The selected stakeholders had authored reports on child care subsidies or had experience managing related programs. Our discussion with these individuals focused on good wait list practices, which we used to inform our questions for the group interviews we held with state child care officials. We conducted individual interviews with stakeholders from research and advocacy organizations, as well as academia, to discuss access issues and data sources. We identified these stakeholders through reviewing policy papers and applicable websites, attending webinars, and asking interviewees to suggest other knowledgeable stakeholders to contact. 
We met with federal officials who administer the CCDF and contractors who provide technical assistance on eligibility and access issues. Cindy Brown Barnes, (202) 512-7215 or brownbarnesc@gao.gov. In addition to the contact named above, Janet Mascia (Assistant Director), Nancy Cosentino (Analyst-in-Charge), Rhiannon Patterson, Amy Moran Lowe, Nyree Ryder Tee, and Eric Anderson made significant contributions to this report. Also contributing to this report were James Rebbe, Mimi Nguyen, Jean L. McSween, Karen O’Conor, Anna Maria Ortiz, Kate van Gelder, Jessica Orr, Rachael Chamberlin, Kelly Snow, and Chris Schmitt.
Child care subsidies help low-income families pay for care, allowing parents to work or attend school or training. Through the CCDF, the federal government provides states funding to assist these families. Federal law sets broad subsidy eligibility requirements and allows states to establish more restrictive policies. Due to limited funds, some eligible families may not be able to get subsidies and may be placed on wait lists. Congress included a provision in statute for GAO to review participation in the CCDF program across states. GAO examined: (1) what is known about the number and types of families eligible for child care subsidies and the extent to which they receive them; and (2) how states determine which eligible families receive subsidies when subsidy need exceeds supply. GAO used Urban Institute data from 2010-2012 to estimate eligible children (most recent at time of analysis) and U.S. Department of Health and Human Services (HHS) data on subsidy receipt (the same years as the Urban Institute data). GAO also held interviews with child care officials from 32 states with wait list policies about subsidy need and management. GAO also interviewed HHS officials and child care stakeholders (selected by reviewing studies and websites, and obtaining suggestions); reviewed federal laws and regulations; and examined state policies in the CCDF Policies Database Book of Tables, an HHS-funded project that compiles policies for the 50 states and District of Columbia. GAO makes no recommendations in this report. HHS noted that the report provides valuable information about access to CCDF subsidies. According to GAO's analysis of nationwide data for an average month in 2011-2012, approximately 8.6 million children under age 13 were estimated to be eligible for subsidies under the Child Care and Development Fund (CCDF) program based on policies in their states, and about 1.5 million received them. 
When compared with all eligible children, those receiving subsidies tended to be younger (under age 5) and poorer (in families below federal poverty guidelines). (See figure.) Some state-by-state variations existed in these and in other characteristics GAO analyzed, such as race, when comparing children eligible for and receiving subsidies. According to various officials and stakeholders, the number of families receiving subsidies does not equate to the population of eligible families who are interested in pursuing them or who may need them. They also said that it is difficult to accurately predict the extent to which eligible families are likely to apply for and receive subsidies. For example, some eligible families may not pursue subsidies because they may not know about them or find applying burdensome. Child care officials GAO interviewed said that they use wait lists and other strategies to manage caseloads when more families want subsidies than their states can serve. Wait lists can be challenging to manage, according to child care officials from 23 of the 32 states that GAO interviewed. Challenges included keeping lists current and accurate. Forty states also prioritize certain families for subsidies, such as recipients of the Temporary Assistance for Needy Families program and children in protective services. States also stop taking applications from all or some types of eligible families and modify eligibility policies to manage caseloads. Child care officials also noted that they leverage other programs and funds to meet the child care needs of low-income working families.
STARS is designed to replace FAA’s automated radar terminal system, which is composed of 15- to 25-year-old controller workstations and supporting computer systems. According to FAA, this system is prone to failures, is maintenance intensive, and requires long repair times. The system also has capacity constraints that restrict the agency from making required safety and efficiency enhancements. Automated radar terminal systems are located at 180 Terminal Radar Approach Control facilities (TRACON) and allow FAA controllers to separate and sequence aircraft near airports. STARS equipment (see fig. 1) is also expected to provide the platform needed to make system enhancements that would increase the level of air traffic control automation and improve weather display, surveillance, and communications. In addition, STARS is expected to permit FAA to consolidate some TRACONs and replace all Digital Bright Radar Indicator Tower Equipment systems. In September 1996, FAA signed a contract with Raytheon Corporation and, as mentioned, currently plans to acquire as many as 171 STARSs. In producing STARS, Raytheon intends to rely fully on commercially available hardware and, to a large extent, on commercially available software. Some original software development will still be required. In August 1996, the contractor projected that 124,000 new lines of software code will need development to meet FAA’s requirements. This estimate was revised in December 1996 to 140,000 new lines of code. STARS is an outgrowth of the troubled Advanced Automation System acquisition. As originally designed, the terminal segment of this system, known as the Terminal Advanced Automation System, would provide controllers in TRACONs with new workstations and supporting computer systems. However, in June 1994, the FAA Administrator ordered a major restructuring of the acquisition to solve long-standing schedule and cost problems. 
The project had fallen as much as 8 years behind its original schedule, and estimated costs had increased to $7.6 billion from the original $2.5 billion estimated in 1983. Specifically, regarding terminal modernization, the Administrator canceled the Terminal Advanced Automation System and expanded the STARS project to include all terminal facilities. In April 1996, FAA established a new acquisition management system, as directed by the Congress. Included in this system is the concept of life-cycle management, which is intended to be a more comprehensive, disciplined full-cost approach to managing the acquisition cycle, from analysis of mission needs and alternative investments through system development, implementation, operation, and, ultimately, disposal. Under this new system, decisions related to resource allocation (mission and investment) are made by FAA’s Joint Resources Council, which is composed of associate administrators for operations and acquisition and other key executives. Decisions associated with program planning and implementation are made within Integrated Product Teams (IPT). IPTs are responsible for bringing together all essential elements of program implementation, including scheduling, allocation of funding, and the roles and responsibilities of stakeholders. To ensure successful program implementation, the acquisition management system dictates that these issues be resolved before contracts are awarded. IPTs also generate schedule and cost baselines, which the Joint Resources Council authorizes the teams to operate under. Team members include representatives from FAA units responsible for operating and maintaining air traffic control equipment and other stakeholders in the acquisition process. 
To achieve the implementation schedule approved by the Joint Resources Council in January 1996, FAA will have to obtain commitment from key stakeholders, resolve scheduling conflicts between STARS and other terminal modernization efforts, and overcome difficulties in developing the system. FAA is aware that these issues pose a risk for STARS and has begun several risk mitigation initiatives. While such actions are encouraging, it is too early to tell how effective they will be. FAA’s schedule for developing and implementing STARS by its January 1996 approved baseline is shown in table 1. Figure 2 shows FAA’s plans for ordering, delivering, and operating STARS. FAA intends to begin operating STARS at only three TRACONs before fiscal year 2000. Operation increases after this time, with FAA expecting to operate 55 additional systems in fiscal year 2002. FAA has yet to obtain commitment from all key stakeholders responsible for ensuring that STARS equipment is properly installed. FAA’s new acquisition management system stresses that IPTs need to reach agreement before contracts are awarded. Such agreement is necessary to ensure that all stakeholders’ roles are defined and agreed upon, facilities are ready to receive STARS, and all other equipment necessary for the operation of STARS is in place. In the past, poor coordination among key stakeholders has caused schedule delays in other modernization projects at FAA. The IPT for STARS has yet to obtain commitment to the STARS schedule from the entire Airway Facilities Service—a key stakeholder. Located in headquarters and regions, maintenance technicians who work for the Airway Facilities Service are responsible for installing and maintaining air traffic control equipment. FAA’s current schedule anticipates that STARS will be installed at most sites using a turnkey concept whereby the contractor, not FAA employees, will install the equipment. 
This concept presumes that a significant level of regional resources will still be required to support and oversee contractor installation. IPT officials told us that while Airway Facilities Service officials at headquarters have committed to the turnkey concept, regions’ commitment is incomplete. IPT and Airway Facilities Service officials told us that a process has been established to ensure regions’ understanding and obtain their commitment. As part of this process, the IPT has begun regional briefings and has formed implementation teams to gain regions’ commitment on turnkey issues. In addition, the IPT has yet to obtain commitment to the STARS schedule from the Professional Airways Systems Specialists—the technicians’ union. Top union officials told us that, as of late February 1997, they had not been briefed on the STARS turnkey concept and had not agreed on how it would be implemented. The union is concerned that the turnkey installation may jeopardize the job security of its members. IPT officials said that while union representatives have been involved in reviewing vendors’ proposals for STARS, the union has not been briefed on the specifics of STARS deployment. Although FAA’s Acquisition Management System stresses that all key program implementation issues be resolved before contracts are awarded, the IPT believed that it could obtain the union’s commitment at a later date. As required by the union’s collective bargaining agreement, in January 1997, the IPT initiated actions to brief the union and obtain its commitment. FAA’s schedule for STARS can be jeopardized by scheduling conflicts with other modernization efforts. For example, each year, various TRACONs are scheduled to be renovated or replaced. If STARS equipment is delivered during this time, installation could be delayed. Currently, the IPT is unsure of the number of these potential conflicts. In September 1996, the IPT identified 12 potential scheduling conflicts at the first 45 STARS sites. 
One month later, the number of conflicts was reduced to four, but the team did not provide us with an explanation for this decrease. We believe that the number of potential conflicts will not be known until the IPT ascertains the readiness of each facility to receive and install STARS equipment. The IPT plans to start conducting site reviews in 1997. Another potential scheduling conflict involves terminal surveillance radars, which track aircraft position and use analog or digital processing and communications to transmit the information to TRACONs. Many existing surveillance radars are not digital, but STARS requires digital processing and communications. FAA plans to replace nondigital Airport Surveillance Radar-7s (ASR-7) with new digital ASR-11s. The agency has not yet decided whether to replace the other nondigital radars, ASR-8s, or to digitize them. In January 1997, FAA was concerned that 47 of 98 ASR-7s and -8s might not be upgraded in time to meet the STARS schedule. FAA officials told us that, as of late February, they had reduced the number of potential conflicts from 47 to 10 through efforts to coordinate the STARS and digital radar schedules. According to an IPT official, if digital radar does not provide coverage for a TRACON’s entire airspace, FAA may have to delay STARS or reorder the sequence of TRACONs receiving STARS. FAA officials told us that they are taking actions to identify and resolve potential scheduling conflicts. The IPT has developed project guides for the FAA regions receiving STARS. These guides identify possible scheduling conflicts with other modernization efforts. Also, Airway Facilities Service officials told us that as a result of a December 1996 reassessment of the schedule for the first 39 STARSs, FAA was able to avoid potential conflicts by repositioning the order in which TRACONs received STARS. Finally, the Airway Facilities Service is developing a database to assist the IPT in maintaining current planning information. 
Although STARS depends on the use of commercial off-the-shelf computer hardware and a significant amount of commercially available software, FAA and Raytheon have numerous tasks to accomplish before system development is completed. However, the nature and extent of these tasks are not completely known, and such development inevitably poses continual managerial and technical challenges. As noted in table 1, FAA’s schedule calls for software development to proceed in two phases. For the initial phase, the agency expects to complete software testing in September 1998, about 2 years from the time when the contract was awarded. For the second phase, the agency expects to complete testing of the full STARS software in July 1999. As an example of the challenge that software development poses for FAA, as recently as December 1996, FAA and Raytheon were discussing (1) how the system would provide specific functions and (2) whether certain functions would be needed, and if so, whether the functions would be included in the equipment with initial- or full-system capability. According to Raytheon officials, these discussions ended with FAA and Raytheon coming to closure on all of the 28 issues needing resolution. As a result, some 16,000 lines of additional software code—beyond the planned 124,000 lines of new code—must be written. Of the 140,000 lines of code, about 138,000 are for flight data processing, training, and maintenance functions, and 2,000 are to fulfill safety requirements, such as warning controllers when aircraft are not maintaining proper separation or minimum safe altitudes. Raytheon officials believe the additional code development will not affect their ability to meet the original milestones. All new code will have to be tested in conjunction with the nearly 840,000 lines of existing STARS software code. 
If potential difficulties in developing and testing the system are realized, initial implementation of STARS—particularly at the three TRACONs targeted for operation before fiscal year 2000—will likely be delayed. FAA’s life-cycle cost baseline has the potential to increase—from $2.23 billion, the level approved by the Joint Resources Council in January 1996, to as much as $2.76 billion. This possible increase is attributable to expected higher costs for operating and maintaining STARS equipment. FAA expects the estimate for facilities and equipment costs to remain stable for the immediate future. FAA’s January 1996 facilities and equipment cost baseline is $940 million. During 1996, this baseline was reviewed by the IPT. Through September 1996, the IPT was estimating that the baseline could increase to $1.18 billion. At that time, the IPT (1) estimated higher expected costs for software development; (2) estimated higher expected implementation, technical support, and maintenance costs because of the addition of necessary equipment; and (3) included costs for communications because the baseline estimate overlooked them. In December 1996, the IPT assessed the STARS costs on the basis of the signed contract with Raytheon. As a result, the IPT determined that, while some cost elements will increase, others will decrease. Specifically, significantly lower costs for hardware—key components were $40,000 less per unit than FAA had estimated—will enable the STARS project, for the time being, to stay within the original baseline. Table 2 shows the differences in cost elements between the original cost baseline and the IPT’s December 1996 assessment. FAA’s January 1996 operations cost baseline is $1.29 billion. However, based on a September 1996 analysis, FAA staff identified a potential $529 million increase that could revise the baseline to $1.82 billion. 
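The figures above fit together by simple addition; the sketch below makes the composition explicit (amounts in billions of dollars, inferred from the reported baselines):

```python
# Life-cycle cost baseline components (billions of dollars), January 1996.
facilities_and_equipment = 0.94
operations_baseline = 1.29
approved_life_cycle = facilities_and_equipment + operations_baseline
print(round(approved_life_cycle, 2))  # 2.23

# September 1996 analysis: a potential $529 million increase in operations costs.
potential_operations = operations_baseline + 0.529
potential_life_cycle = facilities_and_equipment + potential_operations
print(round(potential_operations, 2))   # 1.82
print(round(potential_life_cycle, 2))   # 2.76
```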
FAA officials told us that this increase occurred, in part, because the agency overlooked maintenance costs in the initial estimates. Also, the officials attributed the increase to FAA’s deploying more STARS equipment than originally planned. IPT officials told us that on the basis of more current information from the contractor, operations and maintenance costs are expected to be significantly closer to the $1.29 billion baseline estimate than the $1.82 billion figure. The officials could not, however, provide us with an updated cost estimate or detailed support for their views. The IPT officials told us that they are reviewing the latest cost estimates and expect to brief the Joint Resources Council on any potential changes to the baseline in March 1997. Separate and distinct from STARS life-cycle costs are two additional costs that FAA will incur to make STARS operational. First, FAA will have to prepare the TRACONs for the delivery of STARS equipment. FAA officials estimate that the agency will incur at least $18 million in costs to get the first 46 TRACONs and related facilities ready to accept the STARS equipment. Roughly half of this amount is for asbestos removal; the balance is for power upgrades and building improvements. FAA has yet to develop estimates for readying the remaining sites. Second, FAA will incur costs for upgrading radars. FAA plans to modernize the existing analog ASR-8 radars that provide data to its TRACONs. Because the implementation of STARS is approaching, FAA is faced with an immediate decision between digitizing these existing analog radars or replacing them with new digital radars. FAA officials estimate that the 20-year life-cycle costs for modifying and digitizing all the ASR-8s will be $459 million and for replacing them will be $474 million. 
According to FAA officials, the estimated cost difference between digitizing existing radars and buying new radars is minimal because of the higher costs of maintaining older analog equipment. The agency is continuing to refine these cost estimates, and it expects to decide later this year on which option to select. We provided the Department of Transportation with a draft of this report for its review and comment. We met with FAA officials, including the IPT leader for Terminal Air Traffic Systems Development; the Program Director for National Airspace System Transition and Implementation; and representatives of FAA’s Air Traffic and Airway Facilities Services. FAA was concerned about our use of the $1.82 billion estimate for operations and maintenance costs. The estimate came from a September 1996 study done by FAA’s Program Analysis and Operations Research staff. FAA told us that this estimate was preliminary and should not be reported as a basis for evaluating the STARS project. While FAA acknowledged that there may be some cost growth in the STARS project, it did not anticipate growth as large as we reported. We continue to include the September 1996 estimate in this report. This estimate was developed by experienced cost analysts, including a member of the STARS IPT, and was the only documented estimate available since the official baseline was approved in January 1996. Furthermore, FAA could not provide us with a more current estimate or detailed support for its views on why the September 1996 analysis may have overstated the cost estimate for operations and maintenance. FAA also expressed concern about the way the draft report characterized the extent to which key stakeholders were committed to the implementation schedule, which relies heavily on the use of the turnkey concept. 
We revised the report to recognize that (1) while regions’ commitment is incomplete, Airway Facilities Service officials at headquarters have committed to the turnkey concept and (2) FAA has established a process, including the formation of implementation teams to ensure regions’ understanding and obtain their commitment on turnkey issues. However, because the turnkey concept will affect regional resources and employees’ responsibilities, FAA agreed that the potential lack of regions’ commitment is a risk that must be mitigated throughout the implementation of STARS. To obtain information for this report, we interviewed officials at FAA headquarters, its New England Regional Office in Burlington, Massachusetts, its New York Regional Office in Jamaica, New York, and its William J. Hughes Technical Center in Pomona, New Jersey. We reviewed agency documentation on current schedule and life-cycle costs for STARS. We reviewed guidelines pertaining to system acquisition, compared FAA’s actions to the guidance, and identified key issues that could affect the success of the STARS project. To identify any labor issues that could affect the scheduled deployment, we interviewed union officials with the Professional Airways Systems Specialists. We conducted our review from July 1996 through January 1997 in accordance with generally accepted government auditing standards. However, we did not assess the reliability of the process used to generate cost information. We are sending copies of this report to the Secretary of Transportation, the Administrator of FAA, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3650 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix I. John H. Anderson, Jr. Gregory P. Carroll Robert E. Levin Peter G. Maristch John T. Noto
Pursuant to a congressional request, GAO reviewed the Federal Aviation Administration's (FAA) acquisition planning to date, focusing on the extent to which: (1) the schedule estimate for the Standard Terminal Automation Replacement System (STARS) is attainable; and (2) cost estimates to make STARS operational are likely to change. GAO noted that: (1) the STARS schedule, which calls for implementation of 171 air traffic control facilities between December 1998 and February 2005, is attainable only if FAA is successful in its efforts to mitigate certain risks; (2) specifically, FAA will need to obtain commitment by key stakeholders to the STARS schedule, resolve schedule conflicts between STARS and other modernization efforts, and overcome difficulties in developing system software that could delay implementing STARS; (3) FAA is aware that these issues pose a risk for STARS and has begun several risk mitigation initiatives; (4) while such actions are encouraging, it is too early to tell how effective they will be; (5) FAA's cost estimate for STARS has the potential to increase; (6) FAA's total cost estimate for STARS is $2.23 billion; (7) FAA approved this estimate in January 1996, however, a September 1996 analysis by agency officials pointed to potential cost increases that could drive the total cost estimate to as much as $2.76 billion; (8) this possible increase is attributable to expected higher costs for operating and maintaining STARS equipment; (9) FAA officials are continuing to revise the STARS cost estimate and now believe that cost increases may be significantly lower; and (10) at this time, however, FAA could not provide GAO with an updated estimate.
Congress passed the Soldiers’ and Sailors’ Civil Relief Act in 1940 to provide servicemembers protections to help them meet the unique circumstances they face when serving their country. In response to the increased use of Reserve and National Guard military units in the Global War on Terrorism, Congress enacted SCRA in December 2003 as a modernized version of the Soldiers’ and Sailors’ Civil Relief Act. In addition to providing protections related to residential mortgages, the act covers other types of loans, such as credit card and automobile loans, and a variety of other issues, such as rental agreements, eviction, installment contracts, civil judicial and administrative proceedings, motor vehicle leases, life insurance, health insurance, and income tax payments. SCRA provides the following mortgage-related protections to servicemembers: Interest Rate Cap. Servicemembers who obtain mortgages prior to serving on active duty status are eligible to have their interest rate capped at 6 percent for the duration of their active duty status and for 12 months after returning from active duty service. Interest above 6 percent is to be forgiven by the servicer. Servicemembers are required to inform their servicer of their active duty status in order to avail themselves of this provision. Foreclosure Proceedings. A servicer cannot sell, foreclose, or seize the property of a servicemember for breach of a pre-service obligation unless a court order is issued prior to the foreclosure on the property. This protection is effective until 9 months after the term of active duty service ends. If the servicer files an action in court to enforce the terms of the mortgage, the court may stay any proceedings or adjust the obligation to preserve the interests of the parties. Mortgage Prepayment Penalties. 
A court may decide that servicemembers who have mortgages that impose penalties for paying off the balance early are not subject to these penalties if the servicemember incurs such fees due to military service and the ability of the servicemember to pay the fees is materially affected by military service. For example, a servicemember who receives a permanent change-of-station order to relocate to another area may receive a court order that waives the penalty for selling his or her home and paying off the mortgage early. Adverse Credit Reporting Protections. A servicer may not report adverse credit information to a credit reporting agency solely because a servicemember exercises his or her SCRA rights, including a request to have his or her mortgage interest rate and fees be capped at 6 percent. 50 U.S.C. app. §523(b). In addition to SCRA, the Housing and Urban Development Act of 1968 includes a requirement applicable to institutions that service mortgages. This act requires that all mortgage servicers that service home loans provide notification of the availability of homeownership counseling offered by the lender to eligible homeowners who fail to pay any amount by the due date. In 2006, changes were made to the homeownership counseling notice requirement. Mortgage servicers are required to alert borrowers of SCRA protections if they are in default on their mortgage, and the notice instructs borrowers to notify their servicer if they believe they are eligible for SCRA protections. Servicers must provide the notification within 45 days from the date a payment was missed by a borrower. The Department of Housing and Urban Development (HUD) developed and disseminated the format for this notice. SCRA provides protections to active duty servicemembers in all five of the military services—Army, Navy, Air Force, Marine Corps, and Coast Guard—as well as members of each of these services’ reserve component. These components include the Army Reserve, Navy 
Reserve, Marine Corps Reserve, Air Force Reserve, Coast Guard Reserve, Army National Guard, and Air National Guard. In 2010, active duty servicemembers comprised 63 percent of the military’s force, and the reserve components represented the remaining 37 percent of the military force. Figure 1 shows the distribution of the military population and shows that the Army constitutes the greatest percentage of both active duty servicemembers and the reserve forces. While the Army Reserve, the Navy Reserve, the Marine Corps Reserve, and the Air Force Reserve are federal entities, the Army National Guard and the Air National Guard (known collectively as the National Guard) have both federal and state missions. Members of the National Guard who are eligible for SCRA protections are those who have been called into federal active duty service. In addition, members of the National Guard recalled for state duty are also eligible for SCRA protections under certain circumstances. The responsibility of extending mortgage-related SCRA protections to eligible servicemembers often falls to mortgage servicers. While some institutions that originate home mortgage loans hold the loans as assets on their balance sheets, institutions generally sell them to other financial institutions or the enterprises—Fannie Mae or Freddie Mac. The enterprises purchase mortgages from primary mortgage lenders. They hold some of the mortgages they purchase in their portfolios, but they package the majority into mortgage-backed securities and sell them to investors in the secondary mortgage market. The enterprises guarantee these investors the timely payment of principal and interest. If a mortgage originator sells its loans to either an investor or to an institution that securitizes them, another financial institution or other entity is appointed as the mortgage servicer to manage payment collections and other activities associated with these loans. 
Mortgage servicers, which can be large mortgage finance companies, commercial banks, or small specialty companies unaffiliated with a larger financial institution, earn a fee for duties they perform, such as sending borrowers monthly account statements, answering customer-service inquiries, collecting monthly mortgage payments, maintaining escrow accounts for property taxes and hazard insurance, and forwarding proper payments to the mortgage owners. Other mortgage lenders that hold the mortgages they originate may service the loans internally or outsource this function. In the event that a borrower becomes delinquent on loan payments, the mortgage servicer must decide whether to pursue a home retention workout or foreclosure alternative, such as a short sale, or proceed with foreclosure. If the mortgage servicer determines that foreclosure is the most appropriate option, it follows one of two foreclosure methods, depending on state law. In a judicial foreclosure, a judge presides over the process in a court proceeding. Mortgage servicers initiate a formal foreclosure action by filing a lawsuit with a court. A nonjudicial foreclosure process takes place outside the courtroom and is typically conducted by a trustee named in the deed-of-trust document that accompanied the mortgage. Trustees, and sometimes mortgage servicers, generally send a notice of default to the borrower and publish a notice of sale in area newspapers or legal publications. Prudential regulators—FDIC, Federal Reserve, NCUA, and OCC—have the authority to conduct reviews of any aspect of banks’ activities, including compliance with applicable consumer protection laws, such as SCRA. OCC charters and supervises national banks and federal thrifts. The Federal Reserve supervises state-chartered banks that opt to be members of the Federal Reserve System, bank holding companies, thrift holding companies, and the nondepository institution subsidiaries of those institutions. 
FDIC supervises FDIC-insured state-chartered banks that are not members of the Federal Reserve System, as well as federally insured state savings banks and thrifts. NCUA charters and supervises federally chartered credit unions and insures savings in federal and most state-chartered credit unions. OCC regulates the vast majority of mortgage servicing in the United States. For example, OCC-regulated servicers accounted for close to 80 percent of the unpaid principal balance on serviced mortgages in the third quarter of 2011. The prudential regulators conduct risk-based examinations of the institutions they oversee on a routine basis. Because examinations are risk-based and there are a number of consumer compliance laws for which examiners assess compliance during an examination, SCRA compliance is not assessed during every examination. The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) established CFPB and provided it with the authority to regulate mortgage servicers with respect to federal consumer financial law. Consumer financial protection functions from seven existing federal agencies were transferred to the new agency. For mortgage servicers that are depository institutions with more than $10 billion in assets or their affiliates, CFPB will have exclusive supervisory authority and primary enforcement authority to ensure compliance with federal consumer financial law. Additionally, if a mortgage servicer is a nondepository institution, CFPB will have both supervisory and enforcement authority to ensure compliance with federal consumer financial law. Finally, CFPB will have rulemaking authority with respect to mortgage servicers, including authority that transfers from other federal agencies such as the Federal Reserve and the Federal Trade Commission. SCRA, however, was not one of the enumerated laws for which oversight transferred to CFPB. 
The prudential regulators remain responsible for overseeing compliance with the law for any of the entities they supervise that are servicing mortgages. Other federal agencies are involved in the mortgage market by operating mortgage programs aimed at expanding homeownership for populations who may encounter difficulties in obtaining mortgages. For example, FHA has played a large role in assisting minority, lower-income, and first-time homebuyers in obtaining mortgages. FHA’s program insures private lenders against losses from borrower defaults on mortgages that meet FHA criteria for properties with one to four housing units. As of September 2011, almost 3,700 lending institutions were approved to participate in FHA’s mortgage insurance programs for single-family homes. FHA also offers special protections for servicemembers who have FHA-insured loans. For example, FHA-approved lenders are authorized to postpone principal payments and foreclosure proceedings for servicemembers on active duty who have FHA-insured mortgages. VA is also active in the mortgage market through its Home Loan Guaranty program, which provides lenders a guaranty on a portion of mortgage loans for eligible veterans, active duty servicemembers, surviving spouses, and members of the reserve components in recognition of their service. According to VA, the program operates by substituting the federal government’s guaranty for a down payment that might otherwise be required. VA guarantees a portion of the mortgage loan in the event that borrowers default, providing lenders with substantial financial protections against some of the losses that may be associated with extending such mortgage loans. In 2011, VA guaranteed over 350,000 loans to veteran borrowers. 
The Housing and Economic Recovery Act of 2008 created the Federal Housing Finance Agency (FHFA) and gave it responsibility for, among other things, the supervision and regulation of the housing-related enterprises: Fannie Mae, Freddie Mac, and the 12 federal home loan banks. FHFA’s responsibilities include ensuring that each of the regulated entities operates in a safe and sound manner, including maintenance of adequate capital and internal controls, and carries out its housing and community development finance mission. FHFA has no direct authority over mortgage servicers, but does have authority to ensure that the housing enterprises are being run safely and soundly, as well as the power to impose operational, managerial, and internal control standards on the companies. The total number of servicemembers eligible for the mortgage protections provided by SCRA is not known, but the size of this population is likely limited because the act provides protections only to servicemembers who meet certain eligibility requirements. The maximum number of servicemembers potentially eligible for mortgage protections under SCRA at any one time includes those servicemembers on active duty service and those who have recently left it. According to DOD, between 2007 and 2010 about 2 million servicemembers, including those activated from the reserve components, were on active duty. However, the number of servicemembers who may actually qualify for the SCRA mortgage protections is a smaller portion of this population because some of the act’s protections only extend to servicemembers who obtained their mortgages prior to entering active duty service or servicemembers whose military service materially affects their ability to pay their mortgage. 
However, representatives from all the mortgage servicers with whom we spoke stated that they do not assess whether a servicemember’s ability to pay has been materially affected by their active duty status and that they provide eligible servicemembers SCRA protections regardless of whether their ability to pay is materially affected or not. According to DOD officials, representatives from industry trade groups, SCRA experts, and military service organizations, the servicemembers most likely to be eligible for SCRA mortgage protections are members of the reserve components. These servicemembers are more likely to have had mortgages prior to entering active duty service and some may potentially experience a decline in their incomes as they leave their civilian employment and begin receiving their military pay. We have previously reported, however, that servicemembers belonging to the reserve components on average earn more income while activated. According to DOD officials, the number of servicemembers activated from the reserve components from 2007 through 2010 was approximately 576,500. The maximum number of servicemembers who are eligible for SCRA mortgage protections is also a smaller portion of the total military population because many do not own homes for which they have taken out mortgage loans. According to the Census Bureau, the U.S. homeownership rate was about 67 percent in 2010. However, research shows that servicemembers are generally less likely to own their own homes. According to DOD surveys—surveys that DOD sends annually to active duty servicemembers and members of the reserve components to evaluate various programs and policies and their impact on servicemembers—only 34 percent of active duty servicemembers and 55 percent of reserve component servicemembers reported that they owned or made mortgage payments on a home in the previous 12 months. However, even those military families who have mortgages may not be eligible for SCRA protections. 
First, some SCRA mortgage protections only apply to servicemembers who took out their mortgage before being placed on active duty. Also, given that mortgage interest rates have been at historic lows in recent years, servicemembers who took out mortgage loans during this period before being placed on active duty may be likely to have loans with rates lower than the SCRA-mandated level of 6 percent. Census computes the homeownership rate by dividing the number of owner-occupied housing units by the number of occupied housing units or households. Although the total number of SCRA violations is not known, thousands of SCRA violations have been identified from a number of sources. First, DOJ—which is responsible for enforcing SCRA—settled investigations in 2011 with two mortgage servicers and identified 165 instances of active duty servicemembers who had their homes foreclosed upon without the mortgage servicer seeking the proper court order as required by the act. Second, in July 2011, as part of its investigation into SCRA violations, the U.S. House of Representatives Committee on Oversight and Government Reform sent letters to 10 large mortgage servicers requesting them to identify the total number of improper foreclosures and interest-rate and fee violations they had committed. In their responses, 6 mortgage servicers reported having conducted a total of at least 148 improper foreclosures against servicemembers and failing to reduce interest rates or fees on the mortgages for over 14,000 servicemembers since 2005. Third, as the result of a class-action lawsuit filed by several servicemembers, as of January 2012, Chase Home Finance, LLC had issued refunds to approximately 13,500 borrowers for interest and fees charged in excess of SCRA protections since 2005. Many of the mortgage servicers involved in these investigations are among the largest in the industry and service millions of loans. 
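The interest and fee refunds described above follow from SCRA’s 6 percent cap: for a pre-service mortgage, interest above 6 percent during the protected period must be forgiven, so a refund amounts to the interest actually charged minus interest recomputed at the capped rate. The sketch below illustrates that computation in simplified form (per-month interest on a fixed balance); it is a hypothetical illustration, not any servicer’s actual remediation method.

```python
def excess_interest_refund(balance, note_rate, months_protected, cap=0.06):
    """Interest charged above the SCRA cap over the protected months.

    Simplified: interest is computed on a constant balance; an actual
    remediation would track the amortizing balance month by month.
    """
    if note_rate <= cap:
        return 0.0  # loans already at or below 6 percent owe no refund
    monthly_excess = balance * (note_rate - cap) / 12
    return monthly_excess * months_protected

# Example: $200,000 balance at 8 percent, 12 months of protection.
# The 2-point excess rate implies roughly $4,000 of forgiven interest.
print(excess_interest_refund(200_000, 0.08, 12))
```

Note that, as the text observes, loans originated at today’s low rates may already be below 6 percent, in which case the cap forgives nothing.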
Table 1 summarizes the various SCRA violations identified by these sources to date. Through their compliance examinations, prudential regulators identified 251 instances of SCRA compliance problems at depository institutions between 2007 and 2011. FDIC identified the vast majority—230—of these issues, with Federal Reserve staff identifying 16, OCC staff identifying 4, and NCUA staff identifying 1 instance. However, these SCRA compliance issues may not specifically concern mortgages—for example, they may have involved non-mortgage-loan products, such as credit card loans. A more complete picture of the extent of SCRA violations may result from three large-scale federal agency reviews that are ongoing. Recent enforcement actions taken by DOJ, Federal Reserve, and OCC require mortgage servicers to conduct historical reviews of their mortgage loan files to determine if servicemembers who were eligible for the SCRA mortgage protections received them, among other things. If violations are identified, the mortgage servicers are required to provide compensation to the servicemembers. Appendix II contains a detailed explanation of these reviews. In the wake of identified SCRA violations, some mortgage servicers have implemented procedures to enhance their compliance with SCRA. Some large mortgage servicers have instituted several military status checks during the foreclosure process. For example, one large mortgage servicer now requires its foreclosure counsel to check a customer’s military status prior to the initiation of foreclosure proceedings, 1 week prior to a foreclosure sale, and 1 day prior to the scheduled sale date. Some mortgage servicers have also created dedicated customer service support for military servicemembers, including telephone hotlines and websites. For example, representatives from one mortgage servicer told us that they had developed a dedicated team that is staffed with former servicemembers to assist customers with SCRA requests. 
These customer-support representatives also receive training on military financial issues and serve as the points of contact for any problems with delinquency, remediation, and foreclosure. Finally, as a result of identified violations and SCRA investigations, some servicemembers will be receiving SCRA protections that go beyond those stated in the act. For example, three mortgage servicers that responded to the House Committee on Oversight and Government Reform letters noted that they have reduced the interest rate they charge on servicemembers’ mortgages to 4 percent—which is below the 6 percent required in SCRA. Additionally, the National Mortgage Settlement between the federal government, 49 state attorneys general, and five large mortgage servicers that occurred in February 2012 requires the five mortgage servicers to implement new mortgage servicing standards. These new standards expand protections to certain servicemember customers of these five mortgage servicers beyond those provided in SCRA. For example, the new standards extend foreclosure protections to any servicemember— regardless of whether their mortgage was obtained prior to active duty status—who is receiving Hostile Fire/Imminent Danger Pay or is serving at a location more than 750 miles away from their home. This means that any servicemember meeting these conditions and living in a nonjudicial state who obtained a mortgage after obtaining active duty status could not be foreclosed upon without a court order. More information on the National Mortgage Settlement is contained in appendix II. Representatives from some mortgage servicers and industry associations cited challenges that make complying with SCRA difficult. First, mortgage servicers may not know at the time a mortgage is originated whether a borrower will be eligible for SCRA protections in the future. 
For example, a borrower would become eligible for SCRA mortgage protections after obtaining his or her mortgage by joining the active duty military or being called into active duty service while serving as a member of the reserve components. Therefore, mortgage servicers may not be able to flag loans at origination that could potentially become eligible for SCRA protections at a later date. Second, representatives from some mortgage servicers and industry associations also noted that military orders, which servicemembers must provide to their mortgage servicers in order to receive the SCRA interest rate protection, can be difficult to interpret. In particular, a representative from one mortgage servicer noted that the orders do not always clearly specify the start and end dates of active duty service and that the format and content of these orders can vary considerably across services, which may lead to mistakes by mortgage servicer personnel responsible for determining eligibility. Further, a DOD official explained that in some instances, military orders may not be available in a timely manner. For example, he stated that members of the reserve components may be alerted that their unit is being mobilized on a certain date; however, the servicemembers may not get the actual military orders until weeks later. This delay could lead to problems for both a servicemember and a mortgage servicer. For example, if a servicemember has been deployed, he or she may encounter difficulties sending orders to his or her mortgage servicer. Without the orders, a mortgage servicer may encounter difficulties verifying the servicemember’s active duty start date in order to appropriately adjust their payment amounts. One of the primary tools mortgage servicers use to comply with SCRA is a website operated by DOD’s Defense Manpower Data Center (DMDC) that allows mortgage servicers and others to query DMDC’s database to determine the active duty status of a servicemember. 
DMDC collects, archives, and maintains DOD personnel data. Representatives from mortgage servicers indicated that they use this website to confirm if a borrower is an active duty servicemember and may be eligible for SCRA protections and that they rely on the site to confirm if a servicemember is on active duty status prior to conducting a foreclosure. Representatives from one mortgage servicer also noted that they use the website to confirm the period of time that borrowers are eligible for the SCRA interest rate protections. The website is an important compliance tool because servicemembers are eligible for the foreclosure protections even if they do not notify their mortgage servicers that they are serving on active duty. However, many representatives from mortgage servicers and industry associations with whom we spoke cited challenges with the usability of the website. Moreover, confusion appears to exist in the mortgage servicing industry about the availability of information in the database. For example, prior to April 2012, the website only allowed mortgage servicers to inquire about borrowers’ active duty status one individual at a time. The inability to test large numbers of borrowers simultaneously—known as batch testing—made confirming borrowers’ SCRA eligibility difficult given the large volumes of mortgages that some institutions service. Representatives from some mortgage servicers also indicated that sometimes the personnel information available from DMDC is not complete or accurate and that the database may produce a false-negative result. That is, it will indicate that servicemembers were not on active duty status when in fact they were. DMDC officials explained that information contained in the database depends on information provided to DMDC by the various services. Therefore, if a service has not reported a servicemember to DMDC as being on active duty status, the database will report that the servicemember is not on active duty. 
Additionally, representatives from mortgage servicers told us that they believe some servicemembers are not listed in the database. For example, one explained that, in some instances, they have received orders from servicemembers, but when they query the database to confirm the active duty status, the servicemembers are not listed as on active duty. Other mortgage servicer representatives believed that some servicemembers may not be listed in the database for national security reasons, such as those serving in the Special Forces. However, DMDC officials told us that active duty status is updated for all servicemembers, including those on special operations. To help address these challenges, DOD is working with the mortgage servicing industry and industry associations to improve both the usability of the website and the readability of military orders. First, to aid mortgage servicers’ ability to query the database, DMDC has developed and implemented a way for mortgage servicers and others to conduct batch queries of the database from the website for up to 250,000 servicemembers at a time. DOD officials also noted that they are trying to develop the capability of the database to query historical information and also to distinguish between those active duty periods for servicemembers in the National Guard that provide SCRA protections and those that do not. Second, DOD has collaborated with the financial industry through the Financial Services Roundtable’s Housing Policy Council—a consortium of financial institutions that provide mortgage credit—to develop an alternative military order form that servicemembers can attach to or provide in lieu of their military orders when requesting relief under SCRA from their mortgage servicers. This form is intended to be easier for mortgage servicers to interpret as it is shorter and more standardized than official orders, which can vary by service. 
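Because batch queries are capped at 250,000 records, a servicer checking a portfolio larger than that must split its submission into multiple batches. The partitioning itself is trivial, as the sketch below shows; the 250,000 limit comes from the text, while the file format and submission mechanics of the DMDC website are not described in this report and are omitted here.

```python
def partition_for_batch_query(borrowers, batch_limit=250_000):
    """Split a borrower list into chunks no larger than the DMDC batch limit."""
    return [borrowers[i:i + batch_limit]
            for i in range(0, len(borrowers), batch_limit)]

# A hypothetical portfolio of 600,000 records yields three batches.
batches = partition_for_batch_query(list(range(600_000)))
print([len(b) for b in batches])  # [250000, 250000, 100000]
```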
According to DOD officials, this alternative form was approved by DOD in December 2011 and has been distributed to the military services as well as to financial institutions and is being used by servicemembers. Prudential regulators—FDIC, Federal Reserve, NCUA, and OCC—are responsible for supervising depository institutions’ compliance with various federal consumer laws including SCRA. Consumer compliance examinations are one of the primary tools regulators use to assess this compliance. Prudential regulators all use a risk-based approach to consumer compliance examinations to determine which areas to target, with areas of higher risk receiving greater focus during examinations. For example, according to the FDIC consumer compliance examination manual, riskier areas may include ones that involve regulatory changes or complex products. Regulatory officials also told us that because of this risk-based approach, SCRA may not be included or fully addressed within the scope of an examination. For example, officials from one regulator told us that when deciding to include SCRA in an examination they may consider, among other things, consumer complaints, internal audit results of the institution’s compliance management system, and problems raised in the media. Regulators also use the risk-based approach to determine the specific examination procedures they use to assess compliance. Areas of higher risk would be subject to more extensive review procedures, while areas of lower risk would receive less extensive review. For example, according to OCC’s examination manual, areas of greater risk may involve more extensive testing of loan transactions for compliance. In 2009, the regulators developed interagency examination procedures related to SCRA through the Federal Financial Institutions Examination Council (FFIEC), including a specific checklist that examiners can use in their examinations. 
The interagency SCRA procedures and checklist indicate that examiners should determine whether depository institutions applied and properly calculated interest-rate reductions, whether any foreclosures were conducted without a court order, and whether any servicemember requests for SCRA protection were inappropriately reported as adverse information to a credit reporting agency. Additionally, the interagency procedures suggest, among other things, that examiners (1) consider reviewing SCRA policies, procedures, and account documentation when assessing the adequacy of the institution’s internal controls and (2) review whether the depository institution’s compliance reviews and audit materials include transaction testing of samples covering relevant product types. The checklist contains a series of questions related to different sections of SCRA, including the ones that apply to residential mortgages. In addition to routine risk-based consumer compliance examinations, prudential regulators conduct targeted reviews of areas of high concern. For example, FDIC, Federal Reserve, and OCC conducted an interagency review of the foreclosure policies and practices of 14 mortgage servicers in late 2010, in response to the large number of foreclosures since 2007 and continued weaknesses in the mortgage market. The examiners evaluated the adequacy of each mortgage servicer’s operating procedures and controls and preparation of foreclosure documentation, among other things. Although the interagency review was not intended to directly assess SCRA compliance, during the course of this effort, two mortgage servicers nonetheless identified SCRA compliance problems. Additionally, in June 2011, OCC issued guidance to all of its regulated institutions that required them to conduct self-assessments of their foreclosure management practices. OCC examiners will review the self-assessments in the subsequent examination of the institutions. 
From 2007 through 2011, the extent to which SCRA was reviewed varied by the size of the depository institution, the year in which the examination took place, and the regulator that conducted the examination. Based on our review, we estimate that from 2007 through 2011, prudential regulators reviewed SCRA compliance in at least one examination for 48 percent of all the institutions they oversaw that serviced mortgages. This estimate includes documentation of an SCRA review for any type of loan product (e.g., residential mortgage, credit card, automobile, and other types of products). Some of the reasons bank examiners cited for including SCRA in the scope of an examination included the need to follow up on previous violations and deficiencies, changes in regulatory requirements, and identification of SCRA loans being serviced. To determine the extent to which SCRA compliance was included in examinations of depository institutions and the procedures examiners used to assess SCRA compliance, we reviewed workpapers for examinations conducted by FDIC, Federal Reserve, NCUA, and OCC. We reviewed the workpapers for examinations from 2007 to 2011 for a sample of 152 institutions that service mortgages they hold in their loan portfolios or service mortgages for other institutions. The 152 institutions represented a stratified random sample, based on institution size and regulator, of institutions examined from 2007 through 2011. Because officials from some regulators told us that they may not conduct an examination for every institution every 12 months, and because SCRA might not be covered in each risk-based examination, we looked at examinations spanning a 5-year period. Based on our sample, we found that prudential regulators included a review of SCRA compliance in at least one examination for a greater percentage of large institutions than all other institutions. 
In this report, the 40 large institutions comprise the 10 largest mortgage servicers regulated by each of the four prudential regulators. Specifically, we found that about 70 percent of these large institutions were reviewed for SCRA compliance at least once from 2007 through 2011, compared with an estimated 48 percent of all other institutions for the same period. Officials from one regulator indicated that one reason for this difference might be that the larger institutions conducted more mortgage lending than smaller institutions; therefore, examiners may be more likely to review SCRA compliance at larger institutions. We reviewed examinations for all 40 of the large institutions, and therefore the percentage presented is the percentage of these 40 large institutions, not an estimate. For our estimates of the remaining institutions, we are 95 percent confident that the actual percentage of these institutions that were examined for SCRA is between 45 percent and 52 percent. Reviews of SCRA compliance also became more common in the later years of this period, occurring in an estimated 26 percent of all institutions, compared with 2007, when about 4 percent of all institutions were reviewed for SCRA. Figure 2 shows the distribution in the percentage of institutions examined for SCRA compliance for each year from 2007 through 2011. Some of the regulatory officials told us that reasons for the differences by year may include the adoption of SCRA interagency examination procedures in 2009 and increased attention to the impacts of the financial crisis on servicemembers in recent years. We also found that among just the 40 large institutions, a greater percentage had an SCRA compliance review in 2010 and 2011 compared with earlier years: in 2010 and also in 2011, about 40 percent of the institutions had an SCRA review; about 13 percent of these institutions were reviewed for SCRA compliance in 2009; about 23 percent in 2008; and in 2007, 10 percent were reviewed for SCRA compliance. 
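The interval estimate above rests on standard sampling theory for a proportion. The following sketch illustrates the underlying idea with a simple, unstratified normal-approximation interval and hypothetical counts (the function name and the 54-of-112 figures are illustrative only; the interval reported in this review is tighter because it reflects the stratified sample design and weighting):

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval for a sample proportion.

    z = 1.96 corresponds to a 95 percent confidence level.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts: 54 of 112 sampled institutions had an SCRA review.
p_hat, lower, upper = proportion_ci(54, 112)
print(f"estimate {p_hat:.1%}, 95% CI [{lower:.1%}, {upper:.1%}]")
```

With a stratified design, each stratum's proportion is estimated separately and the estimates are combined with population weights, which is why the actual interval can be narrower than this simple calculation suggests.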
Our analysis also revealed differences by regulator in the extent to which SCRA was reviewed for compliance. Figure 3 shows that both FDIC and Federal Reserve reviewed a significantly higher percentage of institutions for SCRA compliance compared with NCUA and OCC. It also shows that OCC reviewed a greater percentage of institutions than NCUA. NCUA officials explained that the agency does not have a separate consumer compliance examination function and that consumer compliance is part of its overall evaluation of the safety and soundness of institutions. The officials said that given the recent economic crisis, the agency has placed more focus on the safety and soundness of credit unions than on compliance with consumer regulations. They said that this is part of the reason the percentage of credit unions that received an SCRA compliance review is so low. However, our prior work has found that mortgage servicing problems, including inadequate controls over foreclosure processes, have led to risks to the safety and soundness of depository institutions. For the estimated 52 percent of institutions that were not examined for SCRA compliance from 2007 through 2011, examiners did not document their reasons for excluding SCRA for at least 95 percent of these institutions. In our review, we found four examinations for which examiners had documented in the workpapers a reason for not including SCRA compliance. For three of these examinations, the reason cited was that examiners had recently examined for SCRA compliance and found no violations, deficiencies, or other concerns. The fourth examination reviewed the depository institutions’ progress in addressing consumer compliance issues identified in the previous examination and because SCRA compliance was not one of the issues of concern identified in the previous examination, it was excluded from the examination we reviewed. 
Regulatory officials offered a few reasons to explain why an examiner may not include SCRA compliance in an examination. For example, officials from one regulator said that some depository institutions might not serve large military populations. Therefore, examiners might not consider compliance with SCRA mortgage protections a substantial risk to these institutions. Additionally, officials from one prudential regulator indicated that examiners may choose to exclude SCRA compliance from an examination if the institution had received few complaints concerning SCRA-related issues. The regulators indicated that they had received very few SCRA complaints related to residential mortgages between 2007 and 2011 compared with the number of consumer complaints they received overall during this period. As part of our review of examination workpapers for the 152 institutions in our sample, we collected information on the procedures examiners used to assess compliance with SCRA if an examination reviewed residential mortgage loans or if the workpapers did not specify the loan product being addressed. We included examinations in which the loan product was not specified to help ensure that we reviewed any examination procedures that may have addressed residential mortgages. Our review found a total of 83 institutions for which examiners either reviewed SCRA compliance for residential mortgage loans or did not specify the loan product being reviewed. The figures presented for this analysis are not generalizable to the population of institutions that service mortgages. After reviewing examination guidance and auditing standards, we grouped examiners’ documented examination procedures into three categories based on our professional judgment as to the extent that each type of procedure would provide assurance that financial institutions were complying with SCRA:

Interviews with depository institution personnel. This category includes activities in which examiners interviewed staff at the depository institution for information on, among other things, their compliance management systems and whether the institution services loans to servicemembers eligible for SCRA protections.

Assessments of depository institutions’ compliance management systems. This category includes instances in which examiners documented that they reviewed the quality of depository institutions’ compliance management systems, such as reviewing institutions’ SCRA policies and procedures, internal controls, and training programs.

Testing loan files for SCRA compliance. This category includes activities such as testing a limited number of loan files the institution identified as SCRA-eligible or conducting more comprehensive testing, such as reviewing a statistical sample of loan files.

Of these categories, the first category—interviews with depository institution personnel—provides the least assurance of SCRA compliance because the examiner would be relying primarily on assertions provided by institution personnel rather than an independent assessment or verification of these assertions. The second category—assessments of institutions’ compliance management systems—provides greater assurance of SCRA compliance because these procedures require examiners to independently assess the quality of the depository institutions’ procedures and internal controls. The final category—testing of loan files—provides even greater assurance of SCRA compliance because examiners can independently verify whether the institution’s personnel provided all necessary SCRA protections. 
Although in many examinations examiners documented that they used an assortment of examination procedures from different categories to assess compliance with SCRA, we categorized each of the 83 institutions whose SCRA compliance was assessed during the 5-year period of our review by the highest assurance level of the examination procedures that were used in any of the examinations done of that institution from 2007 through 2011. Based on this analysis, we found that only about half of these institutions had any testing conducted during this 5-year period. Specifically, of these 83 institutions, we found that 6 institutions had examinations during this period that relied on interviews of depository institution staff to assess SCRA compliance as their highest category of examination procedure, 36 institutions had examinations in which the highest category of examination procedure used to assess SCRA compliance was to review the institution’s compliance management system, and 41 institutions had examinations that involved testing of loan files as the highest category of examination procedure—the examination procedure category that provides a greater level of assurance for SCRA compliance than the previous two categories. However, at the 41 institutions at which examiners tested loan files, we found that the type of testing conducted was limited. Examiners can choose from different types of testing methods that provide differing levels of assurance that an institution is complying with SCRA. For example, within the testing category, testing a limited sample of loan files that depository institutions identified as SCRA-eligible provides less assurance of compliance because it relies on assertions by depository institutions of SCRA eligibility, whereas testing a statistical sample of loans provides greater assurance because it allows examiners to independently select files for testing, and the results would be representative of the institution’s compliance. 
In the examinations we reviewed, the examiners mostly tested a limited sample of loans that the depository institution had identified as SCRA-eligible and, therefore, provided less assurance that the institution was complying with SCRA. We found no instances between 2007 and 2011 in which examiners tested a statistical sample of either loans in foreclosure or mortgage loan files in general, which would have provided the greatest assurance of an institution’s SCRA compliance. By testing only foreclosure files or mortgage loan files that the depository institution had identified as SCRA-eligible, examiners cannot fully determine if the institution has appropriately identified all eligible servicemembers. By expanding the scope of testing to include a larger sample of foreclosure and mortgage loan files, beyond just those files that the depository institution had identified as SCRA-eligible, examiners could better ensure that institutions are appropriately identifying eligible servicemembers and providing them all of the protections to which they are entitled. To minimize the burden on institutions and examiners, such reviews could be conducted as part of samples of loans drawn for examining compliance with other regulatory requirements. In addition to the prudential regulators, other federal agencies conduct oversight of SCRA compliance. SCRA authorizes DOJ to commence a civil action against any person who engages in a pattern or practice of violating the act or if a violation of the act raises an issue of significant public importance. DOJ staff indicated that they consider military attorneys to be the most likely staff to help ensure that a servicemember is afforded their SCRA protections. For cases in which a military attorney is unable to obtain voluntary compliance from a mortgage servicer or other person or entity doing business with a servicemember, DOJ has a system in place to receive referrals for these cases and to open investigations. 
DOJ officials told us and military attorneys confirmed that, in most cases, military attorneys are able to resolve SCRA matters without referring them to DOJ. DOJ also receives referrals for SCRA investigations from private attorneys and individual servicemembers and their families. When DOJ receives an SCRA referral, officials investigate the matter and determine if a full investigation should be opened. Investigations can result in DOJ filing a civil action against the party in court for alleged SCRA violations, or a resolution with the party may be reached without filing the case in court. DOJ filed a total of five cases in court from 2007 through 2011 for SCRA violations. Two of these cases—BAC Home Loans Servicing, LP and Saxon Mortgage Services, Inc.—involved SCRA violations regarding servicemembers’ mortgages. In May 2011, DOJ took enforcement actions against both mortgage servicers for wrongfully foreclosing upon active duty servicemembers without obtaining court orders. DOJ alleged that both mortgage servicers did not consistently check the military status of borrowers on whom they foreclosed, resulting in 165 improper foreclosures between 2006 and December 2010 (as listed previously in table 1). In its enforcement actions, DOJ required each of these mortgage servicers to pay damages to servicemembers and conduct a variety of remedial actions. For example, BAC Home Loans Servicing agreed to pay at least $20 million to resolve the lawsuit, and Saxon Mortgage Services agreed to pay at least $2.35 million. The mortgage servicers were also required to, among other things, (1) implement revised SCRA policies and procedures for using the DMDC website, (2) implement a foreclosure monitoring program, (3) provide SCRA compliance training to all applicable employees, and (4) conduct reviews to identify additional servicemembers who may have had their SCRA rights violated and compensate them. Appendix II discusses these reviews in more detail. 
In addition to the 5 SCRA cases DOJ filed in court, DOJ opened 45 additional SCRA investigations from referrals it received between 2007 and 2011, 9 of which involved servicemembers’ mortgages. One of these referrals involved a servicemember’s request to waive the prepayment penalty on her mortgage when she received a permanent change-of-station order and sold her home to move closer to the new base. DOJ was able to reach a resolution with the mortgage servicer without trying the case in court. Another investigation involved allegations of a mortgage servicer charging interest in excess of the SCRA maximum of 6 percent. DOJ officials stated that this investigation was resolved in favor of the servicemember. Finally, in February 2012, DOJ settled with five of the nation’s largest mortgage servicers for a variety of improper mortgage servicing procedures, including allegations of SCRA violations. More information on the National Mortgage Settlement is contained in appendix II. Other federal agencies that operate mortgage programs also oversee certain aspects of SCRA compliance. For example, to participate in FHA’s mortgage programs, mortgage servicers must comply with the agency’s program requirements, which include complying with all applicable laws and regulations, including SCRA. FHA officials explained that they use a risk-based approach to monitor the institutions that service the loans the agency insures. Officials told us that from 2007 through 2011, FHA conducted about 200 mortgage servicer monitoring reviews. They explained that each review consists of a sample of the mortgage servicer’s loan files and FHA staff use a checklist to help ensure that the mortgage servicer is in compliance with a variety of servicing requirements for each loan in the sample. One of the requirements reviewed for each loan is the distribution of the HUD counseling notice that includes information on SCRA eligibility to borrowers who are at least 45 days delinquent. 
Agency officials explained that a more thorough review of SCRA compliance is conducted if a mortgage servicer has identified that the borrower is an active duty servicemember. For loans that a mortgage servicer has marked with a code to indicate that the borrower is an active duty servicemember, FHA staff conduct additional steps to better ensure that the mortgage servicer has provided the servicemember appropriate SCRA protections, as well as additional protections that FHA provides to active duty servicemembers who have FHA-insured loans. These steps include ensuring that the interest rate has been appropriately adjusted and that foreclosure was postponed. FHA officials stated that they rely on mortgage servicers to appropriately identify active duty servicemembers. Although they may review SCRA compliance on specific loans, FHA officials told us that their reviews are not intended to assess the adequacy of the mortgage servicers’ SCRA compliance policies and procedures or to determine whether these policies are functioning for all of a servicer’s activities. As a result of FHA’s servicer monitoring reviews, some SCRA compliance problems have been identified. For example, FHA officials told us that between 2007 and 2011 the agency found two instances of SCRA noncompliance during its mortgage servicer monitoring reviews. One of these instances involved a mortgage servicer failing to send the HUD counseling notice that includes information on SCRA eligibility to borrowers delinquent 45 or more days, and the other violation involved a mortgage servicer failing to verify a borrower’s active duty status prior to foreclosure. Although VA interacts with mortgage servicers as part of its Home Loan Guaranty Program, VA officials explained that the program currently does not conduct in-depth reviews of mortgage servicers’ policies and procedures and loan files to review overall compliance with SCRA mortgage protections. 
The officials explained that they are in the process of finalizing a program that will conduct on-site audits of mortgage servicers’ functions and that they expect the program to be implemented in late 2012. VA officials explained that this program will include reviews of servicers’ loan files and policies and procedures for monitoring and identifying SCRA-eligible borrowers to determine servicers’ overall compliance with SCRA mortgage protections. Officials explained that in the wake of recently identified SCRA violations, they conducted a review of all VA loan files that were in foreclosure from October 2009 to January 2011 to determine if any of the borrowers were possibly eligible for SCRA mortgage protections. The officials said that they identified approximately 30,000 borrowers in foreclosure during that period and that they conducted an in-depth review of 47 loans that were potentially eligible for SCRA mortgage protections. VA determined that none of these borrowers were improperly foreclosed upon. They have recently expanded this review to include a longer time period, but as of June 2012, they had not completed the review to determine if any borrowers were improperly foreclosed upon. VA officials explained that they do conduct reviews of the adequacy of servicing being conducted by servicers. These reviews—Adequacy of Servicing reviews—are conducted on all loans over 120 days delinquent to determine if servicers have provided adequate servicing to borrowers, but according to VA officials, they are intended to explore loss mitigation options and not to examine for SCRA compliance. The officials explained that during these reviews, VA reviews mortgage servicers’ notes on the account to determine if they have provided adequate servicing to the borrower. 
Specifically, they check to see if the mortgage servicer has contacted the borrower, if a reason for default has been determined, if loss mitigation options have been considered, and why any loss mitigation options that were considered were not completed. Officials explained that if the mortgage servicer has taken the appropriate steps, VA would determine that the servicing provided was adequate. If VA determines that the servicing being provided was not adequate, or if the servicer was unable to contact the borrower, it conducts supplemental servicing on the loan and works with the borrower directly to explore loss mitigation options. According to VA officials, they may learn during these reviews that the loan involves an active duty servicemember. However, the Adequacy of Servicing reviews currently do not evaluate the extent to which servicers have assessed whether borrowers are eligible for SCRA mortgage protections. They also explained that while their procedures for conducting these reviews do not address reviewing for compliance with SCRA mortgage protections, VA loan technicians encourage borrowers to review their SCRA mortgage protections with military attorneys. Additionally, VA officials explained that they do not have a mechanism for tracking if these reviews have identified SCRA-eligible borrowers. As part of VA’s mission to serve servicemembers, VA officials told us that they try to ensure that servicemembers have received every opportunity to keep their homes and avoid foreclosure. VA officials explained that they rely on federal regulators to investigate and enforce statutory requirements, such as SCRA. However, given that VA staff also oversee servicers’ activities, they do have the opportunity to review servicers’ efforts to determine SCRA eligibility, such as by making an inquiry with the servicer of the loan or consulting DOD records to determine if the borrower is an active duty servicemember. 
Without such a review, the extent to which the agency is ensuring servicemembers are receiving all protections to which they are entitled is not clear. The enterprises—Fannie Mae and Freddie Mac—also conduct SCRA compliance monitoring at the mortgage servicers that service loans on their behalf. This monitoring focuses on enforcing contractual requirements between the enterprises and mortgage servicers to ensure that mortgage servicers are following the servicing guidelines issued by the enterprises. The servicing guidelines outline mortgage servicers’ compliance obligations for several different laws and regulations, including SCRA. The SCRA components of the guidelines include information for mortgage servicers on, among other things, how SCRA relief is initiated and how interest rates are reduced, as well as foreclosure proceedings and credit reporting. Enterprise officials explained that SCRA compliance is not included in each review conducted. If it is included, Fannie Mae officials explained that examiners seek to understand how a mortgage servicer checks for SCRA compliance and conduct testing of the servicer’s accounting methods for SCRA compliance. For example, if a servicemember has an interest rate that is greater than 6 percent, examiners test to ensure that interest rate and payment amounts have been properly reduced. Freddie Mac officials told us they assess the mortgage servicer’s understanding of SCRA and the procedures in place to ensure compliance. The enterprises’ SCRA compliance monitoring efforts have identified some instances of noncompliance. For example, Fannie Mae identified 13 instances of noncompliance with its SCRA guidelines between 2007 and 2011, and Freddie Mac has identified 2 instances. These instances of noncompliance involved issues such as mortgage servicers not having comprehensive SCRA compliance policies and procedures and mortgage servicers not properly verifying the active duty status of servicemembers. 
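The test described above, confirming that interest rate and payment amounts have been properly reduced to the 6 percent cap, reduces to a standard fixed-rate amortization calculation. A simplified sketch with hypothetical loan terms follows; it ignores escrow, fees, and the exact effective dates of the servicemember's active duty period (under SCRA, interest above 6 percent is forgiven rather than deferred):

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortized monthly payment for a fixed-rate loan."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

SCRA_RATE_CAP = 0.06  # SCRA caps interest at 6 percent during eligible periods

# Hypothetical loan: $200,000 over 30 years at a pre-service rate of 8 percent.
original = monthly_payment(200_000, 0.08, 360)
capped = monthly_payment(200_000, min(0.08, SCRA_RATE_CAP), 360)
print(f"original ${original:,.2f}/mo, capped ${capped:,.2f}/mo")
```

An examiner applying this kind of check would compare the servicer's billed payment for the eligible period against the recalculated capped payment.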
Officials from FHFA—the enterprises’ regulator—stated that its supervisory focus for SCRA compliance is to confirm that the enterprises are taking steps to ensure that the mortgage servicers with which they have contracts comply with the contracts’ requirements, which include compliance with applicable laws. Although the prudential regulators, FHA, VA, and FHFA all have a role in helping ensure that mortgage servicers provide appropriate SCRA protections to eligible servicemembers, currently none of these entities share information related to SCRA compliance with one another. While the extent of oversight conducted by these entities varies, they do review for some of the same SCRA provisions, such as those related to interest rate reductions and foreclosures. Furthermore, some of the mortgage servicers that participate in FHA’s and VA’s loan programs and service loans on behalf of the enterprises are also subject to oversight by one of the prudential regulators, which review for SCRA compliance during their examinations. Although these agencies obtain SCRA-related information about many of the same institutions, FHA, VA, and FHFA officials stated that they have not coordinated with the prudential regulators on SCRA compliance issues. Further, FHFA officials stated that while they participate in some forums with the prudential regulators to coordinate on various issues, they were not aware of any coordination related to SCRA compliance (see GAO, Financial Market Regulation: Agencies Engaged in Consolidated Supervision Can Strengthen Performance Measurement and Collaboration, GAO-07-154 (Washington, D.C.: Mar. 15, 2007)). Federal agencies have, however, previously shared information about their fair lending oversight programs. For example, the agencies established the Interagency Fair Lending Task Force to develop a coordinated approach to address discrimination in lending and adopted a policy statement on how the various agencies were to conduct oversight and enforce the fair lending laws. 
At that time, federal officials said that coordinating on fair lending issues allows the agencies to exchange information on a range of common issues, informally discuss fair lending policy, and confer about current trends or challenges in fair lending oversight and enforcement. FHFA officials explained that the agency has existing memorandums of understanding with prudential regulators and HUD that establish the protocols they use to discuss trends, risks, and other emerging issues on a variety of topics with these other agencies, but that SCRA has not been a topic during these discussions. FHFA does not currently have a memorandum of understanding with VA to share information, but the officials explained that they have worked with the agency in the past on issues such as appraisals and that they have done so through letter arrangements that allow them to share information. These existing arrangements could provide a mechanism for SCRA information to be shared between FHFA and the prudential regulators, FHA, and VA. However, currently no such sharing arrangements exist between the prudential regulators, FHA, and VA. Although FHA, VA, and the enterprises that FHFA oversees have identified limited instances of SCRA violations in recent years, the sharing of information related to SCRA trends, emerging risks, or types of weaknesses found in mortgage servicers’ policies among all agencies that play a role in SCRA compliance oversight could increase awareness of potential problems and improve their ability to identify SCRA violations. Under SCRA, DOD services’ Secretaries and the Secretary of Homeland Security have the primary responsibility for ensuring that servicemembers receive information on their SCRA rights and protections. Servicemembers are informed of their SCRA rights in a variety of ways. 
For example, briefings are provided on military bases and during deployment activities; legal assistance attorneys provide counseling; and a number of outreach media, such as publications and websites, are aimed at informing servicemembers of their SCRA rights. According to DOD officials, the legal assistance attorneys are primarily responsible for leading the military’s SCRA education efforts. Each of the military services, including the Coast Guard under DHS, operates a number of legal assistance offices throughout the country. Legal assistance offices are operated by military and civilian legal assistance attorneys who are responsible for providing support to servicemembers on a variety of legal issues, including family law and estate planning. As part of their responsibilities, they inform servicemembers about their rights and benefits under SCRA. Legal assistance attorneys provide SCRA support to servicemembers using various methods. We spoke with legal assistance attorneys at six military installations across the five services. They told us that they provide servicemembers with information on SCRA during routine briefings on military installations, in handouts, and during one-on-one sessions with individual servicemembers. Two legal assistance attorneys told us that they alert installation staff, including unit commanders, to direct servicemembers to their legal assistance offices if they have a problem. Legal assistance attorneys also told us that they will contact depository institutions on behalf of servicemembers to help them receive their SCRA protections. Some legal assistance attorneys also told us that they provide templates of letters for servicemembers to send to their mortgage servicer to request a reduction in their mortgage interest rate. 
Additionally, legal assistance attorneys told us that they will refer servicemembers to the American Bar Association’s (ABA) Military Pro Bono Project if they are unable to resolve an SCRA matter for a servicemember. ABA’s program connects active duty servicemembers to pro bono attorneys who assist them with civil legal problems. SCRA requires that servicemembers be informed of the rights and protections available under SCRA upon entry into the military, during initial orientation training, and, in the cases of members of the reserve components, when called to active duty for a period of more than 1 year. Predeployment briefings generally occur at the military installation that deploys the servicemember and, in addition to SCRA, cover a range of other legal and financial issues, such as the preparation of wills and powers of attorney. According to DOD officials, members of the reserve components may receive this briefing numerous times at their home station prior to deployment. Servicemembers are also provided with an additional opportunity to learn about their rights under SCRA upon returning from deployment. According to DOD officials, because some SCRA protections extend for a 9- or 12-month period beyond servicemembers’ active duty service, obtaining information at the end of deployment is critical for those servicemembers who will no longer be on active duty and will lose access to military-provided legal assistance. As a result, the Army reserve component—which includes the Army Reserve and Army National Guard and is the largest portion of the reserve components—requires that members receive standardized postdeployment training on SCRA. DOD and DHS use a number of other methods to deliver SCRA information to servicemembers, including military training courses, publications, websites, and other family support services. For example, DHS officials told us that all Coast Guard members are informed of their SCRA rights during basic training.
Some others may receive additional SCRA training during their initial officer training at the Coast Guard Academy or other advanced classes. DOD also publishes general articles in newsletters and installation publications explaining servicemembers’ SCRA rights and more specific articles on the relationship between mortgage difficulties and SCRA. Additionally, several military websites contain information on SCRA, including websites for individual services and military installations and sites such as Military OneSource—a DOD online resource that is staffed with counselors who offer assistance to servicemembers on a variety of topics, including financial counseling. DOD also provides financial management and family support services through the family readiness centers located at military installations. During periods of deployment, these centers provide servicemembers’ families with general financial management counseling on topics such as reducing debt and saving for college; the centers also share information on SCRA and refer family members to the legal assistance office if they have an SCRA issue. Other federal agencies also provide SCRA outreach and support to servicemembers and financial institutions in a variety of ways, including oral briefings, written notifications, and websites. For example, VA officials told us that some servicemembers who leave active duty service participate in a multiday briefing conducted in partnership with VA, DOD, and the Department of Labor. This briefing discusses reentering civilian life, SCRA protections, and veterans’ benefits. Additionally, both VA and FHA provide SCRA-related outreach to the institutions that participate in their mortgage programs. For example, VA periodically sends written notifications to all of its loan servicers reminding them of their compliance responsibilities and alerts them to changes in the act when they occur. FHA also provides information to its mortgage servicers on SCRA.
Its website contains a list of questions and answers for mortgage servicers on SCRA, servicemembers’ eligibility criteria, and FHA policies with respect to servicing FHA-insured mortgages in compliance with SCRA. The Consumer Financial Protection Bureau (CFPB) has an Office of Servicemember Affairs that also plays a role in providing SCRA outreach to servicemembers and mortgage servicers responsible for complying with the act. As of May 30, 2012, CFPB officials had conducted 37 visits to military installations and National Guard units and met with legal assistance attorneys to discuss consumer protection issues servicemembers have been facing, including SCRA. CFPB also sent letters to 25 large mortgage servicers in 2011 alerting them to servicemembers’ rights under SCRA and their responsibilities to comply with the act. The letters specifically urged mortgage servicers to educate their employees about SCRA and review their loan files to ensure compliance with the law. Additionally, CFPB has held meetings in which representatives from DOD and DHS and other federal agencies, financial institutions, and trade associations discussed issues related to SCRA compliance. CFPB officials also held a forum in which financial institutions discussed activities—some that go beyond those required by SCRA—they were undertaking to assist servicemembers with their mortgages. In July 2011, CFPB and the Judge Advocate Generals of the Army, Marine Corps, Navy, Air Force, and Coast Guard developed a joint statement of principles to provide stronger protections for servicemembers in connection with consumer financial products and services. Finally, through its consumer response function, CFPB also works directly with servicemembers by collecting consumer complaints against depository institutions and coordinating those complaints with servicemembers’ depository institutions and, if necessary, the appropriate legal assistance offices.
Finally, military servicemember groups also assist servicemembers with SCRA issues. Organizations such as the National Military Family Association, the Military Officers Association of America, the Reserve Officers Association, and others provide information on SCRA to their members in a variety of ways. A representative from one military servicemember group explained that its website—which contains background information on SCRA and legal reviews of specific SCRA provisions—is the group’s primary means of providing information to servicemembers and the public on SCRA issues. Other representatives with whom we spoke said that they provide information to their members when changes to SCRA have occurred. For example, one military servicemember group highlights applicable legislative changes in weekly electronic notifications to its members. DOD officials, legal assistance attorneys, and representatives of military servicemember groups with whom we spoke noted a number of challenges with ensuring that servicemembers are aware of their SCRA protections. One main challenge cited was servicemembers’ retention of the SCRA information they receive from DOD and DHS. Attorneys at each of the six legal assistance offices told us that servicemembers are not aware of the full extent of their SCRA rights. In addition, several military servicemember group representatives, a National Guard Bureau official, and an SCRA expert told us that despite available information on SCRA, servicemembers are not adequately prepared to invoke their rights when needed. According to DOD officials, the bulk of the SCRA education provided to servicemembers occurs at military installations that focus on regular active duty servicemembers. However, members of the reserve components—those most likely to qualify for SCRA’s mortgage protections—may not be located at military installations and, therefore, have less access to these services and trainings. 
One DOD official told us that members of the reserve components may receive SCRA briefings at their home station. However, legal assistance attorneys at five of the six legal assistance offices with whom we spoke told us that members of the reserve components have limited access to legal assistance offices on military installations when they are not on active duty. This limited access to legal assistance could affect reserve component members’ ability to avail themselves of their SCRA protections when needed. Additionally, some members of the reserve components face geographic challenges with accessing legal assistance offices due to their distance from military installations. About half of the military installations in the United States are located in just 10 states, while members of the reserve components live throughout the country. For example, the Chief Legal Assistant for the Ninth Coast Guard District explained that the legal assistance office for that district is located in Cleveland, Ohio, but the office provides legal services to the entire Great Lakes Region. Another challenge in ensuring that servicemembers are aware of their SCRA protections when needed is the effectiveness of the educational briefings provided by DOD and DHS. As discussed above, SCRA requires that servicemembers receive SCRA training upon entry into the military, during initial orientation training, and, for members of the reserve components, when called to active duty for a period of more than 1 year. However, legal assistance attorneys who conduct this training and military servicemember groups explained that its effectiveness is diminished because of the volume of information presented, the timing of the training, and the availability of legal assistance resources.
For example, four military officials told us that during predeployment activities and annual National Guard weekend training activities, servicemembers attend multiple back-to-back briefings covering a variety of important legal and financial topics, such as family law and estate planning. One military attorney referred to these briefings as “baptism by a fire hose” to illustrate the volume of information provided to servicemembers at these critical times. Further, military attorneys with whom we spoke told us that the amount of time legal assistance attorneys are able to spend with servicemembers during pre- and postdeployment activities is limited due to the volume of servicemembers deploying and returning from deployment. For example, one military attorney told us that legal assistance attorneys might assist 250 deploying servicemembers with their legal affairs prior to deployment and that during deployment there is limited time available to provide legal assistance. Another legal assistance attorney stated that it would be beneficial for members of the reserve components to have more time to access military legal assistance resources when they return from deployment because of concerns that they do not retain the information they receive during postdeployment briefings. One legal assistance attorney who assists members of the reserves explained that when he provides SCRA-related briefings to deployed servicemembers who should have received SCRA briefings prior to deployment, many seem as though they are hearing the information for the first time. He suggested that servicemembers’ retention of information could be improved if deploying servicemembers received more comprehensive briefings in smaller groups.
Additionally, a National Guard Bureau official told us that predeployment briefings contain too much information for servicemembers to absorb, including the relatively small portions of the briefings that include information on SCRA. These methods of providing SCRA information raise concerns about servicemembers’ ability to retain the information they receive during these trainings. Without adequate awareness, servicemembers may not take full advantage of their protections under SCRA. As discussed above, the methods of SCRA training and outreach provided by DOD and DHS to regular active duty servicemembers and members of the reserve components may not be adequate to ensure that these servicemembers are aware of and benefiting from the full protections provided by the act. In 2008, DOD asked in its annual Status of Forces Surveys whether active duty servicemembers and members of the reserve components had received SCRA training. Forty-seven percent of members of the reserve components—including those who had been activated in 2008—reported in the survey that they had received SCRA training, and only 35 percent of regular active duty servicemembers reported that they had received training. While these numbers may not reflect the number of servicemembers who received SCRA training, they do provide an indication of the number of servicemembers who recalled receiving such training. DOD also surveys servicemembers on a variety of issues related to their benefits; however, according to DOD officials who conduct these surveys, servicemembers have not been surveyed on the effectiveness of DOD’s SCRA educational efforts. DHS officials also told us they have not evaluated the effectiveness of their SCRA education methods for members of the Coast Guard and Coast Guard Reserve. In addition to surveying servicemembers on the effectiveness of SCRA-related education methods, DOD and DHS could use other techniques to assess the effectiveness of their education efforts.
For example, servicemembers could be tested after a period of time to determine how much information they retained from the SCRA component of their predeployment briefings. Additionally, focus groups could be held with servicemembers to review the understandability of written materials provided on SCRA. Without understanding the extent to which existing SCRA educational efforts are effective, DOD and DHS are not able to determine if their methods are adequate to ensure that servicemembers avail themselves of the benefits to which they are entitled. Ensuring that mortgage servicers fully comply with SCRA can protect servicemembers from undue financial harm. Our analysis of risk-based compliance examinations conducted by the four prudential regulators estimated that about half of all the depository institutions that serviced mortgages were reviewed for SCRA compliance from 2007 through 2011. However, during these examinations, examiners conducted testing of loan files, the procedure used to verify that mortgage servicers’ SCRA compliance processes and controls were functioning properly, at only 41 of the 83 institutions for which we reviewed examination procedures over the 5-year period. Moreover, for these 41 institutions, examiners did not use the testing procedures most likely to detect instances of noncompliance. Examination guidance and auditing standards suggest that testing is a part of effective monitoring. Furthermore, additional testing of loan files using methods that provide greater assurance of compliance is warranted given that thousands of violations at some large mortgage servicers have been documented through federal agencies’ targeted reviews and mortgage servicers’ own internal reviews, but not through the prudential regulators’ routine compliance examinations.
Without additional testing of foreclosure files and, as appropriate, other mortgage loan files not identified by the depository institution as SCRA-eligible, and without employing testing methods that provide greater assurance of compliance, prudential regulators may not be able to determine whether these institutions are extending protections to all eligible servicemembers. Although not a direct regulator of financial institutions that service mortgages, VA does interact with mortgage servicers as part of its Home Loan Guaranty program and therefore has an interest in ensuring that these institutions are complying with SCRA. However, the current level of monitoring that VA conducts of mortgage servicers that participate in its program provides little assurance that eligible servicemembers with VA-guaranteed loans are receiving their full SCRA mortgage protections. By not routinely reviewing mortgage servicers’ overall compliance with SCRA mortgage protections, the agency cannot be assured that mortgage servicers participating in its program have policies and procedures that function properly to provide these protections. The agency’s development of a new program to conduct on-site audits of mortgage servicers’ overall operations provides a good opportunity for the agency to expand its efforts related to SCRA compliance. Further, if VA determines during its Adequacy of Servicing reviews that servicers’ loss mitigation efforts have not been successful or adequate, it provides supplemental servicing on loans. During its Adequacy of Servicing reviews and while conducting supplemental servicing, VA would have the opportunity to take steps to determine if servicers assessed whether borrowers were eligible for SCRA protections.
Because the agency’s entire mission is dedicated to benefiting individuals who have served the country through military service, expanding its procedures to review for SCRA compliance at mortgage servicers that participate in its mortgage program could help the agency achieve its mission and better ensure that servicemembers are receiving all benefits to which they are entitled. Because multiple federal agencies play a role in ensuring that mortgage servicers provide SCRA protections to eligible servicemembers, sharing information on SCRA compliance could benefit these agencies’ respective SCRA oversight efforts. Most agencies responsible for SCRA oversight conduct risk-based reviews and therefore do not always include SCRA compliance in their reviews. Sharing information on SCRA compliance issues could alert agencies to potential problems and improve agencies’ ability to identify SCRA violations. We have previously found that collaboration among supervisory agencies can lead to more effective supervision and that such collaboration does occur for certain consumer compliance laws. However, no such sharing of SCRA compliance information currently takes place routinely among the prudential regulators, FHA, VA, and FHFA. Further, because these entities monitor SCRA compliance at many of the same institutions, the sharing of information could help them to more quickly identify compliance problems that may adversely affect servicemembers. Many of these agencies already have existing mechanisms for sharing information that could be used or expanded to periodically share information on SCRA compliance. The Secretaries of the Army, Navy, Air Force, and Homeland Security are responsible for educating servicemembers on their SCRA rights. DOD and DHS provide this information through a variety of methods throughout servicemembers’ military careers.
However, servicemembers may often be unaware of their SCRA rights for a variety of reasons, such as the volume and variety of information they must retain from educational briefings. Members of the reserve components in particular face unique challenges that can affect whether they learn of and are able to obtain assistance with SCRA protections because they have more limited access to military legal assistance locations and to SCRA-related training opportunities. Additionally, CFPB’s recently created Office of Servicemember Affairs has been working with DOD and DHS to identify opportunities to increase servicemembers’ awareness of SCRA protections, and the results of this work could provide useful information to assist in this effort. While DOD has surveyed servicemembers on whether they had received SCRA training, neither DOD nor DHS has assessed the effectiveness of its educational methods to determine if better ways exist to ensure that servicemembers retain the information they receive on SCRA and can recall it when they need it. Without such an assessment, such as by using focus groups of servicemembers or testing to reinforce retention of SCRA information, DOD and DHS may not be able to ensure they are reaching servicemembers in the most effective manner. To better ensure SCRA compliance oversight, we recommend that the Comptroller of the Currency, the Chairman of the Board of Governors of the Federal Reserve System, the Chairman of the Federal Deposit Insurance Corporation, and the Chairman of the National Credit Union Administration take steps to increase the frequency with which examiners (1) conduct testing of foreclosure files and, as applicable, other mortgage loan files; and (2) employ testing methods that provide greater assurance that mortgage servicers are complying with SCRA.
To help ensure that VA assists servicemembers with remaining in their homes and avoiding foreclosure, the Secretary of Veterans Affairs should ensure that a review for SCRA compliance is included in the department’s new mortgage servicer monitoring program and that additional steps to assess SCRA compliance are taken by VA staff during its Adequacy of Servicing reviews and while conducting supplemental servicing. Additionally, to increase agencies’ awareness of potential problems with SCRA compliance, the Comptroller of the Currency, the Chairman of the Board of Governors of the Federal Reserve System, the Chairman of the Federal Deposit Insurance Corporation, the Chairman of the National Credit Union Administration, the Acting Director of the Federal Housing Finance Agency, the Secretary of Housing and Urban Development, and the Secretary of Veterans Affairs should explore options to use existing mechanisms or develop new ones to share information related to SCRA compliance oversight. Finally, the Secretary of Defense—through the Secretaries of the Army, Air Force, and Navy—and the Secretary of Homeland Security should assess the effectiveness of their efforts to educate servicemembers on SCRA to determine better ways for making servicemembers aware of their SCRA rights and benefits, including improving the ways in which members of the reserve components obtain such information. We requested comments on a draft of this report from CFPB, DHS, DOJ, DOD, FDIC, Federal Reserve, FHFA, HUD, NCUA, OCC, and VA. We received formal written comments from DHS, DOD, FDIC, Federal Reserve, FHFA, HUD, NCUA, OCC, and VA; these are presented in appendixes III through XI, respectively. We also received technical comments from CFPB, DOJ, DOD, FDIC, Federal Reserve, FHFA, OCC, and VA, which we incorporated as appropriate. 
Federal Reserve, NCUA, and OCC agreed to take actions in response to our recommendations that they increase the frequency with which their examiners conduct testing of mortgage and foreclosure files and employ testing methods that will provide greater assurance of mortgage servicers’ compliance with SCRA. The Federal Reserve’s Director of the Division of Consumer and Community Affairs noted that Federal Reserve examiners apply interagency examination procedures to test the sufficiency of a depository institution’s program for ensuring its employees provide appropriate protections to active duty servicemembers, and that it will work with the other federal financial regulators to consider appropriate ways to update the interagency SCRA examination procedures. The Director’s letter notes that the Federal Reserve considers interviews with bank staff and reviews of institutions’ compliance management systems to be types of examiner testing. Although our report acknowledges that such steps can provide useful information regarding an institution’s SCRA compliance, we recommended that the regulators provide greater assurance of SCRA compliance by increasing the frequency of loan file testing. NCUA’s Executive Director agreed that additional testing of loan files would provide greater assurance of SCRA compliance. His letter also notes that NCUA has made recent changes to its examination process to raise the importance of consumer protection issues, noting that beginning with its 2011 examinations, staff separate from safety and soundness examiners review the lending practices of federal credit unions to ensure compliance with SCRA. Further, NCUA noted that it has also incorporated reviews for SCRA compliance into its analysis and investigations of complaints.
The Comptroller of the Currency noted that OCC will update its examination guidelines to ensure that a review of SCRA compliance is conducted during each supervisory cycle for its regulated institutions, and that such reviews will include the testing of loan files selected using an appropriate methodology to assess compliance with SCRA. FDIC’s Director of its Division of Depositor and Consumer Protection did not comment on our recommendation but agreed that testing a representative sample of loans for compliance with SCRA is an effective tool to assess compliance for large mortgage servicers. However, his letter also noted that having examiners interview bank employees also serves as an effective tool for assessing compliance with consumer protection laws and regulations, and that such interviews are often used to verify that the depository institution is conducting sufficient employee training and is enforcing its policies and procedures. We agree that conducting interviews of depository institution personnel can be a useful procedure to examine for SCRA compliance, but supplementing such actions with increased testing of loan files provides an even greater level of assurance that an institution is complying with SCRA. VA concurred with our recommendation that it ensure that a review for SCRA compliance is included in its new mortgage servicer monitoring program and also indicated that its staff would be taking additional steps to assess SCRA compliance during Adequacy of Servicing reviews and while conducting supplemental servicing. In a written response, VA’s Chief of Staff noted several activities the agency conducts to help ensure that veterans are aware of their SCRA protections. He stated that VA will revalidate and, as necessary, revise its focus and procedures to ensure veteran borrowers are receiving all SCRA protections to which they are entitled.
Additionally, he noted that VA will include in its mortgage servicer monitoring program a review to ensure that servicers appropriately afford SCRA-eligible borrowers their mortgage protections as part of their loss mitigation efforts. Finally, he said that VA will incorporate additional steps into its Adequacy of Servicing reviews to assess whether the servicer appropriately provided SCRA mortgage protections to eligible borrowers. Federal Reserve, FHFA, HUD, NCUA, OCC, and VA agreed with our recommendation that the federal agencies involved in overseeing mortgage servicers’ SCRA compliance should explore using existing mechanisms or developing new ones to share information related to SCRA compliance oversight. The Federal Reserve’s Director of the Division of Consumer and Community Affairs noted that additional interagency collaboration related to SCRA compliance trends and emerging risks may be appropriate and useful in improving supervisory practices related to SCRA compliance, and she agreed to explore other opportunities to share information related to SCRA compliance with other federal agencies. She stated that Federal Reserve staff are currently planning an interagency servicemember financial protection webinar for financial industry participants that is to include panelists from the federal supervisory agencies, as well as representatives from other agencies with SCRA oversight responsibility. FHFA’s Deputy Director for the Division of Enterprise Regulation also agreed that increased information sharing among supervisors of mortgage lending industry participants could assist in identifying potential compliance problems and in some cases could improve the identification of SCRA violations.
He noted that FHFA’s supervision function will consider whether the agency’s existing memorandums of understanding are sufficient or should be expanded to cover more types of information or more agencies to broaden information sharing on issues of supervisory concern, including SCRA compliance. He also noted that the supervision function would consider whether compliance oversight would be improved by developing processes for more frequent routine communications with supervisors of other market participants subject to mortgage lending compliance requirements. HUD’s Acting Assistant Secretary for Housing-Federal Housing Commissioner agreed that HUD should participate in agencies’ discussions to explore options to share information related to SCRA compliance, noting that HUD’s role should be a participatory one rather than a leadership one because the agency does not have responsibility for overseeing SCRA. Her letter also notes that HUD believes the scope of such collaboration should be broadened beyond just SCRA compliance to include all agencies’ mutual interests in single family housing issues, which we agree could be useful. The NCUA Executive Director’s letter notes that NCUA will use its participation in FFIEC and other interagency working groups to share information regarding the supervision of financial institutions and compliance concerns, and that it currently shares information with CFPB regarding consumer compliance oversight and is working with federal financial regulators to develop tools to facilitate information sharing. The Comptroller of the Currency stated in his response that OCC will continue to be an active member of the FFIEC Task Force on Consumer Compliance, which is an interagency organization that works collectively to develop examiner guidance and examination procedures and to discuss emerging risks or trends regarding new products and services.
He also noted that OCC, the other prudential regulators, and CFPB have signed a memorandum of understanding on supervisory coordination that outlines the coordination of examinations and the sharing of compliance oversight information, including information on SCRA. VA’s Chief of Staff noted that VA will collaborate with the agencies involved in SCRA compliance oversight to share information related to SCRA compliance. FDIC did not comment on this recommendation. DHS concurred and DOD partially concurred with our recommendation that they assess the effectiveness of their efforts to educate servicemembers on SCRA to determine better ways for making servicemembers aware of their SCRA rights and benefits, including improving the ways in which members of the reserve components obtain such information. The Director of DHS’s Departmental GAO-OIG Liaison Office noted that the Coast Guard strives to keep all its members fully aware of SCRA benefits and rights and that it will explore measures to assess the effectiveness of these efforts in the future. The Director of DOD’s Office of Legal Policy stated that the education and protection of servicemembers is DOD’s highest priority, that DOD continuously evaluates the effectiveness of the training servicemembers receive on their SCRA protections, and that it will continue to do so bearing our recommendation in mind. His letter also notes that DOD recently testified before Congress on efforts to conduct a survey on financial issues affecting servicemembers, which will further inform DOD’s efforts.
We are sending copies of this report to appropriate congressional committees, the Chairman of the Board of Governors of the Federal Reserve System, the Secretary of Defense, the Chairman of the Federal Deposit Insurance Corporation, the Acting Director of the Federal Housing Finance Agency, the Secretary of Homeland Security, the Secretary of Housing and Urban Development, the Chairman of the National Credit Union Administration, the Comptroller of the Currency, the Secretary of Veterans Affairs, the Director of the Consumer Financial Protection Bureau, and the U.S. Attorney General. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XII. Our objectives were to examine (1) what is known about Servicemembers Civil Relief Act (SCRA) eligibility, the number of violations that have occurred, and practices financial institutions use to comply with SCRA; (2) what oversight financial regulators and other federal agencies have taken to help ensure depository institutions’ compliance with the act; and (3) actions the Department of Defense (DOD), Department of Homeland Security (DHS), Department of Veterans Affairs (VA), and others have taken to ensure that servicemembers and others are informed of protections provided under the act. With the exception of our regulatory compliance review, the scope of our review includes only SCRA protections related to servicemembers’ residential mortgages. 
To describe what is known about the practices depository institutions use to comply with SCRA, we interviewed representatives from a non-generalizable sample of four large mortgage servicers and one national consumer credit reporting agency about their SCRA compliance practices and challenges and reviewed relevant policies and procedures. We selected 4 mortgage servicers that were among the 10 largest based on data from the Consolidated Reports of Condition and Income (Call Reports) on the unpaid principal balance of residential mortgages institutions own and service, plus mortgage loans they service on behalf of other institutions, and mortgage servicers that had participated in either the prudential regulators’ interagency review of foreclosure policies and practices or the U.S. House of Representatives Committee on Oversight and Government Reform’s investigation. We interviewed representatives of financial industry trade associations, including those that represent the mortgage industry, depository institutions with a large military customer base, and the credit reporting industry. We also interviewed officials from DOD’s Defense Manpower Data Center, which operates the website that depository institutions and others use to verify the active duty status of servicemembers. To determine what is known about SCRA violations that have occurred, we reviewed letters from 10 large mortgage servicers written in response to a House of Representatives Committee on Oversight and Government Reform investigation on mortgage servicers’ SCRA compliance history and practices. We also reviewed data on SCRA violations found during bank and credit union examinations conducted from 2007 through 2011 by the prudential regulators—the Board of Governors of the Federal Reserve System (Federal Reserve), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the Office of the Comptroller of the Currency (OCC).
We also reviewed available information from legal actions taken by the Department of Justice (DOJ) against two mortgage servicers for SCRA violations and a class action settlement against a large mortgage servicer for SCRA violations. Finally, we reviewed DOJ, Federal Reserve, and OCC enforcement actions against mortgage servicers for, among other things, foreclosure documentation problems that require the mortgage servicers to conduct reviews, which are currently ongoing, to determine historical SCRA violations. To assess the oversight prudential regulators have taken to help ensure depository institutions’ compliance with SCRA, we reviewed their examination policies and procedures and interviewed agency officials about their oversight activities related to SCRA. We also reviewed interagency examination procedures and checklists the regulators developed in 2009 to aid their oversight of SCRA. To assess the extent to which prudential regulators examined for SCRA, we selected a stratified random sample of 160 institutions from the population of all depository institutions that serviced mortgages as of November 2011 for institutions regulated by FDIC, Federal Reserve, and OCC, and September 2011 for institutions regulated by NCUA. We developed a certainty stratum composed of the 10 largest institutions regulated by each of the four prudential regulators—FDIC, Federal Reserve, NCUA, and OCC—for a total of 40 institutions. The remaining 120 institutions comprised an additional four strata of institutions of varying sizes—one stratum per prudential regulator. We used the Call Reports to identify mortgage servicers based on data the institutions reported on the unpaid principal balance of residential mortgages they own and service, plus mortgage loans they service on behalf of other institutions. 
We selected credit unions that service mortgage loans using data from the SNL Financial Database on the unpaid principal balance of real estate loans owned and serviced, plus those serviced on behalf of other institutions. We confirmed our use of the relevant mortgage variables in the SNL Financial Database with NCUA. We excluded any depository institutions that did not service mortgages. We then used these data to select a stratified random sample from the population of depository institutions that service mortgages. While we initially selected a sample of 160 institutions (40 for each regulator), we excluded 8 of the selected institutions from our analysis. Three institutions regulated by the Federal Reserve were excluded because they were recently chartered and therefore had not had an examination. We also excluded five credit unions because they were state-chartered, meaning that state supervisory authorities and not NCUA served as the primary regulator for these institutions. Table 2 provides more detail on the population, sample, and sample disposition by stratum. To determine the extent to which prudential regulators included SCRA compliance within the scope of their examinations, we requested SCRA-related examination workpapers, as well as documents examiners prepare to determine the scope of their examinations, for all consumer compliance examinations the prudential regulators conducted from 2007 through 2011. We reviewed examinations conducted over a 5-year period because regulatory officials told us that they may not conduct an examination for a particular institution every 12 months, and because SCRA might not be covered in each risk-based examination. We relied on the examination documentation provided to us by the prudential regulators to represent the full universe of examinations that were conducted for each institution in our sample between 2007 and 2011.
We did not independently verify that the examination documentation they provided to us represented the full universe of examinations they conducted over this period. We reviewed the documents we received and developed a data collection instrument (DCI) to capture the information we found in the examination documentation in a consistent manner. We determined that a depository institution had received an SCRA compliance review if examination workpapers revealed an SCRA compliance review for any type of loan product covered by the act (for example, residential mortgages, automobile loans, or credit card loans). We aggregated this examination-level data to the institution level and used the data to produce estimates of the percentage of all institutions for which the prudential regulators included an SCRA compliance review within an examination at least once during the 5-year period. Because we followed a probability procedure based on random selections, our sample of institutions is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. For estimates used in this report, we report the 95 percent confidence intervals along with the estimates themselves. We also report percentages based on the 10 largest institutions per regulator. Since these percentages are based on the total population of such institutions, they have no sampling error, and consequently confidence intervals are not reported for these percentages. We reviewed examination workpapers and used our DCI to document the procedures examiners indicated they used to assess SCRA compliance.
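The stratified estimate and its 95 percent confidence interval described above can be sketched in a few lines. The strata sizes and review counts below are hypothetical illustrations, not GAO’s actual sample data; the certainty stratum contributes no sampling variance because every institution in it was selected.

```python
import math

# Hypothetical strata: population size N, sample size n, and number of
# sampled institutions found to have received an SCRA compliance review.
# These figures are illustrative only, not the actual GAO data.
strata = [
    {"N": 40,   "n": 40, "reviewed": 25},  # certainty stratum: 10 largest x 4 regulators
    {"N": 1200, "n": 30, "reviewed": 14},
    {"N": 2500, "n": 30, "reviewed": 13},
    {"N": 1800, "n": 26, "reviewed": 12},
    {"N": 900,  "n": 26, "reviewed": 13},
]

N_total = sum(s["N"] for s in strata)

# Stratified estimate of the population proportion: weight each stratum's
# sample proportion by its share of the population.
p_hat = sum((s["N"] / N_total) * (s["reviewed"] / s["n"]) for s in strata)

# Variance of the stratified estimator with finite population correction;
# the certainty stratum (n == N) adds zero sampling variance.
var = 0.0
for s in strata:
    p_h = s["reviewed"] / s["n"]
    if s["n"] < s["N"]:
        fpc = (s["N"] - s["n"]) / s["N"]
        var += (s["N"] / N_total) ** 2 * fpc * p_h * (1 - p_h) / (s["n"] - 1)

margin = 1.96 * math.sqrt(var)  # half-width of the 95 percent interval
print(f"estimate: {p_hat:.1%} +/- {margin:.1%}")
```

With these illustrative counts the interval half-width comes out near 10 percentage points, matching the order of magnitude of the example cited in the text.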
We noted only the examination procedures for SCRA compliance reviews that involved residential mortgage loans or did not specify the type of loan product covered. Eighty-three institutions in our sample met these criteria. SCRA examination procedures for exams that solely focused on other loan products, such as credit cards and automobile loans, were outside the scope of our review. We then grouped the data we collected on examination procedures into four categories:

Requests for information from depository institutions. This includes activities such as requests for institutions’ internal audit results, policies and procedures, SCRA complaints, and lists of SCRA loans.

Interviews with depository institution personnel. This includes activities in which examiners interviewed staff at the depository institution for information on, among other things, their compliance management systems and whether the institution services SCRA loans.

Assessments of depository institutions’ compliance management systems. This category includes instances in which examiners documented that they reviewed the quality of depository institutions’ compliance management systems, such as reviewing institutions’ SCRA policies and procedures, internal controls, and training programs.

Testing loan files for SCRA compliance. This category includes activities such as testing a limited or statistical sampling of loans the institution identified as SCRA-eligible or conducting more comprehensive testing, such as reviewing a statistical sample of loan files.

Table 3 provides additional detail on the individual examination activities that comprise each of these categories. We reviewed prudential regulators’ examination guidance and government auditing standards, which note that various activities can provide increasing levels of assurance that reviewed entities are following their stated policies and procedures and that internal controls are functioning.
Based on this review, we grouped examiners’ documented examination procedures into four categories based on our professional judgment as to the extent to which the examination activities involved verification of assertions made by the depository institution regarding compliance with SCRA. For example, category 1—requests for information from depository institutions—provides the least assurance of SCRA compliance because it does not involve an assessment of compliance, but rather the collection of information. Category 2—interviews with depository institution personnel—also provides limited assurance because it relies primarily on assertions made by the institution. Category 4—testing of loan files—provides the greatest assurance of SCRA compliance within our categories because testing loan files allows examiners to independently verify whether an institution’s compliance procedures are functioning properly and whether SCRA protections are being appropriately extended to eligible borrowers. Examination guidance from three of the four prudential regulators cites the testing of individual loan transactions as the most extensive level of review for assurance that a depository institution is complying with laws and regulations. The guidance also indicates that testing a larger sample of loans, including a statistical sample, provides a fuller assessment of compliance than testing a limited sample. We placed institutions in each of the four categories based on the highest level of examination activity conducted from 2007 through 2011. The figures presented for this analysis are not generalizable to the population of institutions that service mortgages. To describe the SCRA compliance oversight activities of other federal agencies, we reviewed DOJ’s policies and procedures for receiving SCRA referrals and investigating SCRA cases and interviewed agency officials.
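Placing each institution in the highest category of examination activity observed across its 2007 through 2011 examinations reduces to taking a maximum over exam-level category codes. A minimal sketch of that tabulation, using hypothetical institution names and exam records rather than actual examination data:

```python
# Category codes, ordered from least (1) to greatest (4) assurance:
# 1 = information requests, 2 = interviews,
# 3 = compliance-management-system assessment, 4 = loan-file testing.
# The exam records below are hypothetical, for illustration only.
exams = [
    ("Bank A", 2008, {1, 2}),
    ("Bank A", 2010, {1, 2, 4}),  # loan files tested in the 2010 exam
    ("Bank B", 2009, {1}),
    ("Bank B", 2011, {1, 3}),
]

# Institution-level category = highest activity seen in any exam.
highest = {}
for name, year, categories in exams:
    level = max(categories)
    highest[name] = max(highest.get(name, 0), level)

print(highest)  # -> {'Bank A': 4, 'Bank B': 3}
```

Under this rule an institution whose files were tested in even one examination is credited with the highest-assurance category, which is consistent with the 5-year, institution-level framing described above.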
We also reviewed DOJ enforcement actions and investigations that DOJ was able to resolve without filing a court case related to servicemembers’ mortgages from 2007 through 2011. We also reviewed the SCRA compliance monitoring activities and policies and procedures of other federal agencies that play a role in the mortgage market. These agencies include the Federal Housing Administration (FHA), the Federal Housing Finance Agency (FHFA), and VA. We also reviewed the SCRA compliance monitoring efforts of two government-sponsored enterprises—Fannie Mae and Freddie Mac. We reviewed the guidance these agencies and enterprises provide to mortgage servicers participating in their programs and interviewed agency officials. To determine what actions DOD, DHS, VA, and others have taken to ensure servicemembers are informed of their SCRA rights, we reviewed the act to determine what it requires agencies to do and interviewed two SCRA experts. To describe what actions individual agencies were taking to inform servicemembers of their rights, we reviewed DOD and DHS policies and procedures and SCRA training materials and publications, and interviewed representatives from these agencies, including officials from DOD’s Office of Legal Policy, DHS, and the National Guard Bureau. We also reviewed DOD’s Status of Forces surveys to active duty servicemembers and members of the reserve components to determine efforts DOD has taken to assess the effectiveness of its methods of educating servicemembers about SCRA benefits. 
We selected six military installation legal assistance offices (one for the Army, Navy, Marine Corps, and Coast Guard and two for the Air Force) based on a geographic distribution of states with high numbers of foreclosures and large active duty and reservist populations and interviewed legal assistance attorneys who work in these offices to learn how the attorneys teach servicemembers about their SCRA protections and discuss the challenges servicemembers face asserting those protections. The six installations were: Fort Drum, New York; Randolph Air Force Base, San Antonio, Texas; Fort Sam Houston, San Antonio, Texas; Marine Corps Recruit Depot, San Diego, California; Coast Guard 9th District Command Center, Ohio; and Naval Air Station Pensacola, Florida. We reviewed examples of SCRA training and outreach that these offices develop and distribute to servicemembers. To learn about the specific challenges that members of the reserve components face, we also spoke with legal assistance attorneys from the Naval Reserves and the Ohio National Guard who were recommended to us by legal assistance attorneys with whom we spoke. To determine what actions other agencies, including VA, the Consumer Financial Protection Bureau, and FHA were taking to inform servicemembers and others of SCRA protections, we reviewed notifications they provide to mortgage servicers on SCRA compliance and interviewed officials at these agencies. We also interviewed representatives from the American Bar Association’s Legal Assistance for Military Personnel program to learn how they coordinate with legal assistance attorneys and assist servicemembers with SCRA issues. Finally, we interviewed representatives from seven military servicemember groups whose memberships represent a broad population of servicemembers and their families. 
These groups included the Reserve Officers Association, National Military Family Association, Military Officers Association of America, Air Force Sergeants Association, National Guard Association of the United States, Naval Enlisted Reserve Association, and Retired Enlisted Association. We conducted this performance audit from August 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As of June 2012, three federal agency reviews were under way to determine if servicemembers who were eligible for SCRA mortgage-related protections received them. A total of 14 mortgage servicers are involved in these reviews as a result of recent enforcement actions taken by the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Federal Reserve), and the Department of Justice (DOJ). While each review is separate, some overlap exists in the institutions and timeframes being reviewed. However, officials from DOJ told us they are coordinating the reviews to eliminate unnecessary duplication and overlap at institutions. DOJ also completed one review of a mortgage servicer—Saxon Mortgage Services—in May 2012. In response to deficiencies in the foreclosure process that various mortgage servicers publicly announced beginning in September 2010, OCC and the Federal Reserve conducted a coordinated (interagency) on-site review of 14 mortgage servicers to evaluate the adequacy of controls over their foreclosure processes and their policies and procedures for compliance with applicable federal and state laws.
This review identified various weaknesses and deficiencies in these mortgage servicers’ foreclosure operations, including violations of SCRA. As a result of these reviews, OCC and the Federal Reserve issued consent orders to the 14 mortgage servicers and their affiliates in April 2011, requiring these institutions to take various corrective actions. One of these actions required each of the mortgage servicers to retain a third-party consultant to conduct independent reviews of foreclosure actions that were initiated, pending, or completed on primary residences from January 1, 2009, through December 31, 2010, to identify borrowers who suffered financial injury as a result of errors, misrepresentations, or other deficiencies in foreclosure actions, and to remediate those borrowers, as appropriate. As part of these independent reviews, the consultants are required to review 100 percent of the foreclosure actions during 2009 and 2010 that involved servicemembers who may have been protected under SCRA. Because examiners reviewed a relatively small number of foreclosure files during the original interagency review, the reviews required by the consent orders are intended to be more comprehensive. The consent orders require the third-party consultants to develop detailed sampling methodologies for identifying foreclosure actions to include in the review. These methodologies are subject to OCC’s and the Federal Reserve’s approval. OCC officials told us that, in conjunction with DOD and DOJ, they have worked with the third-party consultants to develop a process to access the Department of Defense’s Defense Manpower Data Center’s database with custom queries in order for the third-party consultants to accurately identify the pool of potential SCRA-eligible borrowers.
To supplement the independent reviews, the regulators also required mortgage servicers and consultants to establish an outreach process for borrowers, including servicemembers, who believe they were financially harmed by improper foreclosure practices to request a review of their foreclosure case. These requests for review must be submitted to the mortgage servicers by September 30, 2012. According to officials from OCC and the Federal Reserve, as of May 2012, preliminary results from this review on instances of SCRA noncompliance were not available. On May 26, 2011, DOJ settled two cases against Saxon Mortgage Services and BAC Home Loans Servicing for allegations of violations of SCRA’s foreclosure provision. The consent orders for each of these cases dictated that damages be paid to affected servicemembers and remedial actions be taken by the mortgage servicers. BAC Home Loans Servicing agreed to pay at least $20 million to resolve the lawsuit, and Saxon Mortgage Services agreed to pay at least $2.35 million. The consent orders also required the mortgage servicers to, among other things, (1) implement revised SCRA policies and procedures, specifically about querying the Department of Defense’s Defense Manpower Data Center database that contains information on servicemembers’ active duty status, (2) implement a foreclosure monitoring program, (3) provide SCRA compliance training to all applicable employees, and (4) conduct reviews to identify additional servicemembers who may have had their SCRA rights violated and compensate them. The reviews required by the consent orders included the following: Saxon Mortgage Services was required to review all nonjudicial foreclosures conducted from January 1, 2006, through December 31, 2010, to determine compliance with SCRA. This review was completed in May 2012, and a total of 22 servicemembers were identified as having been improperly foreclosed upon.
BAC Home Loans Servicing is required to conduct reviews for both the foreclosure and interest-rate provisions of SCRA. Specifically, BAC Home Loans Servicing is to review all nonjudicial foreclosures it conducted from January 1, 2006, through December 31, 2010, for SCRA compliance. For the interest-rate review, the consent order required BAC Home Loans Servicing to retain an independent accounting firm to review a statistically valid sample of home mortgage files from January 1, 2008, through December 31, 2010, and issue a report on whether the mortgage servicer appropriately applied interest rates and fees to servicemembers’ mortgages as required by SCRA. DOJ officials told us that, as of June 2012, the BAC Home Loans Servicing review is ongoing. In February 2012, DOJ and 49 state attorneys general settled with five of the largest national mortgage servicers for a variety of improper mortgage servicing procedures, including allegations of SCRA violations. The $25 billion settlement was one of the largest financial recoveries obtained by the attorneys general in history and contains a number of provisions related to SCRA designed to protect servicemembers’ rights and to provide them additional benefits. To resolve allegations of liability that have not previously been settled, five mortgage servicers—Ally Financial Inc., Bank of America Corp., Citigroup Inc., JPMorgan Chase Bank, N.A., and Wells Fargo & Company—agreed to conduct a full review, overseen by DOJ’s Civil Rights Division, to determine whether any servicemembers were foreclosed on in violation of SCRA since January 1, 2006. Additionally, four of the mortgage servicers—Ally Financial Inc., Bank of America Corp., Citigroup Inc., and Wells Fargo & Company—agreed to conduct a thorough review of mortgage loans to determine whether any servicemember, since January 1, 2008, was charged interest in excess of 6 percent after submitting a valid request to lower the interest rate.
The agreement also specifies compensation above the $25 billion settlement amount for any SCRA foreclosure or interest-rate violations. Compliance with the SCRA provisions of the settlement will be overseen by DOJ’s Civil Rights Division. Table 4 summarizes these three reviews. In addition to the individual named above, Cody Goebel, Assistant Director; Meghana Acharya; Rachel Batkins; Rudy Chatlos; Christine Houle; John McGrail; Mark Ramage; and Jennifer Schwartz made key contributions to this report.
SCRA protects servicemembers whose active duty military service prevents them from meeting financial obligations, by allowing interest rates on certain debts to be reduced and requiring a court order before certain foreclosures on their homes can occur. With foreclosures rising, reports surfaced of instances in which financial institutions failed to comply with SCRA. GAO examined the (1) eligibility for SCRA protections and extent of SCRA mortgage-related violations by depository institutions, (2) SCRA compliance oversight by prudential regulators and other federal agencies, and (3) the military services’ efforts to educate servicemembers on SCRA. GAO collected data on populations eligible for SCRA from DOD and SCRA violations from banking and law enforcement agencies and reviewed a stratified random sample of prudential regulators’ examinations of banks and credit unions. GAO also interviewed regulators, law enforcement and military officials, and military service organizations. Certain protections under the Servicemembers Civil Relief Act (SCRA) only apply to those servicemembers who obtained mortgages prior to becoming active duty, but at least 15,000 instances of financial institutions failing to properly reduce servicemembers’ mortgage interest rates and over 300 improper foreclosures have been identified by federal investigations and financial institutions in recent years. Additional independent reviews of financial institutions’ compliance are under way, and staff from some of these institutions told GAO that they have implemented improved practices—such as creating single points of contact familiar with military issues for borrowers—to better comply with SCRA. Federal regulators’ oversight of SCRA compliance has been limited. 
GAO estimates that from 2007 through 2011 prudential depository institution regulators—the Federal Deposit Insurance Corporation, Federal Reserve Board, National Credit Union Administration, and Office of the Comptroller of the Currency—reviewed 48 percent of all banks and credit unions for SCRA compliance. Of these institutions that were reviewed for SCRA compliance, only about half received examinations that involved testing of compliance by reviewing loan files. Further, GAO found that examiners had only reviewed loans identified by the institution as involving servicemembers and had not independently selected a statistical sample of loan files, which would have provided greater assurance of SCRA compliance. Without more testing, which examination and auditing guidance suggest provides increased verification, regulators are less likely to detect SCRA violations. Various other federal agencies are involved in SCRA compliance oversight. The Department of Justice has explicit SCRA enforcement authority and since 2007 has brought three cases against mortgage servicers for violations. The Department of Veterans Affairs (VA), Federal Housing Administration, and Federal Housing Finance Agency—which regulates the government-sponsored enterprises—all obtain information about SCRA compliance at the servicers that participate in the mortgage programs they administer or regulate, but the agencies and the prudential regulators do not share such information among themselves. Collaboration among these agencies could lead to more effective supervision and improve their awareness of potential problems with SCRA compliance. Further, VA oversight of mortgage servicers does not specifically review for SCRA compliance. By increasing its SCRA compliance monitoring efforts, VA could better ensure that servicemembers with VA loans are better protected. 
SCRA requires that the Department of Defense (DOD) and Department of Homeland Security (DHS)—which oversees the Coast Guard—inform servicemembers of their SCRA rights. The military services provide this information in various forms, such as briefings and websites. However, some military officials said that servicemembers—particularly members of the National Guard and reserve—often receive SCRA information as part of briefings with numerous other topics prior to deployment and do not always retain the necessary awareness when they need it later. DOD and DHS do not assess the effectiveness of their SCRA education methods, such as by using focus groups of servicemembers or testing to reinforce retention of SCRA information. Without such assessment, they may not be able to ensure that they are informing servicemembers of their rights in the most effective manner. Prudential regulators should conduct more extensive loan file testing for SCRA compliance. Regulators and other agencies that oversee mortgage activities should also explore opportunities for information sharing on SCRA compliance oversight, and VA should expand its SCRA compliance monitoring efforts. Finally, DOD and DHS should assess the effectiveness of their efforts to provide SCRA information to servicemembers. The agencies generally agreed and noted actions responsive to GAO’s recommendations.
For 16 years, DOD’s supply chain management processes have been on our list of high-risk areas needing urgent attention because of long-standing systemic weaknesses that we have identified in our reports. We initiated our high-risk program in 1990 to report on government operations that we identified as being at high risk for fraud, waste, abuse, and mismanagement. The program serves to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. Removal of a high-risk designation may be considered when legislative and agency actions, including those in response to our recommendations, result in significant and sustainable progress toward resolving a high-risk problem. Key determinants include a demonstrated strong commitment to and top leadership support for addressing problems, the capacity to do so, a corrective action plan that provides for substantially completing corrective measures in the near term, a program to monitor and independently validate the effectiveness of corrective measures, and demonstrated progress in implementing corrective measures. Beginning in 2005, DOD developed a plan for improving supply chain management that could reduce its vulnerability to fraud, waste, abuse, and mismanagement and place it on the path toward removal from our list of high-risk areas. This supply chain management improvement plan, initially released in July 2005, contains 10 initiatives proposed as solutions to address the root causes of problems we identified from our prior work in the areas of requirements forecasting, asset visibility, and materiel distribution. DOD defines requirements as the need or demand for personnel, equipment, facilities, other resources, or services in specified quantities for specific periods of time or at a specified time. Accurately forecasted supply requirements are a key first step in buying, storing, positioning, and shipping items that the warfighter needs. 
DOD describes asset visibility as the ability to provide timely and accurate information on the location, quantity, condition, movement, and status of supplies and the ability to act on that information. Distribution is the process for synchronizing all elements of the logistics system to deliver the “right things” to the “right place” at the “right time” to support the warfighter. DOD’s success in improving supply chain management is closely linked with its overall defense business transformation efforts and completion of a comprehensive, integrated logistics strategy. In previous reports and testimonies, we have stated that progress in DOD’s overall approach to business transformation is needed to confront problems in other high-risk areas, including supply chain management. DOD has taken several steps intended to advance business transformation, including establishing new governance structures and aligning new information systems with its business enterprise architecture. Another key step to supplement these ongoing transformation efforts is completion of a comprehensive, integrated logistics strategy that would identify problems and capability gaps to be addressed, establish departmentwide investment priorities, and guide decision making. DOD’s success in improving supply chain management is closely linked with overall defense business transformation. Our prior reviews and recommendations have addressed business management problems that adversely affect the economy, efficiency, and effectiveness of DOD’s operations, and that have resulted in a lack of adequate accountability across several of DOD’s major business areas. We have concluded that progress in DOD’s overall approach to business transformation is needed to confront other high-risk areas, including supply chain management. 
DOD’s overall approach to business transformation was added to the high-risk list in 2005 because of our concern over DOD’s lack of adequate management accountability and the absence of a strategic and integrated action plan for the overall business transformation effort. Specifically, the high-risk designation for business transformation resulted because (1) DOD’s business improvement initiatives and control over resources are fragmented; (2) DOD lacks a clear strategic and integrated business transformation plan and investment strategy, including a well-defined enterprise architecture to guide and constrain implementation of such a plan; and (3) DOD has not designated a senior management official responsible and accountable for overall business transformation reform and related resources. In response, DOD has taken several actions intended to advance transformation. For example, DOD has established governance structures such as the Business Transformation Agency and the Defense Business Systems Management Committee. The Business Transformation Agency was established in October 2005 with the mission of transforming business operations to achieve improved warfighter support and improved financial accountability. The agency supports the Defense Business Systems Management Committee, which comprises senior-level DOD officials and is intended to serve as the primary transformation leadership and oversight mechanism. Furthermore, in September 2006, DOD released an updated Enterprise Transition Plan that is intended to be both a business transformation roadmap and management tool for modernizing its business process and underlying information technology assets. DOD describes the Enterprise Transition Plan as an executable roadmap aligned to DOD’s business enterprise architecture. 
In addition, as required by the National Defense Authorization Act for Fiscal Year 2006, DOD is studying the feasibility and advisability of establishing a Deputy Secretary for Defense Management to serve as DOD’s Chief Management Officer and advise the Secretary of Defense on matters relating to management, including defense business activities. Business systems modernization is a critical part of DOD’s transformation efforts, and successful resolution of supply chain management problems will require investment in needed information technology. DOD spends billions of dollars to sustain key business operations intended to support the warfighter, including systems and processes related to support infrastructure, finances, weapon systems acquisition, the management of contracts, and the supply chain. We have indicated at various times that modernized business systems are essential to the department’s effort in addressing its supply chain management issues. In its supply chain management improvement plan, DOD recognizes that achieving success in supply chain management is dependent on developing interoperable systems that can share critical supply data. One of the initiatives included in the plan is business system modernization, an effort that is being led by DOD’s Business Transformation Agency and includes achieving materiel visibility through systems modernization as one of its six enterprisewide priorities. Improvements in financial management are also integrally linked to DOD’s business transformation. Since our first report on the financial statement audit of a major DOD component over 16 years ago, we have repeatedly reported that weaknesses in business management systems, processes, and internal controls not only adversely affect the reliability of reported financial data, but also the management of DOD operations. 
Such weaknesses have adversely affected the ability of DOD to control costs, ensure basic accountability, anticipate future costs and claims on the budget, measure performance, maintain funds control, and prevent fraud. In December 2005, DOD issued its Financial Improvement and Audit Readiness Plan to guide its financial management improvement efforts. The Financial Improvement and Audit Readiness Plan is intended to provide DOD components with a roadmap for (1) resolving problems affecting the accuracy, reliability, and timeliness of financial information; and (2) obtaining clean financial statement audit opinions. It uses an incremental approach to structure its process for examining operations, diagnosing problems, planning corrective actions, and preparing for audit. The plan also recognizes that it will take several years before DOD is able to implement the systems, processes, and other changes necessary to fully address its financial management weaknesses. Furthermore, DOD has developed an initial Standard Financial Information Structure, which is DOD’s enterprisewide data standard for categorizing financial information. This effort focused on standardizing general ledger and external financial reporting requirements. While these steps are positive, defense business transformation is much broader and encompasses planning, management, organizational structures, and processes related to all key business areas. As we have previously observed, business transformation requires long-term cultural change, business process reengineering, and a commitment from both the executive and legislative branches of government. Although sound strategic planning is the foundation on which to build, DOD needs clear, capable, sustained, and professional leadership to maintain continuity necessary for success. 
Such leadership would provide the attention essential for addressing key stewardship responsibilities—such as strategic planning, performance management, business information management, and financial management—in an integrated manner, while helping to facilitate the overall business transformation effort within DOD. As DOD continues to evolve its transformation efforts, critical to successful reform are sustained leadership, organizational structures, and a clear strategic and integrated plan that encompasses all major business areas, including supply chain management. Another key step to supplement ongoing defense business transformation efforts is completion of a comprehensive, integrated logistics strategy that would identify problems and capability gaps to be addressed, establish departmentwide investment priorities, and guide decision making. Over the years, we have recommended that DOD adopt such a strategy, and DOD has undertaken various efforts to identify, and plan for, future logistics needs. However, DOD currently lacks an overarching logistics strategy. In December 2005, DOD issued its “As Is” Focused Logistics Roadmap, which assembled various logistics programs and initiatives associated with the fiscal year 2006 President’s Budget and linked them to seven key joint future logistics capability areas. The roadmap identified more than $60 billion of planned investments in these programs and initiatives, yet it also indicated that key focused logistics capabilities would not be achieved by 2015. Therefore, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed the department to prepare a rigorous “To Be” roadmap that would present credible options to achieve focused logistics capabilities. 
According to officials with the Office of the Secretary of Defense, the “To Be” logistics roadmap will portray where the department is headed in the logistics area and how it will get there, and will allow the department to monitor progress toward achieving its objectives, as well as institutionalize a continuous assessment process that links ongoing capability development, program reviews, and budgeting. It would identify the scope of logistics problems and capability gaps to be addressed and include specific performance goals, programs, milestones, resources, and metrics to guide improvements in supply chain management and other areas of DOD logistics. Officials anticipate that the initiatives in the supply chain management improvement plan will be incorporated into the “To Be” logistics roadmap. DOD has not established a target date for completing the “To Be” roadmap. According to DOD officials, its completion is pending the results of the department’s ongoing test of new concepts for managing logistics capabilities. The Deputy Secretary of Defense initiated this joint capability portfolio management test in September 2006 to explore new approaches for managing certain capabilities across the department, facilitating strategic choices, and improving the department’s ability to make capability trade-offs. The intent of joint capability portfolio management is to improve interoperability, minimize redundancies and gaps, and maximize effectiveness. Joint logistics is one of the four capability areas selected as test cases for experimentation. The joint logistics test case portfolio will include all capabilities required to project and sustain joint force operations, including supply chain operations. According to DOD officials, initial results of the joint logistics capability portfolio management test are expected to be available in late spring 2007, and the results of the test will then be used to complete the “To Be” logistics roadmap. 
The results of the test are also expected to provide additional focus on improving performance in requirements determination, asset visibility, and materiel distribution, officials said. We have also noted previously that while DOD and its component organizations have had multiple plans for improving aspects of logistics, the linkages among these plans have not been clearly shown. In addition to the supply chain management improvement plan, current DOD plans that address aspects of supply chain management include the Enterprise Transition Plan and component-level plans developed by the military services and the Defense Logistics Agency. Although we are encouraged by DOD’s planning efforts, the department lacks a comprehensive, integrated strategy to guide logistics programs and initiatives across the department. Without such a strategy, decision makers will lack the means to effectively guide program efforts and the ability to determine if these efforts are achieving the desired results. Although DOD is making progress implementing supply chain management initiatives, it is unable to demonstrate at this time the full extent to which it is improving supply chain management. DOD has established some high-level performance measures, but they do not explicitly address the focus areas, and an improvement in those measures cannot be directly attributed to the initiatives. Further, the metrics in DOD’s supply chain management improvement plan generally do not measure performance outcomes and costs. In addition to implementing audit recommendations, as discussed in the next section of this report, DOD is making progress improving supply chain management by implementing initiatives in its supply chain management improvement plan. For example, DOD has met key milestones in its Joint Regional Inventory Materiel Management, Radio Frequency Identification (RFID), and Item Unique Identification initiatives. 
Through its Joint Regional Inventory Materiel Management initiative, DOD began to streamline the storage and distribution of defense inventory items on a regional basis, in order to eliminate duplicate materiel handling and inventory layers. Last year, DOD completed a pilot for this initiative in the San Diego region and, in January 2006, began a similar transition for inventory items in Oahu, Hawaii, which was considered operational in August 2006. In May 2006, DOD published an interim Defense Federal Acquisition Regulation clause governing the application of tags to different classes of assets being shipped to distribution depots and aerial ports for the Radio Frequency Identification initiative. The Item Unique Identification initiative provides for marking personal property items with a set of globally unique data elements to help DOD value and track items throughout their life cycle. In September 2006, this initiative received approval from the International Organization for Standardization/International Electrotechnical Commission for an interoperable solution for automatic identification and data capture based on widely used international standards. DOD has sought to demonstrate significant improvement in supply chain management within 2 years of the plan’s inception in July 2005; however, the department may have difficulty meeting its July 2007 goal. Some of the initiatives are still being developed or piloted and have not yet reached the implementation stage, others are in the early stages of implementation, and some are not scheduled for completion until 2008 or later. For example, according to DOD’s plan, the Readiness Based Sparing initiative, an inventory requirements methodology that the department expects will enable higher levels of readiness at equivalent or reduced inventory costs using commercial off-the-shelf software, is not expected to begin implementation until January 2008. 
The Item Unique Identification initiative, which involves marking personal property items with a set of globally unique data elements to help DOD track items during their life cycles, will not be completed until December 2010 under the current schedule. While DOD has generally stayed on track, it has reported some slippage in meeting scheduled milestones for certain initiatives. For example, a slippage of 9 months occurred in the Commodity Management initiative because additional time was required to develop a departmentwide approach. This initiative addresses the process of developing a systematic procurement approach to the department’s needs for a group of items. Additionally, according to DOD’s plan, the Defense Transportation Coordination initiative experienced a slippage in holding the presolicitation conference because defining requirements took longer than anticipated. Given the long-standing nature of the problems being addressed, the complexities of the initiatives, and the involvement of multiple organizations within DOD, we would expect to see further milestone slippage in the future. The supply chain management improvement plan generally lacks outcome- focused performance metrics that track progress in the three focus areas and at the initiative level. Performance metrics are critical for demonstrating progress toward achieving results, providing information on which to base organizational and management decisions, and are important management tools for all levels of an agency, including the program or project level. Moreover, outcome-focused performance metrics show results or outcomes related to an initiative or program in terms of its effectiveness, efficiency, impact, or all of these. 
To track progress toward goals, effective performance metrics should have a clearly apparent or commonly accepted relationship to the intended performance or be reasonable predictors of desired outcomes; should not be unduly influenced by factors outside a program’s control; should measure multiple priorities, such as quality, timeliness, outcomes, and cost; should sufficiently cover key aspects of performance; and should adequately capture important distinctions between programs. Performance metrics enable the agency to assess accomplishments, strike a balance among competing interests, make decisions to improve program performance, realign processes, and assign accountability. While it may take years before the results of programs become apparent, intermediate metrics can be used to provide information on interim results and show progress toward intended results. In addition, when program results could be influenced by external factors, intermediate metrics can be used to identify the program’s discrete contribution to the specific result. DOD’s plan does include four high-level performance measures that are being tracked across the department, but these measures do not explicitly relate to the focus areas (nor are they required to). The four measures are as follows:

Backorders—number of orders held in an unfilled status pending receipt of additional parts or equipment through procurement or repair.

Customer wait time—number of days between the issuance of a customer order and satisfaction of that order.

On-time orders—percentage of orders that are on time according to DOD’s established delivery standards.

Logistics response time—number of days to fulfill an order placed on the wholesale level of supply, from the date a requisition is generated until the materiel is received by the retail supply activity. 
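As an illustration of how such measures are computed, the sketch below tallies backorders, average customer wait time, and on-time percentage from a handful of invented order records. The data structure, field names, and values are hypothetical and do not reflect DOD's actual systems; logistics response time would be computed the same way as customer wait time, but against wholesale-level requisitions.

```python
# Hypothetical sketch of three of the four high-level measures, computed
# from invented order records (not DOD data or data structures).
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Order:
    issued: date               # date the customer order was issued
    received: Optional[date]   # date the order was satisfied (None = still unfilled)
    standard_days: int         # assumed DOD delivery standard for this order

orders = [
    Order(date(2006, 1, 3), date(2006, 1, 10), 10),   # filled in 7 days
    Order(date(2006, 1, 5), date(2006, 1, 25), 12),   # filled in 20 days
    Order(date(2006, 2, 1), None, 10),                # unfilled: a backorder
]

filled = [o for o in orders if o.received is not None]

# Backorders: orders held in an unfilled status.
backorders = sum(1 for o in orders if o.received is None)

# Customer wait time: days between order issuance and satisfaction.
avg_wait = sum((o.received - o.issued).days for o in filled) / len(filled)

# On-time orders: share of filled orders meeting the delivery standard.
on_time_pct = 100 * sum(1 for o in filled
                        if (o.received - o.issued).days <= o.standard_days) / len(filled)

print(backorders, avg_wait, on_time_pct)  # 1 13.5 50.0
```

The point of the sketch is that each measure aggregates across all orders in a supply chain, which is why an individual initiative's contribution to movement in these numbers is hard to isolate.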
Additionally, these measures may be affected by many variables; hence, improvements in the high-level performance measures cannot be directly attributed to the initiatives in the plan. For example, implementing RFID at a few sites at a time has only a very small impact on customer wait time. However, variables such as natural disasters, wartime surges in requirements, or disruption in the distribution process could affect that measure. DOD’s supply chain materiel management regulation requires that functional supply chain metrics support at least one enterprise-level metric. DOD’s plan also lacks outcome-focused performance metrics for 6 of the 10 specific improvement initiatives contained in the plan. For example, while DOD intended to have RFID implemented at 100 percent of its U.S. and overseas distribution centers by September 2007—a measure indicating when scheduled milestones are met—it had not yet identified outcome-focused performance metrics that could be used to show the impact of implementation on expected outcomes, such as receiving and shipping timeliness, asset visibility, or supply consumption data. Two other examples of improvement initiatives that lack outcome-focused performance metrics are War Reserve Materiel, which aims to more accurately forecast war reserve requirements by using capability-based planning and incorporating lessons learned in Operation Iraqi Freedom, and Joint Theater Logistics, which is an effort to improve the ability of a joint force commander to execute logistics authorities and processes within a theater of operations. One of the challenges in developing departmentwide supply chain performance measures, according to a DOD official, is obtaining standardized, reliable data from noninteroperable systems. For example, the Army currently does not have an integrated method to determine receipt processing for Supply Support Activities, which could affect asset visibility and distribution concerns. 
Some of the necessary data reside in the Global Transportation Network, while other data reside in the Standard Army Retail Supply System. These two databases must be manually reviewed and merged in order to obtain the information for accurate receipt processing performance measures. Nevertheless, we believe that intermediate measures, such as outcome-focused measures for each of the initiatives or for the focus areas, could show near-term progress. According to a DOD official, in September 2006, DOD awarded a year-long supply chain benchmarking contract to assess commercial supply chain metrics. The official indicated that six outcome measures were chosen for the initial effort: on-time delivery, order fulfillment cycle time, perfect order fulfillment, supply chain management costs, inventory days of supply, and forecast accuracy. Furthermore, the specific supply chains to be reviewed will be recommended by the various DOD components and approved by an executive committee. According to the same DOD official, the contractor will examine each approved supply chain and its industry equivalent, and for each supply chain will develop a set of performance scorecards mapping the target supply segment to average and best-in-class performance from the comparison population, which will then be provided to the component. This assessment is a good step, but it is too early to determine the effectiveness of this effort in helping DOD to demonstrate progress toward improving its supply chain management. Further, we noted that DOD has not provided cost metrics that might show efficiencies gained through supply chain improvement efforts. In addition to improving the provision of supplies to the warfighter and improving readiness of equipment, DOD’s stated goal in its supply chain management improvement plan is to reduce or avoid costs. However, 9 of the 10 initiatives in the plan lack cost metrics. 
Without outcome-focused performance and cost metrics for each of the improvement initiatives that are linked to the focus areas of requirements forecasting, asset visibility, and materiel distribution, it is unclear whether DOD is progressing toward meeting its stated goal. Over the last 5 years, audit organizations have made more than 400 recommendations that focused specifically on improving certain aspects of DOD’s supply chain management. DOD or the component organization concurred with almost 90 percent of these recommendations, and most of the recommendations that were closed as of the time of our review were considered implemented. We determined that the three focus areas of requirements forecasting, asset visibility, and materiel distribution accounted for 41 percent of the total recommendations made, while other inventory management and supply chain issues accounted for the remaining recommendations. We also grouped the recommendations into five common themes—management oversight, performance tracking, policy, planning, and processes. Several studies conducted by non-audit organizations have made recommendations that address supply chain management as part of a broader review of DOD logistics. Appendixes I through V summarize the audit recommendations we included in our baseline. Appendix VI summarizes recommendations made by non-audit organizations. In developing a baseline of supply chain management recommendations, we identified 478 supply chain management recommendations made by audit organizations between October 2001 and September 2006. DOD or the component organization concurred with 411 (86 percent) of the recommendations; partially concurred with 44 recommendations (9 percent); and nonconcurred with 23 recommendations (5 percent). These recommendations cover a diverse range of objectives and issues concerning supply chain management. 
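The concurrence percentages follow directly from the reported counts; a quick tally (using the figures from this report, rounded to whole percentages) reproduces them:

```python
# Figures taken from the report's count of audit recommendations
# (October 2001-September 2006); percentages rounded to whole numbers.
counts = {"concurred": 411, "partially concurred": 44, "nonconcurred": 23}
total = sum(counts.values())
shares = {k: round(100 * v / total) for k, v in counts.items()}
print(total, shares)
# 478 {'concurred': 86, 'partially concurred': 9, 'nonconcurred': 5}
```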
For example, one recommendation with which DOD concurred was contained in our 2006 report on production and installation of Marine Corps truck armor. To better coordinate decisions about what materiel solutions are developed and procured to address common urgent wartime requirements, we recommended—and DOD concurred—that DOD should clarify the point at which the Joint Urgent Operational Needs process should be utilized when materiel solutions require research and development. In another case, DOD partially concurred with a recommendation in our 2006 report on Radio Frequency Identification (RFID), which consists of electronic tags that are attached to equipment and supplies being shipped from one location to another, enabling shipment tracking. To better track and monitor the use of RFID tags, we recommended—and DOD partially concurred—that the secretaries of each military service and the administrators of other components should determine requirements for the number of tags needed, compile an accurate inventory of the number of tags currently owned, and establish procedures to monitor and track tags, including purchases, reuse, losses, and repairs. In its response to our report, DOD agreed to direct the military services and the U.S. Transportation Command to develop procedures to address the reuse of the tags as well as procedures for the return of tags no longer required. However, the department did not agree to establish procedures to account for the procurement, inventory, repair, or losses of existing tags in the system. On the other hand, an example of a recommendation that DOD did not concur with was contained in our 2005 report on supply distribution operations. To improve the overall efficiency and interoperability of distribution-related activities, we recommended—but DOD did not concur—that the Secretary of Defense should clarify the scope of responsibilities, accountability, and authority between U.S. 
Transportation Command’s role as DOD’s Distribution Process Owner and other DOD components. In its response to our report, DOD stated that the responsibilities, accountability, and authority of this role were already clear. The audit organizations had closed 315 (66 percent) of the 478 recommendations at the time we conducted our review. Of the closed recommendations, 275 (87 percent) were implemented and 40 (13 percent) were not implemented as reported by the audit agencies. For example, one closed recommendation that DOD implemented was in our 2005 report on oversight of prepositioning programs. To address the risks and management challenges facing the department’s prepositioning programs and to improve oversight, we recommended that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to assess the near-term operational risks associated with current inventory shortfalls and equipment in poor condition should a conflict arise. In response to our recommendation, the Joint Staff conducted a mission analysis on several operational plans based on the readiness of prepositioned assets. On the other hand, an example of a closed recommendation that DOD did not implement was in our 2003 report on Navy spare parts shortages. To provide a basis for management to assess the extent to which ongoing and planned initiatives will contribute to the mitigation of critical spare parts shortages, we recommended that the Secretary of Defense direct the Secretary of the Navy to develop a framework that includes long-term goals; measurable, outcome-related objectives; implementation goals; and performance measures as a part of either the Navy Sea Enterprise strategy or the Naval Supply Systems Command Strategic Plan. DOD agreed with the intent of the recommendation, but not the prescribed action. 
The recommendation was closed but not implemented because the Navy did not plan to modify the Naval Supply Systems Command Strategic Plan or higher-level Sea Enterprise Strategy to include a specific focus on mitigating spare parts shortages. Audit recommendations addressing the three focus areas in DOD’s supply chain management improvement plan—requirements forecasting, asset visibility, and materiel distribution—accounted for 196 (41 percent) of the total recommendations. The fewest recommendations were made in the focus area of distribution, accounting for just 6 percent of the total. Other inventory management issues accounted for most of the other recommendations. In addition, a small number of recommendations, less than 1 percent of the total, addressed supply chain management issues that could not be grouped under any of these other categories. In further analyzing the recommendations, we found that they addressed five common themes—management oversight, performance tracking, policy, planning, and processes. Table 1 shows the number of audit recommendations made by focus area and theme. Most of the recommendations addressed processes (38 percent), management oversight (30 percent), or policy (22 percent), with comparatively fewer addressing planning (7 percent) and performance tracking (4 percent). The management oversight theme includes any recommendations involving compliance, conducting reviews, or providing information to others. For example, the Naval Audit Service recommended that the Office of the Commander, U.S. Fleet Forces Command should enforce existing requirements that ships prepare and submit Ship Hazardous Material List Feedback Reports and Allowance Change Requests, whenever required. The performance tracking theme includes recommendations with performance measures, goals, objectives, and milestones. 
For example, the Army Audit Agency recommended that funding for increasing inventory safety levels be withheld until the Army Materiel Command develops test procedures and identifies key performance indicators to measure and assess its cost-effectiveness and impact on operational readiness. The policy theme contains recommendations on issuing guidance, revising or establishing policy, and establishing guidelines. For example, the DOD-IG recommended that the Defense Logistics Agency revise its supply operating procedures to meet specific requirements. The planning theme contains recommendations related to plan, doctrine, or capability development or implementation, as well as any recommendations related to training. For example, the Army Audit Agency recommended that the Defense Supply Center in Philadelphia implement a Quality Assurance Surveillance Plan that encompasses all requirements of the prime vendor contract. The largest theme, processes, consists of recommendations that processes and procedures be established, documented, or implemented. For example, we recommended that the Secretary of Defense direct the service secretaries to establish a process to share information between the Marine Corps and Army on developed or developing materiel solutions. Studies conducted by non-audit organizations contain recommendations that address supply chain management as part of a broader review of DOD logistics. For example, the Center for Strategic and International Studies and the Defense Science Board suggested the creation of a departmentwide logistics command responsible for end-to-end supply chain operations. In July 2005, the Center for Strategic and International Studies issued a report, “Beyond Goldwater-Nichols: U.S. Government and Defense Reform for a New Strategic Era,” which addressed the entire U.S. national security structure, including the organization of logistics support. 
In this report, the study team acknowledged that recent steps, such as strengthening joint theater logistics and the existence of stronger coordinating authorities, have significantly increased the unity of effort in logistical support to ongoing operations. However, according to the study, much of this reflects the combination of exemplary leadership and the intense operational pull of Operation Iraqi Freedom, and has not been formalized and institutionalized by charter, doctrine, or organizational realignment. It further noted that the fact that a single Distribution Process Owner was needed to overcome the fragmented structure of DOD’s logistical system underscores the need for fundamental reform. The study team recommended the integration of the management of transportation and supply warehousing functions under a single organization such as an integrated logistics command. The report noted that the Commission on Roles and Missions also had recommended the formation of a logistics command back in 1995. In 2005, the Summer Study Task Force on Transformation, under the direction of the Under Secretary of Defense for Acquisition, Technology, and Logistics, convened to assess DOD’s transformation progress, including the transformation of logistics capabilities. In this assessment, issued in February 2006, the Defense Science Board observed that each segment in the supply chain is optimized for its own specific function. For example, in the depot shipping segment of the supply chain, packages are consolidated into truck-size loads in order to fill the trucks for efficiency. Yet optimizing each segment inevitably suboptimizes the major objective of end-to-end movement from source to user. The Defense Science Board report further indicated that although the assignment of the U.S. 
Transportation Command as the Distribution Process Owner was an important step toward addressing an end-to-end supply chain, it did not go far enough to meet the objective of an effective supply chain. The necessary step, in the board’s view, is to assign a joint logistics command the authority and accountability for providing this essential support to global operations. DOD does not systematically track the status of recommendations made by non-audit organizations as it does for those made by audit agencies. Hence, in our analysis, we did not determine the extent to which DOD concurred with or implemented recommendations from these organizations. Overcoming systemic, long-standing problems requires comprehensive approaches. Improving DOD’s supply chain management will require continued progress in defense business transformation, including completion of a comprehensive, integrated strategy to guide the department’s logistics programs and initiatives. In addition, while DOD has made a commitment to improving supply chain management, as demonstrated by the development and implementation of the supply chain management improvement plan, the plan generally lacks outcome-focused performance metrics that would enable DOD to track and demonstrate the extent to which its individual efforts improve supply chain management or the extent of improvement in the three focus areas of requirements forecasting, asset visibility, and materiel distribution. Furthermore, without cost metrics, it will be difficult to show efficiencies gained through supply chain improvement initiatives. 
To improve DOD’s ability to guide logistics programs and initiatives across the department and to demonstrate the effectiveness, efficiency, and impact of its efforts to resolve supply chain management problems, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following two actions: Complete the development of a comprehensive, integrated logistics strategy that is aligned with other defense business transformation efforts, including the Enterprise Transition Plan. To facilitate completion of the strategy, DOD should establish a specific target date for its completion. Further, DOD should take steps as appropriate to ensure the supply chain management improvement plan and component-level logistics plans are synchronized with the department’s overall logistics strategy. Develop, implement, and monitor outcome-focused performance and cost metrics for all the individual initiatives in the supply chain management improvement plan as well as for the plan’s focus areas of requirements forecasting, asset visibility, and materiel distribution. In its written comments on a draft of this report, DOD concurred with our recommendations. The department’s comments are reprinted in appendix VII. In response to our recommendation to complete the development of a comprehensive, integrated logistics strategy, DOD stated that the strategy is under development and is aligned with other defense business transformation efforts. DOD estimated that the logistics strategy would be completed 6 months after it completes the logistics portfolio test case in the spring of 2007. DOD did not address whether it would take steps to ensure the supply chain management improvement plan and component-level logistics plans are synchronized with the department’s overall logistics strategy. 
We continue to believe that these plans must be synchronized with the overall logistics strategy to effectively guide program efforts across the department and to provide the means to determine if these efforts are achieving the desired results. In response to our recommendation to develop, implement, and monitor outcome-focused performance and cost metrics, the department indicated it has developed and implemented outcome-focused performance and cost metrics for logistics across the department. However, DOD acknowledged that more work needs to be accomplished in linking the outcome metrics to the initiatives in the supply chain management improvement plan as well as for the focus areas of requirements forecasting, asset visibility, and materiel distribution. DOD stated that these linkages will be completed as part of full implementation of each initiative. We are pleased that the department recognized the need for linking outcome-focused metrics with the individual initiatives and the three focus areas in its supply chain management improvement plan. However, it is unclear from DOD’s response how and under what time frames the department plans to implement this goal. As we noted in the report, DOD lacks outcome-focused performance metrics for supply chain management, in part because one of the challenges is obtaining standardized, reliable data from noninteroperable systems. In addition, initiatives in the supply chain management plan are many years away from full implementation. If DOD waits until full implementation to incorporate outcome-based metrics, it will miss opportunities to assess progress on an interim basis. We also continue to believe that cost metrics are critical for DOD to assess progress toward meeting its stated goal of improving the provision of supplies to the warfighter and improving readiness of equipment while reducing or avoiding costs through its supply chain initiatives. 
Our discussion of the integration of supply chain management with broader defense transformation efforts is based primarily on our prior reports and testimonies. We obtained information on DOD’s “To Be” logistics roadmap and the joint logistics capabilities portfolio management test from senior officials in the Office of the Deputy Under Secretary of Defense for Logistics, Materiel, and Readiness. We met regularly with DOD and OMB officials to discuss the overall status of the supply chain management improvement plan, the implementation schedules of the plan’s individual initiatives, and the plan’s performance measures. We visited and interviewed officials from U.S. Transportation Command, the Defense Logistics Agency, the military services, and the Joint Staff to gain their perspectives on improving supply chain management. To develop a baseline of recommended supply chain management improvements, we surveyed audit reports covering the time period of October 2001 to September 2006. We selected this time period because it corresponds with recent military operations that began with the onset of Operation Enduring Freedom and, later, Operation Iraqi Freedom. We surveyed audit reports issued by our office, the DOD-IG, the Army Audit Agency, the Naval Audit Service, and the Air Force Audit Agency. For each audit recommendation contained in these reports, we determined its status and focus. To determine the status of GAO recommendations, we obtained data from our recommendation tracking system. We noted whether DOD concurred with, partially concurred with, or did not concur with each recommendation. In evaluating agency comments on our reports, we have noted instances where DOD agreed with the intent of a recommendation but did not commit to taking any specific actions to address it. For the purposes of this report, we counted these as concurred recommendations. We also noted whether the recommendation was open, closed and implemented, or closed and not implemented. 
In a similar manner, we worked with DOD-IG and the service audit agencies to determine the status of their recommendations. We verified with each of the audit organizations that they agreed with our definitions: a recommendation is considered “concurred with” when the audit organization determines that DOD or the component organization fully agreed with the recommendation in its entirety and its prescribed actions, and “partially concurred with” when the audit organization determines that DOD or the component organization agreed to parts of the recommendation or parts of its prescribed actions. Furthermore, we verified that a recommendation is officially “closed” when the audit organization determines that DOD or the component organization has implemented its provisions or otherwise met the intent of the recommendation; when circumstances have changed, and the recommendation is no longer valid; or when, after a certain amount of time, the audit organization determines that implementation cannot reasonably be expected. We also verified that an “open” recommendation is one that has not been closed for one of the preceding reasons. We assessed the reliability of the data we obtained from DOD-IG and the service audit agencies by obtaining information on how they track and follow up on recommendations and determined that their data were sufficiently reliable for our purposes. In analyzing the focus of recommendations, we identified those addressing three specific areas—requirements forecasting, asset visibility, and materiel distribution—as well as those addressing other supply chain management concerns. We selected these three focus areas as the framework for our analysis based on our prior work in this high-risk area and because DOD has structured its supply chain management improvement plan around them. 
We then analyzed the recommendations and further divided them into one of five common themes: management oversight, performance tracking, planning, process, and policy. To identify the focus area and theme for each report and recommendation, three analysts independently labeled each report with a focus area and identified a theme for each recommendation within the report. The team of analysts then reviewed the results, discussed any discrepancies, and reached agreement on the appropriate theme for each recommendation. When a discrepancy could not be immediately resolved, we referred to the original report to clarify its intent in order to decide on the appropriate focus area and theme. For the purpose of our analysis, if a recommendation consisted of multiple actions, we counted and classified each action separately. We excluded from our analysis recommendations that addressed only a specific piece of equipment or system. We also excluded recommendations that addressed other DOD high-risk areas, such as business systems modernization and financial management. While we included recommendations by non-audit organizations in our analysis, we did not determine the extent to which DOD concurred with or implemented them because their status is not systematically tracked. We conducted our review from January through November 2006 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Director, Office of Management and Budget; the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and other interested parties. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-8365 or solisw@gao.gov. Key contributors to this report are listed in appendix VIII. 
To ensure that the services make informed and coordinated decisions about what materiel solutions are developed and procured to address common urgent wartime requirements, GAO recommended that the Secretary of Defense take the following two actions: (1) Direct the service secretaries to establish a process to share information between the Marine Corps and the Army on developed or developing materiel solutions, and (2) Clarify the point at which the Joint Urgent Operational Needs process should be utilized when materiel solutions require research and development. GAO recommended that the Secretary of Defense direct the Under Secretary of Defense, Acquisition, Technology and Logistics to ensure that the Director of the Defense Logistics Agency provide continual management oversight of the corrective actions to address pricing problems in the prime vendor program. GAO recommended that the Secretary of Defense take the following seven actions: To ensure DOD inventory management centers properly assign codes to categorize the reasons to retain items in contingency retention inventory, direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to: (1) Direct the Secretary of the Army to instruct the Army Materiel Command to modify the Commodity Command Standard System so it will properly categorize the reasons for holding items in contingency retention inventory. (2) Direct the Secretary of the Air Force to instruct the Air Force Materiel Command to correct the Application Programs, Indenture system’s deficiency to ensure it properly categorizes the reasons for holding items in contingency retention inventory. 
To ensure that the DOD inventory management centers retain contingency retention inventory that will meet current and future operational requirements, direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to: (3) Direct the Secretary of the Army to instruct the Army Materiel Command to require the Aviation and Missile Command to identify items that no longer support operational needs and determine whether the items need to be removed from the inventory. The Army Materiel Command should also determine whether its other two inventory commands, the Communications-Electronics Command and Tank-automotive and Armaments Command, are also holding obsolete items, and if so, direct those commands to determine whether the disposal of those items is warranted. To ensure that DOD inventory management centers conduct annual reviews of contingency retention inventory as required by DOD’s Supply Chain Materiel Management Regulation, direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to: (4) Direct the Director of the Defense Logistics Agency to require the Defense Supply Center Richmond to conduct annual reviews of contingency retention inventory. The Defense Logistics Agency should also determine whether its other two centers, the Defense Supply Center Columbus and the Defense Supply Center Philadelphia, are conducting annual reviews, and if not, direct them to conduct the reviews so they can ensure the reasons for retaining the contingency retention inventory are valid. (5) Direct the Secretary of the Navy to instruct the Naval Inventory Control Point Mechanicsburg to conduct annual reviews of contingency retention inventory. The Naval Inventory Control Point should also determine if its other organization, Naval Inventory Control Point Philadelphia, is conducting annual reviews and if not, direct the activity to conduct the reviews so it can ensure the reasons for retaining the contingency retention inventory are valid. 
(6) Direct the Secretary of the Army to instruct the Army Materiel Command to require the Aviation and Missile Command to conduct annual reviews of contingency retention inventory. The Army Materiel Command should also determine if its other two inventory commands, the Communications-Electronics Command and Tank-automotive and Armaments Command, are conducting annual reviews and if not, direct the commands to conduct the reviews so they can ensure the reasons for retaining the contingency retention inventory are valid. To ensure that DOD inventory management centers implement departmentwide policies and procedures for conducting annual reviews of contingency retention inventories, direct the Office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness to take the following action: (7) Revise DOD’s Supply Chain Materiel Management Regulation to make clear who is responsible for providing recurring oversight to ensure the inventory management centers conduct the annual reviews of contingency retention inventory. To ensure funding needs for urgent wartime requirements are identified quickly, requests for funding are well documented, and funding decisions are based on risk and an assessment of the highest priority requirements, GAO recommended the Secretary of Defense direct the Secretary of the Army to establish a process to document and communicate all urgent wartime funding requirements for supplies and equipment at the time they are identified and the disposition of funding decisions. GAO recommended that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to take the following two actions: (1) Modify the July 30, 2004, RFID policy and other operational guidance to require that active RFID tags be returned for reuse or be reused by the military services and other users. 
(2) Direct the secretaries of each military service and administrators of other components to establish procedures to track and monitor the use of active RFID tags, to include determining requirements for the number of tags needed, compiling an accurate inventory of the number of tags, and establishing procedures to monitor and track tags, including purchases, reuse, losses, repairs, and any other categories that would assist management’s oversight of these tags. To improve accountability of inventory shipped to Army repair contractors, GAO recommended that the Secretary of Defense direct the Secretary of the Army to instruct the Commanding General, Army Materiel Command, to take the following six actions: (1) Establish systematic procedures to obtain and document contractors’ receipt of secondary repair item shipments in the Army’s inventory management systems, and to follow up on unconfirmed receipts within 45 days of shipment. (2) Institute policies, consistent with DOD regulations, for obtaining and documenting contractors’ receipt of government-furnished materiel shipments in the Army’s inventory management systems. (3) Provide quarterly status reports of all shipments of Army government-furnished materiel to Defense Contract Management Agency, in compliance with DOD regulations. (4) Examine the feasibility of implementing DOD guidance for providing advance notification to contractors at the time of shipment and, if warranted, establish appropriate policies and procedures for implementation. (5) Analyze receipt records for secondary repair items shipped to contractors and take actions necessary to update and adjust inventory management data prior to transfer to the Logistics Modernization Program. These actions should include investigating and resolving shipments that lack matching receipts to determine their status. 
(6) To ensure consistent implementation of any new procedures arising from the recommendations in this report, provide periodic training to appropriate inventory control point personnel and provide clarifying guidance concerning these new procedures to the command’s repair contractors. To enhance DOD’s ability to take a more coordinated and systemic approach to improving the supply distribution system, GAO recommended that the Secretary of Defense take the following three actions: (1) Clarify the scope of responsibilities, accountability, and authority between the Distribution Process Owner and the Defense Logistics Executive as well as the roles and responsibilities between the Distribution Process Owner, the Defense Logistics Agency, and Joint Forces Command. (2) Issue a directive instituting these decisions and make other related changes, as appropriate, in policy and doctrine. (3) Improve the Logistics Transformation Strategy by directing the Under Secretary of Defense (Acquisition, Technology, and Logistics) to include specific performance goals, programs, milestones, and resources to achieve focused logistics capabilities in the Focused Logistics Roadmap. To address the current underfunding of the Very Small Aperture Terminal and the Mobile Tracking System, GAO recommended that the Secretary of Defense direct the Secretary of the Army to determine whether sufficient funding priority has been given to the acquisition of these systems and, if not, to take appropriate corrective action. To address the risks and management challenges facing the department’s prepositioning programs and improve oversight, GAO recommended that the Secretary of Defense take the following five actions: (1) Direct the Chairman, Joint Chiefs of Staff, to assess the near-term operational risks associated with current inventory shortfalls and equipment in poor condition should a conflict arise. 
(2) Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to provide oversight over the department’s prepositioning programs by fully implementing the department’s directive on war reserve materiel and, if necessary, revise the directive to clarify the lines of accountability for this oversight. (3) Direct the Secretary of the Army to improve the processes used to determine requirements and direct the Secretaries of the Army and Air Force to improve the processes used to determine the reliability of inventory data so that the readiness of their prepositioning programs can be reliably assessed and proper oversight over the programs can be accomplished. (4) Develop a coordinated departmentwide plan and joint doctrine for the department’s prepositioning programs that identifies the role of prepositioning in the transformed military and ensures these programs will operate jointly, support the needs of the war fighter, and are affordable. (5) Report to Congress, possibly as part of the mandated October 2005 report, how the department plans to manage the near-term operational risks created by inventory shortfalls and management and oversight issues described in this report.

Defense Logistics: Better Strategic Planning Can Help Ensure DOD's Successful Implementation of Passive Radio Frequency Identification (GAO-05-345, September 12, 2005)

GAO recommended that the Secretary of Defense take the following three actions: (1) Direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to expand its current RFID planning efforts to include a DOD-wide comprehensive strategic management approach that will ensure that RFID technology is efficiently and effectively implemented throughout the department. 
This strategic management approach should incorporate the following key management principles: an integrated strategy with goals, objectives, and results for fully implementing RFID in the DOD supply chain process, to include the interoperability of automatic information systems; a description of specific actions needed to meet goals and performance measures or metrics to evaluate progress toward achieving the goals; schedules and milestones for meeting deadlines; identification of total RFID resources needed to achieve full implementation; and an evaluation and corrective action plan. (2) Direct the secretaries of each military service and administrators of other DOD military components to develop individual comprehensive strategic management approaches that support the DOD-wide approach for fully implementing RFID into the supply chain processes. (3) Direct the Under Secretary of Defense (Acquisition, Technology, and Logistics), the secretaries of each military service, and administrators of other military components to develop a plan that identifies the specific challenges impeding passive RFID implementation and the actions needed to mitigate these challenges. Such a plan could be included in the strategic management approach that GAO recommended they develop. 
To improve the effectiveness of DOD’s supply system in supporting deployed forces for contingencies, GAO recommended that the Secretary of Defense direct the Secretary of the Army to take the following three actions and specify when they will be completed: (1) Improve the accuracy of Army war reserve requirements and transparency about their adequacy by: updating the war reserve models with OIF consumption data that validate the type and number of items needed, modeling war reserve requirements at least annually to update the war reserve estimates based on changing operational and equipment requirements, and disclosing to Congress the impact on military operations of its risk management decision about the percentage of war reserves being funded. (Status: concurred with intent, open.) (2) Improve the accuracy of its wartime supply requirements forecasting process by: developing models that can compute operational supply requirements for deploying units more promptly as part of prewar planning and providing item managers with operational information in a timely manner so they can adjust modeled wartime requirements as necessary. (3) Reduce the time delay in granting increased obligation authority to the Army Materiel Command and its subordinate commands to support their forecasted wartime requirements by establishing an expeditious supply requirements validation process that provides accurate information to support timely and sufficient funding. (4) GAO also recommended that the Secretary of Defense direct the Secretary of the Navy to improve the accuracy of the Marine Corps’ wartime supply requirements forecasting process by completing the reconciliation of the Marine Corps’ forecasted requirements with actual OIF consumption data to validate the number as well as types of items needed and making necessary adjustments to their requirements. The department should also specify when these actions will be completed. 
GAO recommended that the Secretary of Defense direct the Secretary of the Army and Director of the Defense Logistics Agency to take the following two actions: (5) Minimize future acquisition delays by assessing the industrial-base capacity to meet updated forecasted demands for critical items within the time frames required by operational plans as well as specify when this assessment will be completed, and (6) Provide visibility to Congress and other decision makers about how the department plans to acquire critical items to meet demands that emerge during contingencies. GAO also recommended the Secretary of Defense take the following three actions and specify when they would be completed: (7) Revise current joint logistics doctrine to clearly state, consistent with policy, who has responsibility and authority for synchronizing the distribution of supplies from the United States to deployed units during operations; (8) Develop and exercise, through a mix of computer simulations and field training, deployable supply receiving and distribution capabilities including trained personnel and related equipment for implementing improved supply management practices, such as radio frequency identification tags that provide in-transit visibility of supplies, to ensure they are sufficient and capable of meeting the requirements in operational plans; and (9) Establish common supply information systems that ensure the DOD and the services can requisition supplies promptly and match incoming supplies with unit requisitions to facilitate expeditious and accurate distribution. GAO continued to believe, as it did in April 1999, that DOD should develop a cohesive, departmentwide plan to ensure that total asset visibility is achieved. 
Specifically, GAO recommended that the Secretary of Defense develop a departmentwide long-term total asset visibility strategy as part of the Business Enterprise Architecture that: (1) Describes the complete management structure and assigns accountability to specific offices throughout the department, with milestones and performance measures, for ensuring timely success in achieving total asset visibility; (2) Identifies the resource requirements for implementing total asset visibility and includes related investment analyses that show how the major information technology investments will support total asset visibility goals; (3) Identifies how departmentwide systems issues that affect implementation of total asset visibility will be addressed; and (4) Establishes outcome-oriented total asset visibility goals and performance measures for all relevant components and closely links the measures with timelines for improvement. In addition, since 2001, GAO made a number of recommendations aimed at improving DOD’s refinement and implementation of the business management modernization program. Most recently, GAO identified the need to have component plans clearly linked to the long-term objectives of the department’s business management modernization program. As they relate to total asset visibility, GAO continued to believe that these recommendations were valid. To reduce the likelihood of releasing classified and controlled spare parts that DOD does not want to be released to foreign countries, GAO recommended that the Secretary of Defense take the following three actions: (1) Direct the Under Secretary of Defense for Policy, in conjunction with the Secretaries of the Army and the Navy, and direct the Secretary of the Air Force to develop an implementation plan, such as a Plan of Actions & Milestones, specifying the remedial actions to be taken to ensure that applicable testing and review of the existing requisition-processing systems are conducted on a periodic basis. 
(2) Direct the Under Secretary of Defense for Policy, in conjunction with the Secretaries of the Army, the Air Force, and the Navy, to determine whether current plans for developing the Case Execution Management Information System call for periodic testing and, if not, provide for such testing. (3) Direct the Under Secretary of Defense for Policy, in conjunction with the Secretary of the Navy, and direct the Secretary of the Air Force to determine if it would be beneficial to modify the Navy’s and the Air Force’s requisition-processing systems so that the systems reject requisitions for classified or controlled parts that foreign countries make under blanket orders and preclude country managers from manually overriding system decisions, and to modify their systems as appropriate. To improve the control of government-furnished material shipped to Navy repair contractors, GAO recommended that the Secretary of Defense direct the Secretary of the Navy to instruct the Commander, Naval Inventory Control Point, to implement the following three actions: (1) Require Navy repair contractors to acknowledge receipt of material that is received from the Navy’s supply system as prescribed by DOD procedure. (2) Follow up on unconfirmed material receipts within the 45 days as prescribed in the DOD internal control procedures to ensure that the Naval Inventory Control Point can reconcile material shipped to and received by its repair contractors. (3) Implement procedures to ensure that quarterly reports of all shipments of government-furnished material to Navy repair contractors are generated and distributed to the Defense Contract Management Agency. 
To address the inventory management shortcomings that GAO identified, GAO recommended that the Secretary of Defense take the following three actions: (1) Direct the military services and the Defense Logistics Agency to determine whether it would be beneficial to use the actual storage cost data provided by Defense Logistics Agency in their computations, instead of using estimated storage costs, and include that data in their systems and models as appropriate; (2) Direct the Secretary of the Air Force to establish and implement a systemwide process for correcting causes of inventory discrepancies between the inventory for which item managers are accountable and the inventory reported by bases and repair centers; and (3) Direct the Secretary of the Air Force to revise its policy to require item managers to code inventory so that the inventory is properly categorized. To improve internal controls over the Navy’s foreign military sales program and to prevent foreign countries from obtaining classified and controlled spare parts under blanket orders, GAO recommended that the Secretary of Defense instruct the Secretary of the Navy to take the following six actions: (1) Consult with the appropriate officials to resolve the conflict between the DOD and Navy policies on the Navy’s use of waivers allowing foreign countries to obtain classified spare parts under blanket orders. (2) Determine and implement the necessary changes required to prevent the current system from erroneously approving blanket order requisitions for classified spare parts until the new system is deployed. (3) Establish policies and procedures for the Navy’s country managers to follow when documenting their decisions to override the system when manually processing blanket order requisitions. (4) Require that the Navy’s country managers manually enter blanket order requisitions into the Navy’s system to correctly represent foreign-country-initiated orders versus U.S. 
government-initiated orders so the Navy’s system will validate whether the foreign countries are eligible to receive the requested spare parts. (Statuses of this report’s eight recommendations, in order: (1)–(5) concurred, closed, implemented; (6) partially concurred, closed, implemented; (7) concurred, closed, implemented; (8) partially concurred, closed, implemented.) (5) Establish policies and procedures to follow for blanket orders when the Navy’s country managers replace spare parts requested by manufacturer or vendor part numbers with corresponding government national stock numbers. (6) Establish interim policies and procedures, after consulting with appropriate government officials, for recovering classified or controlled spare parts shipped to foreign countries that might not have been eligible to receive them under blanket orders until the Defense Security Cooperation Agency develops guidance on this issue. To improve the Navy system’s internal controls aimed at preventing foreign countries from obtaining classified and controlled spare parts under blanket orders, GAO recommended that the Secretary of Defense direct the Under Secretary of Defense for Policy to require the appropriate officials to take the following two actions: (7) Modify the Navy’s system to revalidate blanket order requisitions when the Navy’s country manager replaces spare parts that are requested by manufacturer or vendor part numbers. (8) Periodically test the system to ensure that it is accurately reviewing blanket order requisitions before approving them. 
To improve internal controls over the Army’s foreign military sales program and to prevent foreign countries from being able to obtain classified spare parts or unclassified items containing military technology that they are not eligible to receive under blanket orders, GAO recommended that the Secretary of Defense instruct the Secretary of the Army to take the following two actions: (1) Modify existing policies and procedures, after consultation with the appropriate government officials, to cover items shipped in lieu of items ordered to also ensure the recovery of classified spare parts that have been shipped to foreign countries that may not be eligible to receive them under blanket orders. (2) Modify existing policies and procedures, after consultation with the appropriate government officials, to cover items shipped in lieu of items ordered to also ensure the recovery of unclassified items containing military technology that have been shipped to foreign countries that may not be eligible to receive them under blanket orders. To improve the Army system’s internal controls aimed at preventing foreign countries from obtaining classified spare parts or unclassified items containing military technology under blanket orders, GAO recommended that the Secretary of Defense direct the Under Secretary of Defense for Policy to require the appropriate officials to take the following two actions: (3) Modify the system so that it identifies blanket order requisitions for unclassified items containing military technology that should be reviewed before they are released. (4) Periodically test the system and its logic for restricting requisitions to ensure that the system is accurately reviewing and approving blanket order requisitions. 
In order to improve supply availability, enhance operations and mission readiness, and reduce operating costs for deployed ships, GAO recommended the Secretary of Defense direct the Secretary of the Navy to: (1) Develop plans to conduct periodic ship configuration audits and to ensure that configuration records are updated and maintained in order that accurate inventory data can be developed for deployed ships; (2) Ensure that demand data for parts entered into ship supply systems are recorded promptly and accurately as required to ensure that onboard ship inventories reflect current usage or demands; (3) Periodically identify and purge spare parts from ship inventories to reduce costs when parts have not been requisitioned for long periods of time and are not needed according to current and accurate configuration and parts demand information; and (4) Ensure that casualty reports are issued consistent with high priority maintenance work orders, as required by Navy instruction, to provide a more complete assessment of a ship’s readiness. To improve the supply availability of critical, readiness-degrading spare parts that may improve the overall readiness posture of the military services, GAO recommended that the Secretary of Defense direct the Director of the Defense Logistics Agency to: (1) Submit, as appropriate, requests for waiver(s) of the provisions of the DOD Supply Chain Materiel Management Regulation 4140.1-R that limit the safety level of supply parts to specific demand levels. 
Such waivers would allow the Defense Logistics Agency to buy sufficient critical spare parts that affect readiness of service weapon systems to attain an 85 percent minimum availability goal; (2) Change the agency’s current aggregate 85 percent supply availability goal for critical spare parts that affect readiness, to a minimum 85 percent supply availability goal for each critical spare part, and because of the long lead times in acquiring certain critical parts, establish annual performance targets for achieving the 85 percent minimum goal; and (3) Prioritize funding as necessary to achieve the annual performance targets and ultimately the 85 percent minimum supply availability goal. To improve internal controls over the Air Force’s foreign military sales program and to minimize countries’ abilities to obtain classified or controlled spare parts under blanket orders for which they are not eligible, GAO recommended that the Secretary of Defense instruct the Secretary of the Air Force to require the appropriate officials to take the following steps: (1) Modify the Security Assistance Management Information System so that it validates country requisitions based on the requisitioned item’s complete national stock number. (2) Establish policies and procedures for recovering classified or controlled items that are erroneously shipped. (3) Establish policies and procedures for validating modifications made to the Security Assistance Management Information System to ensure that the changes were properly made. (4) Periodically test the Security Assistance Management Information System to ensure that the system’s logic for restricting requisitions is working correctly. (5) Establish a policy for command country managers to document the basis for their decisions to override Security Assistance Management Information System or foreign military sales case manager recommendations. 
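The distinction drawn above between an aggregate 85 percent supply availability goal and a minimum 85 percent goal for each critical spare part can be illustrated with a short sketch; the part names and fill-rate figures below are invented for illustration only.

```python
# Hypothetical fill-rate data: (demands filled, total demands) for four
# critical spare parts. All names and numbers are invented.
parts = {
    "part_a": (950, 1000),
    "part_b": (90, 100),
    "part_c": (60, 100),    # well below 85 percent on its own
    "part_d": (880, 1000),
}

filled = sum(f for f, _ in parts.values())
demanded = sum(d for _, d in parts.values())
aggregate = filled / demanded  # pooled fill rate across all parts

# The per-part test: every critical spare part must clear 85 percent.
every_part_ok = all(f / d >= 0.85 for f, d in parts.values())

print(f"aggregate availability: {aggregate:.1%}")  # 90.0% -- meets an aggregate goal
print(f"every part at 85%+? {every_part_ok}")      # False -- part_c sits at 60%
```

An aggregate goal can be met even while individual readiness-critical parts fall far short, which is the gap the per-part minimum is meant to close.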
GAO recommended that the Secretary of Defense direct the Secretary of the Navy to: (1) Develop a framework for mitigating critical spare parts shortages that includes long-term goals; measurable, outcome-related objectives; implementation goals; and performance measures as a part of either the Navy Sea Enterprise strategy or the Naval Supply Systems Command Strategic Plan, which will provide a basis for management to assess the extent to which ongoing and planned initiatives will contribute to the mitigation of critical spare parts shortages; and (2) Implement the Office of the Secretary of Defense’s recommendation to report, as part of budget requests, the impact of funding on individual weapon system readiness with a specific milestone for completion. In order to improve the department’s logistics strategic plan to achieve results for overcoming spare parts shortages, improve readiness, and address the long-standing weaknesses that are limiting the overall economy and efficiency of logistics operations, GAO recommended that the Secretary of Defense direct the Under Secretary for Acquisition, Technology, and Logistics to: (1) Incorporate clear goals, objectives, and performance measures pertaining to mitigating spare parts shortages in the Future Logistics Enterprise or appropriate agencywide initiatives to include efforts recommended by the Under Secretary of Defense, Comptroller in his August 2002 study report. GAO also recommended that the Secretary of Defense direct the Under Secretary of Defense, Comptroller to (2) Establish reporting milestones and define how it will measure progress in implementing the August 2002 Inventory Management Study recommendations related to mitigating critical spare parts shortages. 
GAO recommended that the Secretary of Defense direct the Secretary of the Air Force to take the following steps: (1) Incorporate the Air Force Strategic Plan’s performance measures and targets into the subordinate Logistics Support Plan and the Supply Strategic Plan. (2) Commit to start those remaining initiatives needed to address the causes of spare parts shortages or clearly identify how the initiatives have been incorporated into those initiatives already underway. (3) Adopt performance measures and targets for its initiatives that will show how their implementation will affect critical spare parts availability and readiness. (4) Direct the new Innovation and Transformation Directorate to establish plans and priorities for improving management of logistics initiatives consistent with the Air Force Strategic Plan. (5) Request spare parts funds in the Air Force’s budget consistent with results of its spare parts requirements determination process. GAO recommended that the Secretary of Defense direct the Secretary of the Army to: (1) Modify or supplement the Transformation Campaign Plan, or the Army-wide logistics initiatives to include a focus on mitigating critical spare parts shortages with goals, objectives, milestones, and quantifiable performance measures, such as supply availability and readiness-related outcomes and (2) Implement the Office of Secretary of Defense recommendation to report, as part of budget requests, the impact of additional spare parts funding on equipment readiness with specific milestones for completion. 
Defense Inventory: Overall Inventory and Requirements Are Increasing, but Some Reductions in Navy Requirements Are Possible (GAO-03-355, May 8, 2003) To improve the accuracy of the Navy’s secondary inventory requirements, GAO recommended that the Secretary of Defense direct the Secretary of the Navy to require the Commander, Naval Supply Systems Command, to require its inventory managers to use the most current data available for computing administrative lead time requirements. Given the importance of spare parts to maintaining force readiness, and as justification for future budget requests, actual and complete information would be important to DOD as well as Congress. Therefore, GAO recommended that the Secretary of Defense: (1) Issue additional guidance on how the services are to identify, compile, and report on actual and complete spare parts spending information, including supplemental funding, in total and by commodity, as specified by Exhibit OP-31; and (2) Direct the Secretaries of the military departments to comply with Exhibit OP-31 reporting guidance to ensure that complete information is provided to Congress on the quantities of spare parts purchased and explanations of deviations between programmed and actual spending. GAO recommended that the Secretary of Defense establish a direct link between the munitions needs of the combatant commands—recognizing the impact of weapons systems and munitions preferred or expected to be employed—and the munitions requirements determinations and purchasing decisions made by the military services. 
Defense Inventory: Improved Industrial Base Assessment for Army War Reserve Spares Could Save Money (GAO-02-650, July 12, 2002) In order to improve the Army’s readiness for wartime operations, achieve greater economy in purchasing decisions, and provide Congress with accurate budget submissions for war reserve spare parts, GAO recommended that the Secretary of Defense direct the Secretary of the Army to have the Commander of Army Materiel Command take the following actions to expand or change its current process consistent with the attributes in this report: (1) Establish an overarching industrial base capability assessment process that considers the attributes in this report. (2) Develop a method to efficiently collect current industrial base capability data directly from industry itself. (3) Create analytical tools that identify potential production capability problems such as those due to a surge in wartime spare parts demand. (4) Create management strategies for resolving spare parts availability problems, for example, by changing acquisition procedures or by targeting investments in material and technology resources to reduce production lead times. To improve the control of inventory being shipped, GAO recommended that the Secretary of Defense direct the Secretary of the Air Force to undertake the following: Improve processes for providing contractor access to government-furnished material by: (1) Listing specific stock numbers and quantities of material in repair contracts (as they are modified or newly written) that the inventory control points have agreed to furnish to contractors. (2) Demonstrating that automated internal control systems for loading and screening stock numbers and quantities against contractor requisitions perform as designed. (3) Loading stock numbers and quantities that the inventory control points have agreed to furnish to contractors into the control systems manually until the automated systems have been shown to perform as designed. 
(4) Requiring that waivers to loading stock numbers and quantities manually are adequately justified and documented based on cost-effective and/or mission-critical needs. Revise Air Force supply procedures to include explicit responsibility and accountability for: (5) Generating quarterly reports of all shipments of Air Force material to contractors. (6) Distributing the reports to Defense Contract Management Agency property administrators. (7) Determine, for the contractors in our review, what actions are needed to correct problems in posting material receipts. (8) Determine, for the contractors in our review, what actions are needed to correct problems in reporting shipment discrepancies. (9) Establish interim procedures to reconcile records of material shipped to contractors with records of material received by them, until the Air Force completes the transition to its Commercial Asset Visibility system in fiscal year 2004. (10) Comply with existing procedures to request, collect, and analyze contractor shipment discrepancy data to reduce the vulnerability of shipped inventory to undetected loss, misplacement, or theft. For all programs, GAO recommended that the Secretary of Defense direct the Director of the Defense Logistics Agency to take the following actions: (1) As part of the department’s redesign of its activity code database, establish codes that identify the type of excess property—by federal supply class—and the quantity that each special program is eligible to obtain and provide accountable program officers access to appropriate information to identify any inconsistencies between what was approved and what was received. (2) Reiterate policy stressing that Defense reutilization facility staff must notify special program officials of the specific tracking and handling requirements of hazardous items and items with military technology/applications. 
Status of these twelve recommendations, in order: Concurred, closed, implemented; Nonconcurred, closed, not implemented; Partially concurred, closed, not implemented; Partially concurred, closed, implemented; Concurred, closed, implemented; Concurred, closed, implemented; Concurred, closed, not implemented; Concurred, closed, not implemented; Concurred, closed, implemented; Partially concurred, closed, not implemented; Partially concurred, closed, implemented; Concurred, closed, implemented.
GAO also recommended that the Secretary of Defense ensure that accountable program officers within the department verify, prior to approving the issuance of excess property, the eligibility of special programs to obtain specific types and amounts of property, including items that are hazardous or have military technology/applications. This could be accomplished, in part, through the department’s ongoing redesign of its activity code database. For each individual program, GAO further recommended the following: (1) With regard to the 12th Congressional Regional Equipment Center, that the Secretary of Defense direct the Director of the Defense Logistics Agency to review and amend, as necessary, its agreement with the Center in the following areas: (a) The Center’s financial responsibility for the cost of shipping excess property obtained under the experimental project, (b) The ancillary items the Center is eligible to receive, (c) The rules concerning the sale of property and procedures for the Center to notify the Agency of all proposed sales of excess property, (d) The Center’s responsibility for tracking items having military technology/application and hazardous items, and (e) The need for Agency approval of the Center’s orders for excess property. 
(2) With regard to the Army, the Navy, and the Air Force Military Affiliate Radio Systems, GAO recommended that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to have the Joint Staff Directorate for Command, Control, Communications, and Computer Systems review which items these systems are eligible to receive, on the basis of their mission and needs, and direct each of the Military Affiliate Radio Systems to accurately track excess property, including pilferable items, items with military technology/ applications, and hazardous items. (3) With regard to the Civil Air Patrol, GAO recommended that the Secretary of Defense direct the Secretary of the Air Force to have the Civil Air Patrol-Air Force review which items the Patrol is eligible to receive, on the basis of its mission and needs, and direct the Patrol to accurately track its excess property, including pilferable items, items with military technology/applications, and hazardous items. To provide the military services, the Defense Logistics Agency, and the U.S. Transportation Command with a framework for developing a departmentwide approach to logistics reengineering, GAO recommended that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to revise the departmentwide Logistics Strategic Plan to provide for an overarching logistics strategy that will guide the components’ logistics planning efforts. Among other things, this logistics strategy should: (1) Specify a comprehensive approach that addresses the logistics life-cycle process from acquisition through support and system disposal, including the manner in which logistics is to be considered in the system and equipment acquisition process and how key support activities such as procurement, transportation, storage, maintenance, and disposal will be accomplished. 
(2) Identify the logistics requirements the department will have to fulfill, how it will be organized to fulfill these requirements, and who will be responsible for providing specific types of logistics support. (3) Identify the numbers and types of logistics facilities and personnel the department will need to support future logistics requirements. (4) GAO also recommended that the Under Secretary of Defense for Acquisition, Technology, and Logistics establish a mechanism for monitoring the extent to which the components are implementing the department’s Logistics Strategic Plan. Specifically, the Under Secretary of Defense for Acquisition, Technology, and Logistics should monitor the extent to which the components’ implementation plans (a) are consistent with the departmentwide plan, (b) are directly related to the departmentwide plan and to each other, and (c) contain appropriate key management elements, such as performance measures and specific milestones. Prepare quarterly statistical reports quantifying the cost effectiveness of the special program requirement initiative to reduce or cancel procurement actions by the use of adjusted buy-back rates, segregated by Defense Supply Centers. A.1. Transmit shipment notification transactions to the Defense Reutilization and Marketing Service when materiel is shipped to the Defense Reutilization and Marketing Office and ensure the data in the shipment notification are accurate. A.2. Review and research Defense Reutilization and Marketing Service follow-up transactions for materiel reported as shipped but not received, and respond to the Defense Reutilization and Marketing Service follow-up transactions in a timely manner. B. Establish controls to ensure that Navy organizations either demilitarize materiel or provide demilitarization instructions to the Defense Logistics Agency Depots, prior to requesting that the depot ship materiel to disposal, and respond to depot requests for demilitarization instructions in a timely manner. 
C. Validate that the Realtime Reutilization Asset Management Program Office reprograms its computer system to ensure that disposal shipment notifications, rather than disposal shipment confirmations, are sent to Defense Reutilization and Marketing Service for disposal shipments. D. Request that the Defense Reutilization and Marketing Service provide management reports which identify Navy organizations that are not responding to disposal follow-up transactions for materiel reported as shipped but not received and that are not sending disposal shipment notifications for materiel shipped to disposal. A. Establish controls to ensure that Defense Distribution Depot personnel request the required demilitarization instructions for all materiel awaiting disposal instructions and reverse the disposal transactions if the required instructions are not received. B. Establish controls to ensure that the Defense Reutilization and Marketing Service reviews and analyzes management data to identify Navy organizations that are not routinely preparing shipment disposal notifications or are not routinely responding to follow-up transactions and identify to the Naval Supply Systems Command potential problems with data in the in-transit control system in order for the Naval Supply Systems Command to ensure that Navy organizations comply with disposal procedures. The Commanding General, Marine Corps Logistics Command should: 1. Identify all excess materiel and return the materiel to the supply system, as required by Marine Corps Order P4400.151B, “Intermediate-Level Supply Management Policy Manual,” July 9, 1992. 2. Perform physical inventories of all materiel in all storage locations and adjust inventory records accordingly. The Director, Defense Logistics Agency should: 1. 
Reevaluate the cost categories for determining the average annual cost for maintaining an inactive national stock number item in the Defense Logistics Agency supply system and recalculate the average annual cost consistent with other pricing and cost methodologies. 2. Discontinue application of the draft Defense Logistics Agency Office of Operations Research and Resource Analysis report, “Cost of a DLA Maintained Inactive National Stock Number,” July 2002, to any authorized programs of DOD or the Defense Logistics Agency until all applicable cost categories are fully evaluated and the applicable costs of those relevant categories are incorporated into the cost study. A. Identify the circumstances or conditions under which other nonrecurring requirements are authorized for processing. B. Identify the requirements for documenting the methodology and rationale for using other nonrecurring requirement transactions. C. Establish requirements for identifying the supply center personnel who enter other nonrecurring requirements in the Defense Logistics Agency supply system and retaining other nonrecurring requirement records after the support dates have passed. Establish a timeline for the Defense supply centers to validate outstanding other nonrecurring requirement transactions in the Defense Logistics Agency supply system. Other nonrecurring requirement transactions that do not have sufficient supporting documentation or that cannot be validated should be canceled or reduced and reported to the Defense Logistics Agency. The report should include the total number of other nonrecurring requirement transactions that were deleted and the dollar value of procurement actions that were canceled as a result. The Commander, Ogden Air Logistics Center should immediately: 1. Comply with the guidance in Air Force Manual 23-110, “U.S. 
Air Force Supply Manual,” and Air Force Materiel Command Instruction 21-130, “Equipment Maintenance Materiel Control,” regarding the management of maintenance materiel stored at the Air Logistics Center. 2. Perform an annual physical inventory of all materiel recorded in the D035K Wholesale and Retail and Shipping System that is the responsibility of the Maintenance Directorate, reconcile the results, and turn in excess materiel to supply. 3. Perform a physical count of all materiel located on the maintenance shop floors and in storage areas to identify unaccountable and excess materiel, reconcile the physical count to the D035K Wholesale and Retail and Shipping System, and turn in excess materiel to supply. 4. Complete the review of courtesy storage materiel listed in the materiel processing system and either turn in the excess to supply, move to the D035K Wholesale and Retail and Shipping System, or dispose of the materiel. A. Expedite funding and the deployment of the Commercial Asset Visibility system to Army commercial repair facilities. Funding and deployment should be prioritized based primarily on the dollar value of repairable assets at the commercial repair facilities. B. Perform oversight of compliance with DoD 4000.25-2-M, “Military Standard Transaction Reporting and Accounting Procedures,” March 28, 2002, to conduct annual location reconciliations between inventory control point records and storage depot records. A. Determine whether the items with inventory records that were adjusted as a result of the October 2002 reconciliation between the Communications-Electronics Command and the Defense Depot Tobyhanna Pennsylvania are obsolete or excess to requirements. That determination should be made before requesting special inventories or performing other costly causative research procedures. B. Dispose of those assets that are identified as obsolete or excess to projected requirements. A. 
Develop in-house procedures to provide management information reports to the inventory accuracy officer, comparable to the management information reports required in the February 2003 contract awarded to Resources Consultant Incorporated, to assist in reducing in-transit inventory. B. Establish controls to ensure that all in-transit items that meet the criteria in Naval Supply Systems Command Publication 723, “Navy Inventory Integrity Procedures,” April 19, 2000, are reviewed prior to writing them off as an inventory loss. The Commander, Warner Robins Air Logistics Center should immediately: 1. Comply with Air Force guidance regarding the management of maintenance materiel stored at the Air Logistics Center. 2. Issue guidance regarding materiel management reports for management review. 3. Perform an annual physical inventory of all materiel recorded in the D035K Wholesale and Retail and Shipping System that is the responsibility of the Maintenance Directorate, reconcile the results, and turn in excess materiel to supply. 4. Perform a physical count of all materiel located on the maintenance shop floors and in storerooms, reconcile the physical count to the D035K Wholesale and Retail and Shipping System, and turn in excess materiel to supply.
Status of these eight recommendations: Concurred, closed, implemented (all eight).
5. Update or complete Air Force Materiel Command Form 100 for each line of floating stock and spares inventory. Submit to the floating stock and spares monitor for processing those forms in which the authorization level changes. 6. Perform semi-annual reviews of materiel stored in the courtesy storage area and turn in excess materiel to supply. 7. 
Perform quarterly reviews of bench stock materiel in the Low Altitude Navigation and Targeting Infrared for Night shop of the Avionics Division and turn in excess materiel to supply. A. Enforce the requirements of Naval Air Systems Command Instruction 4400.5A to identify excess materiel that has been inactive for more than 270 days for routine-use materiel and 12 months for long lead-time or low-demand materiel. B. Require quarterly reporting of excess materiel at Naval Air Depots to ensure excess materiel does not accumulate. C. Develop policy for point-of-use inventory. A. Perform physical inventories of materiel stored in all storage locations and adjust inventory records accordingly. B. Perform the required quarterly reviews of materiel stored in maintenance storerooms to determine whether valid requirements exist for the materiel. C. Identify all excess materiel stored in maintenance storerooms and return the materiel to the supply system. A. Comply with Navy guidance regarding the storage of maintenance materiel at the depot, performance of quarterly reviews of maintenance materiel on hand, and submission of management reports for review. B. Develop and implement an effective management control program. A. Inventory materiel stored in work center storerooms, record all of the on-hand materiel on accountable records, identify the materiel for which a valid need exists, and return the items with no known requirement to the supply system. B. Review jobs at closeout to determine whether a need exists for leftover materiel. Leftover, unneeded materiel should be made visible to item managers and disposed of in a timely manner. C. Perform the required quarterly reviews of materiel stored in work center storerooms to determine whether valid requirements exist for the materiel. 
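The count-reconcile-adjust cycle that recurs throughout these recommendations (compare a physical count against the accountable record, then adjust the record) reduces to a simple difference computation; the stock numbers and quantities below are invented for illustration only.

```python
# Invented balances: what the accountable record says vs. what was counted.
record = {"NSN-1": 40, "NSN-2": 10, "NSN-3": 0}
counted = {"NSN-1": 37, "NSN-2": 10, "NSN-4": 5}

# An inventory adjustment is the signed difference that brings the record
# in line with the physical count; items appearing on only one side
# still need an entry.
adjustments = {
    nsn: counted.get(nsn, 0) - record.get(nsn, 0)
    for nsn in sorted(set(record) | set(counted))
    if counted.get(nsn, 0) != record.get(nsn, 0)
}

print(adjustments)  # {'NSN-1': -3, 'NSN-4': 5}
```

Items with matching balances drop out, so the output is exactly the set of record adjustments (and potential excess or loss entries) a reconciliation would generate.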
Status of these twelve recommendations: Concurred, closed, implemented (all twelve).
D. Perform physical inventories of materiel stored in all storage locations and adjust inventory records accordingly. A. Comply with the Defense Logistics Agency Manual 4140.2 requirement that Defense Logistics Agency item managers contact the supply center monitor for the weapon system support program to coordinate the deletion of the code that identifies the national stock number item as a weapon system item. B. Comply with the Defense Logistics Agency Manual 4140.3 requirement that the supply center monitor for the weapon system support program notify the Military Departments when a national stock number item supporting a weapon system is to be deleted from the supply system as a result of the Defense Inactive Item Program process. Determine the most efficient and cost-effective method to reinstate national stock number items that were inappropriately deleted from the supply system. A. Review the revised procedures for processing Defense Inactive Item Program transactions when the FY 2002 process is complete to ensure the procedures are working as intended and that inactive item review notifications are being promptly returned to the Defense Logistics Agency. B. Establish controls to ensure that inactive item review notifications are reviewed by the user and are returned to the Defense Logistics Agency before an automatic retain notification is provided to the Defense Logistics Agency. C. 
Establish controls to review Defense Logistics Agency transactions deleting national stock numbers from Air Force systems so that the inappropriate deletion of required data from the Air Force supply system is prevented. A. Describe the factors to be used by the Military Departments and supply centers to evaluate the validity of potential candidates for additive investment. B. Require that additive safety level requirements be based on consistent and up-to-date supply availability data. C. Require regular reviews to determine whether additive safety levels continue to be appropriate. Establish how often reviews should be made and the criteria for making necessary safety level adjustments and reinvesting funds. D. Establish a method for maintaining safety level increases that adheres to the DoD safety level limitation while recognizing and adjusting to changes in the supply system.
Status of these eleven recommendations, in order: Concurred, closed, implemented (the first eight); Partially concurred, closed, implemented; Concurred, closed, implemented; Concurred, closed, implemented.
E. Establish a time frame for continuous program evaluation and a resolution process that includes a flag or general officer from each Military Department whenever problem elevation is needed. Approve and coordinate with the Military Departments the revised implementation plan. A. Revise Defense Logistics Agency Manual 4140.2, “Supply Operations Manual,” July 1, 1999, to include terminal national stock number items with registered users in the Defense Inactive Item Program. B. Maintain and report statistics on how many terminal national stock number items are deleted from the supply system after the North Atlantic Treaty Organization and foreign governments review the items. 
Establish controls to ensure that the Navy is removed as a registered user of Defense Logistics Agency-managed national stock number items that are no longer required.
A. Discontinue the use of the market basket approach to determine which bench-stock items are placed on the industrial prime vendor contract. Instead, evaluate each item separately and select the most economical source to supply material.
B. Review inventory levels and discontinue placing items on the industrial prime vendor contract with more than 3 years of inventory.
C. Take appropriate action in accordance with contract terms to remove items with more than 3 years of inventory and start using existing depot inventories as the first choice to fill contract demand.
Convene a performance improvement team composed of representatives from all relevant stakeholders, including appropriate oversight agencies, to plan and execute a reengineered best value approach to manage bench-stock material for all customers that addresses competition and restriction on contract bundling.
B. The Commander, Defense Supply Center Philadelphia should:
1. Implement procedures to ensure that future spot buy material procurements are priced and paid for in accordance with the terms of the contract.
2. Obtain a full refund from the Science Application International Corporation for erroneous charges, including lost interest, and take appropriate steps to reimburse the air logistics centers for the full amount of the contract overcharges.
Direct the Corpus Christi Army Depot to comply with Army guidance regarding the storage of maintenance materiel at the depot and the preparation and submission of management reports for review.
A. Price the materiel stored in the Automated Storage and Retrieval System that has no extended dollar value or that has been added to the physical inventory, and identify the value of inventory excess to prevailing requirements.
B. Inventory materiel stored in work centers on the maintenance shop floors, record the materiel on accountable records, identify the materiel for which a valid need exists, and turn in or transfer to other programs excess materiel.
C. Perform an annual physical inventory of all of the materiel stored in the Automated Storage and Retrieval System.
D. Perform the required quarterly reviews of materiel stored in the Automated Storage and Retrieval System to determine if valid requirements exist for the stored materiel.
E. Review projects at the 50-percent, 75-percent, and 90-percent completion stages to determine if a need exists for materiel in storage.
F. Perform a reconciliation between the Automated Storage and Retrieval System and Maintenance Shop Floor System files, at a minimum monthly, to determine if files are accurate. A physical inventory should be performed to correct any deficiencies.
2. (G) The Commander, Corpus Christi Army Depot should immediately prepare and submit the following report to management for review:
1. A monthly total dollar value for materiel stored in the Automated Storage and Retrieval System.
2. Items stored in the Automated Storage and Retrieval System with no demand in the last 180 days.
3. Materiel stored in the Automated Storage and Retrieval System against closed program control numbers.
4. Materiel stored against overhead program control numbers.
5. Potential excess materiel by program control number.
A. The Commander, U.S. Forces Korea should:
1. Establish guidance for delivery of cargo from ports of debarkation within the theater using Uniform Materiel Movement and Issue Priority System standards or U.S. Forces Korea supplemental standards to the Uniform Materiel Movement and Issue Priority System criteria more applicable to theater requirements.
Status entries: Concurred, closed, implemented (7); Partially concurred, closed, implemented (1); Concurred, closed, implemented (3); Concurred, open (1).
2. Establish procedures for using and maintaining documentation that provides evidence of delivery times and the accuracy of the delivered cargo.
3. Prepare or amend commercial carrier contracts that contain delivery provisions for weekend and holiday deliveries, and penalties for nonperformance compliance with the standards established by the provisions of Recommendation A.1.
4. Establish procedures to ensure that the priority of the cargo to be delivered from a port of debarkation is matched with a commercial carrier contract that has the necessary provisions that will ensure delivery within the standards established by Recommendation A.1.
5. Establish procedures, metrics, and surveillance plans that will monitor and ensure carrier performance of contract specifications and reconcile movement control documents received from commercial carriers to ensure consignees received prompt and accurate delivery of all cargo.
B. The Commander, U.S. Forces Korea should revise U.S. Forces Korea Regulation 55-355 to require:
1. Supply Support Activities to maintain dated and signed truck manifests and pickup sheets to confirm receipt.
2. Supply Support Activities to immediately contact end users for pickup of high priority cargo within the same day the cargo is made available for the end user.
The Director, Defense Logistics Agency should:
1. Revise Defense Logistics Agency Manual 4140.2, “Supply Operations Manual,” July 1, 1999, to include terminal national stock number items with no registered users in the Defense Inactive Item Program last user withdrawn process.
2. Maintain and report statistics on how many terminal national stock number items are deleted from the supply system after the North Atlantic Treaty Organization and foreign governments review the items.
Ensure that the Joint Total Asset Visibility Program is funded until sufficient operational capabilities of the Global Combat Support System have been fielded and can provide capabilities that are at least equivalent to the existing Joint Total Asset Visibility Program.
The Deputy Under Secretary of Defense (Logistics and Materiel Readiness) should:
1. Evaluate the usefulness of the DoD Total Asset Visibility performance measure.
2. Issue specific, written, performance measure guidance that standardizes and clarifies the required data elements for the Total Asset Visibility measure consistent with the evaluation of the usefulness of the measure.
3. Establish and institutionalize a process to evaluate and verify data submitted by DoD Components for the Total Asset Visibility performance measure, consistent with the evaluation of the usefulness of the measure.
Reassess guidance regarding the 60-day storage and requisitioning of fabrication materiel at maintenance depots and revise Army Regulation 750-2. The guidance should state the following:
- the appropriate number of days depots should be allowed for storing and requisitioning fabrication materiel.
- quarterly reviews should be performed to determine if materiel is still required.
Issue guidance regarding management of the Automated Storage and Retrieval System at Tobyhanna. The guidance should include the following:
- all materiel stored in the Automated Storage and Retrieval System shall be, at a minimum, identified by owning cost center; national stock number/part number; program control number; quantity; acquisition source code; nomenclature; and condition code.
- a review of any materiel with a date of last activity of more than 6 months shall be performed.
- an annual physical inventory of any materiel stored in the Automated Storage and Retrieval System shall be performed.
- items stored in mission stocks must represent a bona fide potential requirement for performance of a maintenance or fabrication requirement.
- availability of materiel from previously completed fabrication orders must be determined before placing new requisitions.
- projects shall be reviewed at the 50 percent, 75 percent, and 90 percent completion stages to determine if a need exists for materiel still in storage.
- reclaimed materiel, materiel removed from assets in maintenance, and work in process may be stored until reutilized on the maintenance program. Excess reclaimed materiel shall be turned in or transferred to a valid funded program.
- materiel shall not be stored in the Automated Storage and Retrieval System in an overhead account.
- quarterly reviews shall be performed on materiel stored in the Automated Storage and Retrieval System to determine if requirements still exist.
- prior to closing a depot maintenance program, any associated remaining repair parts, spares, and materiel on hand shall be transferred to an ongoing program or a program that will begin within 180 days, or turned in to the installation supply support activity within 15 days. The gaining program must be funded, open, and valid. The transferred materiel must be a bona fide potential requirement of the gaining program.
A.3. The Commander, Communications-Electronics Command should direct Tobyhanna to immediately:
a. Price the materiel stored in the Automated Storage and Retrieval System that has no extended dollar value or that has been added to the physical inventory, identify the value of inventory excess to prevailing requirements, and notify the Inspector General, DoD, of the corrected dollar value of the inventory and value of inventory excess to the requirements.
b. Limit the storage of materiel in the Automated Storage and Retrieval System under overhead accounts. Specifically, remove materiel obtained from the Sacramento Air Logistic Center from the overhead account program control numbers.
c. Record the Tactical Army Combat Computer System equipment on accountable records and inventory, and turn in the computer equipment to the supply system because no requirement for the equipment exists at Tobyhanna.
Issue guidance regarding reports that should be submitted to management for review. The guidance should require the following reports:
- an annual physical inventory of all materiel stored in the Automated Storage and Retrieval System.
- a reconciliation between the Automated Storage and Retrieval System and Maintenance Shop Floor System files, at a minimum monthly, to determine if files are accurate. A physical inventory should be performed to correct any deficiencies.
Reports should be prepared for management review:
- a monthly total dollar value for materiel stored in the Automated Storage and Retrieval System.
- items stored in the Automated Storage and Retrieval System with no demand in the last 180 days.
- materiel stored in the Automated Storage and Retrieval System against closed program control numbers.
- materiel stored against overhead program control numbers.
- potential excess materiel by program control number.
Direct the Tobyhanna Army Depot to immediately perform a physical inventory, reconcile the Automated Storage and Retrieval System records with the Maintenance Shop Floor System records to verify the accuracy of inventory records, and submit a report for review.
A-1. Include placement of stocks (malpositioned) as part of the Army Pre-positioned Stocks program performance metrics. As a minimum:
- clearly define malpositioned stocks and establish procedures for calculating the data to minimize inconsistency or data misrepresentation reported by the subordinate activities.
- establish long-term goals for correcting the problems and annually monitor the progress in meeting the goals to ensure the situation doesn’t deteriorate.
- examine the feasibility of correcting the Web Logistics Integrated Database limitations and shortfalls identified within this report so the system can be used to produce reliable performance data.
A-2. Improve shelf-life management controls and oversight. As a minimum:
- develop stock rotation plans for items in long-term storage outside the Continental U.S. or remove the items from outside Continental U.S. storage.
- prepare an annual list of all Army Pre-positioned Stocks items due to expire within 12 and 24 months and have U.S. Army Field Support Command ensure stock rotation plans are adequate to minimize expired assets. Use the data to formulate funding requirements for test and inspection.
- use critical data fields within information management systems to assist in shelf-life stock rotations. Require U.S. Army Field Support Command to monitor shelf-life data—such as dates of manufacture and expiration dates—provided by its Army Pre-positioned Stocks sites to ensure it is current and complete. Perform quarterly reconciliations.
- include shelf-life management metrics as part of the Army Pre-positioned Stocks program performance assessment. Establish goals and develop methods to track and minimize the loss of items due to expired shelf life.
Status entry: Concurred, open.
A-3. Strengthen accountability controls and enhance data integrity, reliability, and visibility of pre-positioned stocks. Specifically:
- require U.S. Army Communications-Electronics Life Cycle Management Command and U.S. Army Tank-automotive and Armaments Life Cycle Management Command to incorporate controls similar to U.S. Army Aviation and Missile Life Cycle Management Command that will identify and track unauthorized transactions—that is, situations where the ownership purpose code of an item was changed from a war reserve purpose code to a general issue code without first receiving approval from Army Pre-positioned Stocks personnel.
- execute the required steps to place data associated with loan transactions onto the Army Knowledge Online account to facilitate oversight of loan transactions.
- numerically sequence each approved request and use the number to cross-reference back to the approved request.
- include all open Army Pre-positioned Stocks loan transactions issued to item managers that weren’t paid back as part of the Army Pre-positioned Stocks program performance assessment.
- require U.S. Army Communications-Electronics Life Cycle Management Command and U.S. Army Tank-automotive and Armaments Life Cycle Management Command to track the paybacks by establishing a scheduled payback target date so they can be proactive in pursuing collections.
- track inventory loss adjustment statistics as a potential source for benchmarking progress on reducing repetitive errors and identifying performance problems.
- establish dollar values for supply class VII inventory adjustments in the Logistics Modernization Program so loss adjustments meeting the causative research criteria are researched.
- randomly sample 25 percent of the inventory loss adjustment transactions to verify the adjustments are supported by evidence of documented causative research and an adequate explanation is documented.
A-4. Track Army Pre-positioned Stocks site weekly data reconciliations to evaluate performance and data reliability.
For the Commander, 10th Mountain Division (Light Infantry)
A-1. Provide unit commanders with a block of instructions that explains the process and importance of accurately accounting for assets and maintaining the property book.
A-2. Establish a reminder system to notify gaining and losing units when equipment transfers occur.
A-3. Develop and distribute guidance to operations personnel stressing the need to follow established procedures for accounting for assets and the importance of providing necessary documentation to property book officers.
Status entries: Concurred, closed, implemented (3).
A-4. Research each discrepancy with equipment transfers and turn-in documents and make appropriate adjustments to the property book records for the 1st and 2nd BCTs. If the missing vehicles can’t be located in a reasonable time period, initiate an AR 15-6 investigation and, if warranted, take further appropriate action.
B-1. Research the discrepancies we found with the 1st, 2nd, and 3rd BCT vehicles and make appropriate adjustments to the respective property books.
For the Commander, U.S. Army Aviation and Missile Life Cycle Management Command
1. Require:
- item managers to consider historical procurement data in the Master Data Record’s Sector 10 when justifying values they enter for the Requirements System to use as representative estimates of procurement lead time.
- Integrated Materiel Management Center second-level supervisors to review and explicitly approve the procurement lead time values entered into the Master Data Record by item managers.
2. Require contract specialists to adhere to Army and Aviation and Missile Life Cycle Management Command guidance on considering the extent of delay in awarding procurements to vendors when justifying if a procurement should be identified as a representative estimate of a future procurement’s administrative lead time.
A-1. Initiate DA staff action to withhold funding for increasing safety levels until Army Materiel Command develops test procedures and identifies key performance indicators to measure and assess the cost-effectiveness and impact on operational readiness.
For the Commander, Defense Supply Center Philadelphia
1. Monitor the contractor’s progress to ensure the contractor completes the reorganization of the bulk storage warehouses with a location grid plan and subsequent warehousing of operational rations within specific location areas in the warehouses. Then ensure the contractor records updated locations of these rations in the warehouse management system database so that the physical location of products matches the database.
2. Complete and implement the software change package to ensure operational rations containing more than one national stock number are allocated from inventory based on the first-to-expire inventory method.
Status entries: Concurred, closed, implemented (2).
3. Develop and implement guidance for the contractor regarding the requirements for the destruction of government-owned operational rations which have been deemed unfit for human consumption. Require the contracting officer representative to certify the destruction certification package only when adequate documentation is attached to support the operational rations being destroyed. Also, require the contracting officer representative to ensure products are destroyed in a reasonable time frame after the Army Veterinarians recommend destruction of the products. If implemented, this recommendation should result in monetary savings to the government.
4. Before shipping excess to theater, review the worldwide excess stock of operational rations and identify the expiration dates on products that may be considered for shipping to replenish operational ration stock in theater. Before shipping stock, coordinate with the Theater Food Advisor to ensure the products can be incorporated into the existing stock on hand and be effectively managed. Also, don’t consider for shipment any products with less than 4 months’ remaining shelf life unless the Army Veterinarians have inspected and extended the shelf life of the products. In such cases, ensure the documentation accompanies the shipments.
5. Implement a Quality Assurance Surveillance Plan that encompasses all requirements of the prime vendor contract.
Require the Administrative Contracting Officer and the contracting officer representative located at the prime vendor’s location in Kuwait to monitor and document the contractor’s performance using the Quality Assurance Surveillance Plan on a scheduled basis. Upon completion of each review, the Contracting Officer should review the results of the Quality Assurance Surveillance Plan and determine if any actions are required to correct the areas of concern.
For the Commander, Defense Supply Center Philadelphia and for the Commander, Coalition Forces Land Component Command
6. Require the Theater Food Advisor and Defense Supply Center Philadelphia to review the quantities of operational rations that are currently excess in the prime vendor’s warehouses and ensure none of these products have orders placed until the excess quantities are projected to be depleted. If implemented, this recommendation will result in funds put to better use.
For the Commander, Coalition Forces Land Component Command
7. Require the Theater Food Advisor to periodically review the inventory of government-owned operational rations and ensure appropriate action is taken when products reach their expiration date but remain in the prime vendor’s inventory. If implemented, this recommendation should result in monetary savings to the government.
A-1. Ensure that the Defense Contract Audit Agency remains actively involved in monitoring the contractor’s costs.
For the Assistant Secretary of the Army (Acquisition, Logistics and Technology)
B-1. Develop Army guidance for approving contract requirements for deployment operations to include acquisition approval thresholds, members of joint acquisition review boards, and documentation of board actions.
C-1. Establish guidance addressing how to transfer government property to contractors in the absence of a government property officer to conduct a joint inventory.
C-2. Issue specific policy on (i) screening the contingency stocks at Fort Polk for possible use on current and future Logistics Civil Augmentation Program contracts, and (ii) returning commercial-type assets to the contingency stocks at Fort Polk after specific contract operations/task orders are completed.
C-3. Update Army Materiel Command Pamphlet 700-30 to include specific procedures on:
- screening the contingency stocks at Fort Polk for possible use on current and future Logistics Civil Augmentation Program contracts.
- returning commercial-type assets to the contingency stocks at Fort Polk after contracts are completed.
- disposing of obsolete or unusable property.
D-1. Include in an annex to AR 715-9 (Contractors Accompanying the Force) the key management controls related to Logistics Civil Augmentation Program, or specify another method for determining whether the management controls related to the program are in place and operating.
For the Deputy Chief of Staff, G-4
1. Authorized Stockage Lists (Inventory On-Hand): Army should issue a change to policy and update AR 710-2 to require forward distribution points in a deployed environment to hold review boards for authorized stockage lists when they deploy and no less often than quarterly thereafter. Require review boards to accept recommendations from dollar cost banding analyses or justify why not.
Improvements needed to better meet supply parts demand.
A-1. Develop policy and procedures for the program executive office community to follow to identify, declare, and return excess components to the Army supply system.
A-2. Develop and issue guidance that states ownership of Army Working Capital Fund (AWCF) components that subordinate management offices possess and control through modification, conversion, and upgrade programs resides with the Army supply system.
Status entries: Concurred, closed, implemented (1); Nonconcurred, closed, not implemented (2); Partially concurred, closed, not implemented (1); Concurred, closed, implemented (1); Concurred, open (1).
A-3. Make sure policy is clear on the responsibilities of program executive offices and their subordinate management offices in complying with established Army policy and procedures for asset accountability. Specifically, record and account for all Army assets in a standard Army system that interfaces with the Army system of accountability. As a minimum, make sure item managers:
- have all transactions and information on acquisition, storage, and disposition of their assets.
- are notified of any direct shipments so that the item managers can record the direct shipments to capture demand history for requirements determination.
A-1. Construct permanent or semipermanent facilities in Kuwait and Iraq in locations where a continued presence is expected and that have a large number of containers being used for storage, force protection, and other requirements. For those locations where construction of permanent or semipermanent facilities isn’t feasible, use government-owned containers to meet storage, force protection, and other requirements.
A-2. Align the Theater Container Management Agency at the appropriate command level to give it the authority to direct and coordinate container management efforts throughout the Central Command area of responsibility.
A-3. Direct the Theater Container Management Agency to develop and maintain a single theater container management database. Issue guidance that requires all activities in the area of responsibility to use this database for their container management.
A-4. Coordinate with Military Surface Deployment and Distribution Command to purchase commercial shipping containers in the Central Command area of responsibility that are currently accruing detention. In addition, discontinue use of the Universal Service Contract and only use government-owned containers or containers obtained under long-term leases for future shipment of equipment and supplies into the Central Command area of responsibility. Ensure any long-term lease agreements entered into include provisions to purchase the containers.
A-5. Coordinate with Military Surface Deployment and Distribution Command to either get possession of the 917 government-owned containers still in the carriers’ possession, obtain reimbursement from the carriers for the $2.1 million purchase price of the containers, or negotiate with the carriers to reduce future detention bills by $2.1 million.
A-6. Coordinate with Military Surface Deployment and Distribution Command to reopen the 6-month review period under the post-payment audit clause to negotiate with commercial carriers to either obtain reimbursement of $11.2 million for detention overcharges on the 29 February 2004 detention list, or negotiate with the carriers to reduce future detention bills by $11.2 million.
A-7. Perform either a 100-percent review of future detention bills or use statistical sampling techniques to review carrier bills prior to payment.
B-1. Include the minimum data requirements identified in the July 2004 DOD memorandum that established policy for the use of radio frequency identification technology in the statements of work for task order 58 and all other applicable task orders.
For the Deputy Chief of Staff, G-4
1. Clarify accountability requirements for rapid fielding initiative (RFI) property distributed through program executive officer (PEO) Soldier; specifically, accountability requirements for organizational clothing and individual equipment (OCIE) items when not issued by a central issue facility (CIF).
For the Program Executive Officer, Soldier and for the Executive Director, U.S. Army Research, Development and Engineering Command Acquisition Center
2.
Instruct the appropriate personnel at the rapid fielding initiative warehouse to complete and document causative research within 30 days of inventory. Have the causative research:
- identify documents used in the causative research process and the procedures followed to resolve the error in the results of the causative research.
- identify the circumstances causing the variance.
- make changes to operating procedures to prevent errors from recurring.
- include government approval signatures before processing inventory adjustments and a system for tracking inventory adjustments so managers can cross-reference adjustments and identify those representing reversals.
3. Assign a quality assurance representative to the rapid fielding initiative warehouse that can provide the appropriate contract oversight and prompt feedback to the contractor on accountability and performance issues. Direct the individual to coordinate with the contracting officer to ensure the contracting officer incorporates instructions for evaluating contract requirements into key documents, such as a surveillance plan and an appointment letter.
4. Coordinate with the contracting officer to instruct the contractor to include the results of performance metrics related to inventory adjustments, location accuracy, inventory accuracy, and inventory control in the weekly deliverables or other appropriate forum. Have the contractor also include a spreadsheet with the overall accountability metric in the weekly reports for each line item and a continental United States (CONUS) fielding accountability spreadsheet after each fielding is completed. The data fields for overall inventory control accountability would include: prior week ending inventory balance + all receipts and returns for the current week = all shipments from the warehouse + ending inventory on hand.
Status entries: Concurred, closed, implemented (4).
5. Direct the RFI contracting officer technical representative from program executive officer Soldier to work together with the contracting officer to develop a surveillance plan and provide the plan to the contract monitor. Include in the plan provisions for spot-checks if developers rely on the contractor’s quality control plan.
A-1. Coordinate with the Deputy Chief of Staff, G-3 to develop guidance that instructs deploying units on protecting automation equipment from voltage differences and extreme environmental temperature conditions.
A-2. Direct all units in the Kuwaiti area of operations to provide controlled temperature conditions for automation equipment.
A-3. Instruct all units arriving in the Kuwaiti area of operations on how to protect automation equipment from voltage differences.
B-1. Declassify the order that identifies which combat service support automation management office units should contact for assistance.
A-1. Evaluate lessons learned from Operation Iraqi Freedom. As appropriate, adjust force structure requirements for military police and transportation personnel during the Total Army Analysis and contingency operations planning processes.
A-2. Reduce the number of trucks assigned to the aerial port of debarkation to better reflect actual daily requirements. Coordinate with the Air Force at the aerial port of debarkation to obtain advanced notice of air shipments on a daily basis. Monitor use periodically to determine if future adjustments are required.
A-3. Reestablish a theater distribution management center and make it responsible for synchronizing overall movement control operations for the Iraqi theater of operations. Coordinate with the Multi-National Force-Iraq to establish a standardized convoy tracking and reporting procedure.
A-1. Coordinate with depots currently using local databases to track receipt transactions and develop a standard database that can be used by all depots to effectively track receipts from arrival date to posting.
Each depot should be required to use this comprehensive database to track receipts and monitor the suspense dates to ensure receipts are posted to the Standard Depot System within the time standards. At a minimum, this database should include: start and completion dates for key management controls. date of arrival. receipt control number and date assigned. cross Reference Number assigned by the Standard Depot System. suspense dates (when receipt should be posted to record). date of physical count and reconciliation to receipt documentation. if receipt required Report of Discrepancy be sent to shipper and date report was sent if required. daily review control (list of receipts that are approaching required posting date). date stored. date posted. reason for not posting within required time frame. A-2. Initiate a change to Army Materiel Command Regulation 740- 27 to incorporate steps for identifying misplaced or lost labels in depot quality control checks, command assessments, and other tools used to measure depot performance. A-3. Fully use performance indicators (Depot Quality Control Checks, 304 Reports, and command assessments) as management tools to ensure necessary management controls are in place and operating for all depots’ receipt process. Also, ensure depots have effective training programs that consist of both on-the- job training and formal training to ensure depot personnel are aware of key controls and their responsibilities. Provide training on weaknesses and negative trends identified during biannual command assessments. A-4. Assign receipt control numbers based on the date the receipt arrived, and accountability transfers from transporter to depot. A-5. Submit Reports of Discrepancy to shipper for all discrepancies between physical counts and receipt documents, including when no receipt documents are received. A-6. 
Post receipts to records in a temporary location, when it meets the requirement for a reportable storage location, to ensure receipt transactions are posted so that munitions can be made visible for redistribution in a timely manner.
For the Commander, U.S. Army Communications-Electronics Command
1. Reemphasize to item managers the requirement to use supply document transactions, as specified in AR 725-50, to generate due-ins in the command's wholesale asset visibility system when directing the movement of military equipment items to a conversion contractor.
2. Direct item managers to use a GM fund code in disposition instructions to troop turn-in units and in materiel release orders to storage activities directing shipments of equipment items to conversion contractors or to an Army depot maintenance facility.
3. Request the Logistics Support Activity to assign Routing Identifier Codes and related DOD Activity Address Codes for all conversion contractor operating locations where the contractor maintains quantities of items in the conversion process but doesn't presently have the codes. For future conversion contracts, develop a process to ensure that all required codes are assigned immediately following contract award.
4. Reemphasize to item managers the requirement to:
- monitor asset visibility system management reports for creation of due-ins.
- require immediate corrective actions when due-ins aren't created in the asset visibility system.
5. Reemphasize to item managers the requirement to follow up on due-ins when receipts aren't posted in the command's asset visibility system within the time periods stated in AR 725-50.
6. Incorporate into the current and all future conversion contracts, in coordination with the appropriate Project/Program Managers, the requirement for conversion contractors to transmit supply document transactions to the asset visibility system at Communications-Electronics Command in order to report:
- receipts of assets upon arrival at the contractor's plant.
- changes in item configurations during the conversion process.
- shipments to gaining activities following conversion operations.
7. Until the conversion contracts are modified as detailed in Recommendation 6, require operating personnel to obtain all necessary supply documents and manually enter all necessary transactions into the command's asset visibility system to report receipts at contractor locations from turn-in units and storage activities, changes in equipment item configurations, and shipments of converted items to gaining activities.
8. Take appropriate actions to ensure unused component parts returned from conversion programs are not improperly reported in the command's asset visibility system as complete military equipment systems. Specifically, for National Stock Number 5840-01-009-4939:
- request an inventory at the depot storage activity to identify all component parts improperly returned as complete systems.
- use the inventory results to adjust on-hand quantities in the command's asset visibility system to ensure accurate balances.
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
9. Direct the Tobyhanna Army Depot maintenance facility to take all actions necessary to ensure appropriate supply document transactions are processed when equipment items are received, converted, and transferred back to storage ready for issue.
10. Direct operating personnel to evaluate all Communications-Electronics Command equipment items undergoing disassembly, conversion, modification, or overhaul programs to determine if the same processes used for the items discussed in this report are applicable to them. If so, require operating personnel to apply the recommendations in this report to those affected items.
For the Commander, U.S. Army Materiel Command
1.
Establish Army guidance requiring integrated materiel managers to perform annual reviews of holding project assets and follow up on redistribution actions.
2. Direct commodity commands to redistribute holding project assets to other pre-positioned stock projects or to general issue.
3. Direct commodity commands to dispose of excess, unserviceable, and obsolete assets in holding projects. Direct materiel managers to review the 38 bulky items in holding projects to identify excess assets and dispose of them.
4. Establish guidance on the use of holding projects that requires managers to either provide a documented rationale for retaining excess assets in holding projects or dispose of them. Include in the guidance the requirement that inventory management commanders or their designees review the retention rationales for approval or disapproval.
5. Establish guidance that requires materiel managers to review holding projects annually to identify unserviceable (condemned, economically unrepairable, and scrap) and obsolete assets in holding projects. Include in the guidance the requirement that the identified assets be disposed of within 12 months.
For the Joint Munitions Command
1. Use the integration plan to manage the integration of automatic identification technology in receiving and shipping processes, as well as the seal site program. At a minimum, the plan should be periodically reviewed to make sure:
- adequate workforces are dedicated to integration tasks in the future.
- equipment and software are thoroughly tested and determined to be functional before being fielded to ammunition storage activities.
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
2. Require the contractor to use the Standard Depot System's composition rules and traditional edit checks in software development for the remaining automatic identification technology applications.
The development should include the:
- use of established performance measures to ensure that all the contractor's products and services meet Joint Munitions Command's automatic identification technology needs, such as appropriate edit checks before fielding.
- development of specific tasks with timelines to ensure that established implementation goals are met in the most effective and efficient manner. This should include penalties to ensure timely delivery of necessary equipment and software applications from contractors.
A-1. Establish procedures that ensure commands and units reduce training ammunition forecasts when units determine that training ammunition requirements have changed.
B-1. Make sure ammunition supply point personnel follow procedures to post all ammunition supply transactions in the Training Ammunition Management System on the day the transaction occurs.
B-2. Make sure the ammunition supply point has procedures to maintain updated plan-o-graphs that show the locations and lot numbers of the ammunition stored in the ammunition supply point bunkers, and include the procedures in the supply point's standing operating procedures.
B-3. Develop a plan to establish a reliable quality assurance specialist (ammunition surveillance) capability for the ammunition supply point and California Army National Guard units. Include in the plan an evaluation of whether the California Guard should have an internal quality assurance capability instead of relying on a memorandum of agreement with Fort Hunter-Liggett.
B-4. Correct the contingency ammunition control problems at California Guard units by:
- identifying all contingency ammunition that is currently on hand at all California Guard units and establishing proper accountability over the ammunition.
- preparing a serious incident report if the amount of unaccounted-for ammunition identified at the units meets the criteria in AR 190-40.
- ensuring that units and the ammunition supply point follow established procedures for maintaining all issue and turn-in documentation for security ammunition to support the quantities recorded on the units' hand receipts.
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
B-5. Follow procedures for reviewing and updating security and contingency ammunition requirements. At a minimum:
- determine ammunition requirements based on threat assessments, potential missions, and the force structure available to provide a response.
- coordinate and establish a current ammunition distribution plan.
- conduct an annual review of ammunition requirements.
- maintain a list of where ammunition is being stored for State contingencies, by type and quantity.
B-6. Make sure units follow the requirement to provide all small arms supply transactions to the U.S. Property and Fiscal Office within 5 working days so that the DA central registry can be updated within 10 working days.
B-7. Make sure units follow the checklist in AR 190-11 related to physical security over the storage of small arms and document the results of their inspections.
For the Commander, Eighth U.S. Army
1. Take appropriate action to perform and document required Operational Project reviews. Specifically:
- establish and prescribe guidelines and criteria that will inject more discipline into the Operational Project review and validation process. Prescribe key factors, best practices, and methods for determining and documenting Operational Project requirements.
- have each project proponent perform an analysis each year in accordance with the annual review process in Army Regulation 710-1 and whenever the Operational Plan changes. The project proponent should include an updated letter of justification that references where each project's list of requirements originated and how the quantities for each item were computed.
- after receiving the official response from the project proponent, Eighth Army, G-4, War Reserve, should submit a memorandum to Headquarters, DA, G-4 for the purpose of documenting the annual review.
2. Have the War Reserve Branch track completion of annual reviews and 5-year revalidations; periodically review documentation of reviews and revalidations to evaluate their sufficiency.
For the Deputy Chief of Staff, G-3
1. Develop and apply detailed criteria to assess the adequacy of operational project packages and the validity of related requirements, and approve only those projects that meet the criteria.
2. Establish criteria and guidelines that require proponent commands to identify and prioritize mission-essential equipment in operational projects. Establish a policy to fund the higher priority items first.
For the Deputy Chief of Staff, G-3, and the Deputy Chief of Staff, G-4
3. Establish and prescribe guidelines and criteria that will inject more discipline into the operational project requirements determination process. Prescribe key factors, best practices, and methods for determining and documenting operational project requirements.
For the Deputy Chief of Staff, G-4
4. Designate only commands with a clear or vested interest in projects as the proponents.
5. Provide guidance to project proponents that outlines strategies and methodologies for reviewing and revalidating operational projects.
6. Track completion of reviews and 5-year revalidations; periodically review documentation of reviews and revalidations to evaluate its sufficiency; and reestablish the enforcement policy that would allow cancellation of operational projects when proponents don't perform timely, adequate reviews or revalidations. Consider having a formal Memorandum of Agreement with Army Materiel Command to track operational project reviews and revalidations.
7.
Revise guidance requiring annual reviews for all operational projects to consider the individual characteristics of projects when scheduling the frequency of reviews.
For the U.S. Army Aviation and Missile Command
1. Instruct the responsible item managers to:
- initiate actions to dispose of quantities that exceed documented requirements for the seven items identified.
- determine if it's economical to reduce the planned procurement quantities that are excess to requirements for the five items identified. For those that are economically feasible, take action to reduce planned procurement quantities.
If these actions are implemented, we estimate they will result in potential monetary savings of about $1.7 million.
For the Commanding General, Combined Joint Task Force 180
1. Build semi-permanent storage facilities for class I supplies at Bagram and Kandahar, including facilities for dry and frozen goods storage.
2. Direct base operations commanders to record all containers purchased with Operation Enduring Freedom funds in the installation property books. In addition:
- conduct a 100-percent physical inventory of shipping containers at each installation.
- record all leased and purchased containers in the property book. Make sure the serial numbers of the shipping containers are recorded, too.
- establish procedures with the contracting office to ensure that the installation property book officer is given documentation when containers are purchased or leased.
For the Commander, Combined Joint Task Force 180
1. Increase the size of the supply support activity in Bagram to 1,700 line items on the authorized stockage list to ensure the availability of critical aviation spare parts.
2. Require the supply support activity officer to hold inventory reviews every 30 days or less with aviation maintenance units to ensure adequate inventory levels of items on the authorized stockage list.
3.
Place Army expeditors ("the go-to guys") familiar with class IX aviation spare parts at choke points located in Germany in the Army and Air Force delivery system to prioritize pallets and shipments.
For the Deputy Chief of Staff, G-4
1. Establish theater DOD activity address codes for units to fall in on when assigned to Operation Enduring Freedom.
For the Deputy Chief of Staff, G-4
1. Issue guidance directing activities to attach radio frequency tags to shipments en route to the Operation Enduring Freedom area of responsibility. Enforce requirements to tag shipments by directing transportation activities not to allow the movement of cargo without a radio frequency tag attached.
2. Direct Military Traffic Management Command to obtain radio frequency tag numbers from activities shipping goods and to report those tag numbers to transportation officers by including them in the in-transit visibility (ITV) Stans report.
3. Issue additional guidance to activities clarifying the procedures they should follow to retrograde radio frequency tags and to replenish their supply of tags.
For the Joint Logistics Command
1. Make sure movement control teams tag shipments as required by US Central Command guidance to ensure that improvements continue during future rotations.
A-1. Direct responsible activities to:
- validate current requirements for subproject PCA (authorizing chemical defense equipment for 53,000 troops) to augment U.S. Army Europe's second set deficiencies and submit the requirements to DA for approval in accordance with AR 710-1.
- revalidate requirements for chemical defense equipment for project PCS (see PCA), including the addition of equipment decontamination kits. Revise requirements for chemical defense equipment for the Kosovo Force mission and submit the changes to DA.
A-2.
Ask Army Materiel Command to fully fill revised requirements for chemical defense equipment for operational project PCS and to redistribute or dispose of excess items from operational projects PCA and PBC.
Concurred, closed, implemented
Concurred, closed, implemented
Nonconcurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, not implemented
Concurred, closed, not implemented
B-1. Direct responsible activities to review and validate all project requirements for collective support systems as required by AR 710-1.
C-1. Direct responsible activities to:
- ask DA to cancel subprojects PZP and PZQ (project codes to provide equipment for reception of reinforcing forces deploying to Europe and other theaters).
- develop requirements and request a new reception, staging, onward movement, and integration operational project, if needed, in accordance with AR 710-1.
D-1. Ask DA to cancel operational subproject PYN (project code) for aircraft matting.
D-2. Submit new operational project requirements for aircraft matting to DA in accordance with AR 710-1.
A-1. Develop a system of metrics, to include performance goals, objectives, and measures, for evaluating the reliability of data in the capability. Establish processes for comparing actual performance to the metrics and taking remedial action when performance goals and objectives aren't met. (Recommendation B-3 calls for a process to compare data in the capability and feeder systems. The results of these comparisons would constitute the actual data reliability performance.)
A-2. Develop goals and objectives for use in evaluating the success of redistribution actions for Army assets. Develop procedures for identifying and correcting the causes of referral denials that exceed the established goals.
B-1. Issue guidance to project and product managers detailing the proper use of bypass codes on procurement actions.
B-2.
Include definitive guidance on the use of bypass codes in appropriate guidance documents on the Army's business processes, such as AR 710-1. Make sure the guidance explains the ramifications of using the different codes.
B-3. Direct the Logistics Support Activity to perform periodic reviews of data in the capability to ensure that it agrees with data in feeder systems, and take action to identify and correct the causes of any differences.
B-4. Require commodity commands to use the Post-award Management Reporting System to help manage contract receipts. Also, make sure the Logistics Modernization Program has the capability to manage invalid due-in records.
B-5. Direct commodity commands to delete all procurement due-in records with delivery dates greater than 2 years old. Have the commodity commands research and resolve due-in records with delivery dates more than 90 days old but less than 2 years old.
B-6. Direct commodity commands to review and remove invalid due-in records for field returns with delivery dates over 180 days old.
Concurred, closed, not implemented
Concurred, closed, not implemented
Concurred, closed, not implemented
Concurred, closed, not implemented
Nonconcurred, closed, not implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, open
B-7. Require commodity commands to periodically scan the Commodity System for procurement actions issued with bypass codes. Ask project and program managers to explain the decision to use a bypass code. Report the results of the review to the Assistant Secretary of the Army (Acquisition, Logistics and Technology). If the Logistics Modernization Program continues to employ bypass codes or other methods that prevent the creation of a due-in record, conduct similar reviews when the Logistics Modernization Program is implemented.
C-1. Incorporate instructions on the use of the capability into appropriate guidance documents on the Army's logistics business processes, such as AR 710-1.
These instructions should address topics such as reviewing the capability for excess items before procuring additional stocks.
C-2. Direct the Logistics Support Activity to review data in the Army Total Asset Visibility capability for potentially erroneous data. Establish a procedure for reporting the potentially erroneous data to the activities responsible for it and performing research to determine its validity.
D-1. Revise AR 710-2 and 710-3 to comply with the requirements of AR 11-2. Specifically, develop management control evaluation checklists addressing the accuracy and reliability of data in the Army Total Asset Visibility capability and publish these controls in the governing Army regulations, or identify other evaluation methods and include these in the applicable Army regulations.
For the Commander, U.S. Army Materiel Command
1. Emphasize to the commodity commands the need to periodically review the process for creating asset status transactions in the Commodity Command Standard System to ensure the transactions are properly created and forwarded to the Logistics Support Activity.
2. Revise Automated Data Systems Manual 18-LOA-KCN-ZZZ-UM to require activities to promptly submit monthly asset status transactions to the Logistics Support Activity.
For the Commander, U.S. Army Materiel Command Logistics Support Activity
3. Establish procedures for notifying source activities when the capability rejects asset status transactions. Make sure that rejected and deleted transactions are reviewed to identify the reasons they were rejected or deleted. If appropriate, correct the rejected transactions and resubmit them for processing in the capability. Based on the results of the reviews, take appropriate action to correct systemic problems.
4. Establish a control log to monitor participation of Army activities in the monthly asset status transaction process.
Use the log to identify activities that didn't submit a monthly update and determine why an update wasn't submitted. Report frequent abusers of the process through appropriate command channels.
5. Report to the Deputy Chief of Staff, G-4 that AR 710-3 needs to be revised to require activities to promptly submit monthly asset status transactions to the Logistics Support Activity.
6. Document the process used to update information in the asset visibility module of the Logistics Integrated Data Base.
A-1. Obtain a document number from the installation property book office before ordering installation property or organizational clothing and individual equipment. Order only equipment and vehicles for valid requirements approved by the Joint Acquisition Review Board.
A-2. Include written justification, analyses, and study results in documentation for purchase requests and commitments before acquisition decisions are made.
A-3. Determine the number of vehicles required for the mission. Consider adjusting dollar thresholds for approval by the Joint Acquisition Review Board.
A-4. Establish written policy to secure explosives using the interim plan. Build a permanent secure area for explosives awaiting movement as soon as possible.
A-1. When updating the variable cost-to-procure factor, make sure the following steps are completed until a system like activity-based costing is available to capture costs:
- develop cost data for each functional area using groups of well-trained functional experts.
- properly document the process used to develop costs.
- research and substantiate variances in cost data among buying activities.
A-2. Make sure updates to the variable cost-to-procure factor are given to each buying activity and properly input into the materiel management decision file in the Commodity Command Standard System.
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, open
A-3. Review the variable cost-to-procure elements in the materiel management decision file and determine which of the three variable cost-to-procure cost categories should be used to update each element. Provide this information to the buying activities for implementation. Do periodic checks to make sure the elements are updated properly.
A-4. Review the other factors in the materiel management decision file mentioned in this report for accuracy, especially those that haven't been updated in the past 2 years. Specifically, make sure the buying activities update the following factors using data related to the commodity they manage:
- Variable Cost to Hold (General Storage Cost, Discount Rate, Storage Loss Rate, and Disposal Value).
- Probability of No Demands.
- Depot Cost Elements (Stock Issue Cost, Fixed Cost, Receipt Cost for Stocked Item, and Non-Stocked Cost).
- Percent Premium Paid.
- Add-Delete Demands.
B-1. Have the Requirements Integrity Group (or a similar working group) periodically review the factors used in the economic order quantity/variable safety level model for accuracy, especially those discussed in this objective. Provide guidance to buying activities for properly updating factors and make sure updated factors are processed in the automated system.
For the Assistant Secretary of the Army (Acquisition, Logistics and Technology)
1. Issue written policy prescribing the specific roles and responsibilities, processes, and key management controls for developing and integrating automatic identification technology into logistics processes.
As a minimum, include requirements for funding, milestone decisions, in-process reviews, test and evaluation plans, life-cycle cost estimates, benefit analyses, coordination with other system developments, and transfer of finished products. Also, consider subjecting the Army's development of automatic identification technology to the prescribed acquisition procedures of AR 70-1.
2. Prepare a business case analysis for each automatic identification technology application that the Army has ongoing and planned. Adjust applications, if appropriate, based on the results of the business case analyses.
3. Establish a central oversight control within the Army for automatic identification technology. As a minimum, set up a process to:
- monitor all development and funding within the Army for automatic identification technology.
- verify that similar developments aren't duplicative.
For the Commander, U.S. Army Training and Doctrine Command
4. Update the operational requirements document for automatic identification technology. As a minimum, determine the Army-wide need for standoff, in-the-box visibility and document the results in an updated operational requirements document.
Revise the current version of AR 710-2 to make Dollar-Cost Banding mandatory. Set a date for implementing Dollar-Cost Banding that will allow for gradual implementation by major commands, divisions, and other activities with supply support activities.
A-1. Issue a message to all major commands and subordinate activities informing them of problems and best practices identified during our audit. Use the draft advisory message as a guide for preparing the message (Annex E). Advise major commands and divisions responsible for maintaining units on alert status for rapid deployment in response to a crisis to ensure their local policies (such as major command regulations or division Readiness Standing Operating Procedures) include the provisions outlined in the message.
A-2.
Modify AR 710-2 to include guidance for major commands and subordinate activities responsible for maintaining units on alert status for rapid deployment to follow to ensure adequate repair parts support during the initial period of deployment. As a minimum, require that divisions with alert units have:
- an assumption process in place that includes procedures for detailed planning of Class IX requirements.
- a deployment notification process in place with procedures for conducting a summary review of Class IX stocks planned for deployment, considering such factors as the deployment environment, anticipated operating tempo, or intensity of the operations.
A-3. Modify DA Pamphlets 710-2-1 and 710-2-2 to include detailed procedures for divisions to follow to ensure alert forces have adequate Class IX repair parts support. Review the best practices outlined in this report (and the draft advisory message in Annex E) as a starting point for revising the pamphlets.
A-4. Update Field Manual 10-15 (Basic Doctrine Manual for Supply and Storage) to reflect current policies and address the key procedures discussed earlier in this report. Additionally, update the field manual to provide guidance on such issues as:
- how to identify Class IX repair part requirements for alert forces.
- how to identify repair parts shortages and whether to requisition shortage items.
- what priority designator code to use for requisitioning parts during the assumption process and when in alert status.
- when to use pre-packaged inventories.
- when to pre-position parts at airfields (with alert force equipment).
B-1. Include key management controls for alert forces in an appendix of AR 710-2 as prescribed by AR 11-2, or incorporate these controls into the existing Command Supply Discipline Program. Consider our list of key controls contained in Annex H to identify controls for inclusion in the regulation.
The Director of Logistics Readiness, Air Force Deputy Chief of Staff for Installations, Logistics and Mission Support should:
a. Require Air Force personnel to delete all invalid adjusted stock levels identified in the audit.
b. Establish procedures to improve adjusted stock level management. Specifically, revise Air Force Manual 23-110 to:
- address the role of the Logistics Support Centers. Specifically, require Logistics Support Center personnel to approve only base-initiated adjusted stock levels with sufficient justification on Air Force Forms 1996, maintain all Air Force Forms 1996, and initiate the revalidation process.
- improve the revalidation process. Specifically, the guidance should contain the following controls: a revalidation checklist detailing procedures logistics personnel should use to revalidate adjusted stock levels; a requirement that personnel accomplish the revalidation every 2 years; and a requirement to use Air Force Form 1996 to establish each adjusted stock level (including MAJCOM-directed adjusted stock levels) and include a detailed justification of the adjusted stock level's purpose and duration.
A.1. The Air Force Materiel Command Director of Logistics should:
a. Direct air logistics center shop personnel to delete the invalid Credit Due In From Maintenance details identified by the audit (provided separately).
b. Establish procedures requiring an effective quarterly Credit Due In From Maintenance Reconciliation. Specifically, Air Force Manual 23-110, US Air Force Supply Manual, and Air Force Materiel Command Instruction 23-130, Depot Maintenance Material Control, should require maintenance personnel to provide written documentation for each Credit Due In From Maintenance detail (i.e., supported by a "hole" in the end item). If such supporting documentation is not provided, require retail supply personnel to delete the unsupported Credit Due In From Maintenance details.
c.
Develop training for air logistics center shop personnel regarding proper spare part turn-in and Credit Due In From Maintenance Reconciliation procedures. Specifically, the training should define the various ways to turn in spare parts and the differences between each method, to include the impact of improperly turning in spare parts. In addition, proper Credit Due In From Maintenance Reconciliation procedures should be covered in depth, to include training on what constitutes appropriate supporting documentation.
A.2. The Air Force Materiel Command Director of Logistics should:
a. Establish detailed procedures in Air Force Manual 23-110 on how an item manager should validate Due Out To Maintenance additives (i.e., what constitutes a Due Out To Maintenance additive, where the item manager can validate the additive, which priority backorders are associated with Due Out To Maintenance, etc.).
Concurred, open
b. Direct Warner Robins Air Logistics Center to rescind the local policy allowing item managers to increase the Due Out To Maintenance additive quantity to account for install condemnations.
c. Issue a letter to item managers reemphasizing the requirement to document the methodology used to validate changes to Due Out To Maintenance additives and to retain adequate support for the Due Out To Maintenance additive quantity.
A.1. Air Force Materiel Command Directorate of Logistics and Sustainment personnel should update Air Force Materiel Command Manual 23-1, Requirements for Secondary Items, to:
a. Include instruction on what information should be developed and retained to support estimated condemnation rates. The guidance should include maintaining documentation on key assumptions, facts, specific details, decision makers' names and signatures, and dates of decisions so the condemnation percentage can be recreated.
b. Establish sufficient guidance to instruct equipment specialists on managing parts replacement forecasting.
Specifically, develop a standardized method to plan for replacement part acquisition while phasing out the old parts. The Air Force Materiel Command Director of Logistics and Sustainment should: a. Correct the shop flow times for the 211 items with requirements discrepancies. b. Revise the process for computing shop flow times to adhere to DoD 4140.1-R, which requires the removal of awaiting maintenance and awaiting parts times from requirements computations. c. Evaluate the D200A Secondary Item Requirements System computer program to identify and correct the programming deficiencies adversely impacting the shop flow times computation. d. Complete the ongoing automation effort designed to eliminate manual processing errors. A.1. The Air Force Deputy Chief of Staff, Installations and Logistics, should: a. Revise Air Force Manual 23-110 to: (1) Provide supply discrepancy report missing shipment procedures consistent with Air Force Joint Manual 23-215 guidance. (2) Establish supply discrepancy report dollar value criteria consistent with DoD 4500.9-R guidance. b. Establish base supply personnel training requirements on supply discrepancy report procedures and communicate those requirements to the field. Request that the Defense Logistics Agency comply with procedures requiring depot supply personnel to inspect packages and submit supply discrepancy reports when appropriate. A.1. The Air Force Deputy Chief of Staff, Installations and Logistics, should: a. Revise Air Force Manual 23-110 to (1) describe more thoroughly the documentation requirements for data elements used to compute readiness spares package item requirements and (2) require all readiness spares package managers to attend training that includes an adequate explanation of data element documentation requirements. (Concurred, open) b. 
Upgrade the Weapons System Management Information System Requirements Execution Availability Logistics Model to (1) accept mechanical data element transfers directly from other source systems and (2) prompt readiness spares package managers to input documentation notations supporting the rationale for changes in readiness spares package data elements. A.1. The Air Force Materiel Command Directorate of Logistics and Sustainment personnel should: a. Reduce the stock level day standard value from 10 days to 4 days in the D200A Secondary Item Requirements System. b. Develop and implement an automated method in the Advanced Planning and Scheduling system to measure the actual order and ship time needed to replenish depot level maintenance serviceable stock inventories. c. Develop and implement an interim method to measure or estimate depot order and ship time until an automated method is developed. A.1. The Deputy Chief of Staff, Installations and Logistics, Directorate of Logistics Readiness should require the Distribution and Traffic Management Division to: a. Direct Transportation Management Office personnel to communicate to consignors the cost and timing benefits of moving shipments via door-to-door commercial air express carrier service when eligible based on DoD and Air Force guidance. If the consignor refuses the cost-effective mode, require a waiver letter expressing the need to use the Air Mobility Command carrier. b. Develop criteria to allow consignors to adequately identify priority requirements and assign appropriate priority designator codes when shipping assets via Air Mobility Command airlift. These criteria should be included in Air Force Instruction 24-201. c. Instruct Transportation Management Office personnel to properly review all shipping documentation to ensure all required information is completed by the consignor prior to accepting cargo for movement to the Air Mobility Command aerial port. A.1. 
The Air Force Materiel Command Director of Logistics and Sustainment should: a. Establish procedures to properly budget for delayed discrepancy repair requirements by accounting for the eventual return and repair of unserviceable items in the requirements/budget process, starting with the March 2005 computation cycle. b. Develop procedures or include an edit in the new system that flags additives and prompts the item manager to perform thorough reviews of additive requirements. c. Develop a process that requires program managers, item managers, and other applicable program directorate personnel to periodically review program and mission direct additive requirements to verify that duplication has not occurred. (Concurred, closed, implemented; Concurred, open) d. Inform all item managers and air logistics center managers that it is an inappropriate use of mission direct additives to retain excess inventory or preclude contract terminations. Additionally, reiterate regulatory guidance delineating the approved process for retaining excess materiel and preventing contract terminations. A.1. The Air Force Materiel Command Director of Logistics and Sustainment should: a. Direct item managers to correct erroneous requirements identified during this review. b. Revise Air Force Materiel Command Manual 23-1 to clarify procedures for adjusting low demand item requirements. Specifically, ensure the guidance clearly states that item managers may restore previously decreased requirements to their original level. A.1. The Air Force Materiel Command Director of Logistics and Sustainment should: a. Direct item managers to correct all erroneous requirements computations and related budgets identified during this review. b. Revise Air Force Materiel Command Manual 23-1 to correct guidance conflicts. Specifically, ensure the guidance only contains the correct standard requirements (3 days for base processing times and 10 days for reparable intransit times). A.1. 
The Air Force Materiel Command Director of Logistics should revise Air Force Materiel Command Manual 23-1 to: a. Require item managers to review and identify excess next higher assemblies that could be used to satisfy indentured item repair, as well as buy, requirements. b. Provide specific procedures for item managers to follow to satisfy the indentured item buy and repair requirements. Revise training, and then train item managers to use indentures system data to identify excess next higher assemblies that could be used to satisfy indentured item requirements. B.1. The Air Force Materiel Command Director of Logistics should: a. Require equipment specialists to correct inaccurate indentures system data. b. Publish the draft guidance requiring equipment specialists to ensure indentures system data accuracy. c. Train equipment specialists to maintain indentures system data accuracy. The Air Force Materiel Command Director of Logistics should: a. Collect the unserviceable parts identified during the audit from the contractors or adjust the price of those parts (FY 2000-2002, $238.9 million and estimated FY 2003, $79.6 million). b. Establish a mechanism to track the issue and return of parts issued to customers who subsequently provide those parts to contractors, as prescribed in Air Force Manual 23-110, Volume I, Part 3, Chapter 7. c. Either revise the policy to issue parts to customers who subsequently provide those parts to contractors at standard price, or develop a due-in-from-maintenance-like control to adjust the part’s price if the unserviceable parts are not returned. A.1. The Deputy Chief of Staff, Installations and Logistics should: a. Revise Air Force Instruction 21-104 to require engine managers to input a follow-on tasked unit into the requirements computation system as a single unit. b. Modify PRS software to compute spare engine needs based on the combined flying hours for follow-on tasked units. A.1. 
The Air Force Materiel Command Supply Management Division should: a. Implement corrective software changes to the Secondary Item Requirements System and the Central Secondary Item Stratification Subsystem to remove the Other War Reserve Materiel requirements from the Peacetime Operating Spares requirements and report Other War Reserve Materiel requirements separately. b. Implement interim procedures to remove Other War Reserve Materiel requirements from the Peacetime Operating Spares requirements and budget, and report Other War Reserve Materiel requirements separately, until they implement Recommendation A.1.a. A.1. The Air Force Materiel Command Director of Logistics should: a. Direct maintenance management personnel to provide adequate oversight to ensure maintenance personnel turn in all aircraft parts to the Weapon System Support Center or courtesy storage areas. b. Revise Air Force Materiel Command Instruction 21-130 to direct air logistics center Weapon System Support Center management to establish a supply inventory monitor to oversee maintenance work areas, ensuring excess parts are turned in to Weapon System Support Center or courtesy storage areas. Reemphasize the regulatory requirement (Air Force Materiel Command Instruction 21-130) to the air logistics center maintenance supervisors to assign a maintenance inventory control monitor to oversee the maintenance areas and ensure maintenance personnel tag and label all parts with the applicable aircraft number and the serviceability condition. Request that the Air Force Materiel Command Director of Logistics include Air Force Logistics Management Agency Stocking Policy 11 in the Readiness Base Leveling system to calculate C-5 forward supply location spare parts stock levels. Instruct item manager specialists that Air Force Form 1996 is not required to maintain Army Materiel Command Forward supply secondary item requirements in the Secondary Item Requirements System. A.1. 
The Air Force Materiel Command Director of Logistics should: a. Remove the D200A Secondary Item Requirements System automatic asset balance variance adjustment. b. Establish training requirements for air logistics center personnel on how to research and resolve D200A Secondary Item Requirements System asset balance variances. [Status entries: Concurred, closed, implemented (8); Concurred, closed, not implemented (3)] c. Revise Air Force Materiel Command Manual 23-1 to require that item managers defer an item’s buy and/or repair requirement until reconciling any asset balance variance greater than a specified threshold (variance percent, quantity, and/or dollar value). d. Establish asset balance variance oversight procedures to verify that item managers resolve asset balance variances. A.1. The Air National Guard, Deputy Chief of Staff, Logistics, should: a. Address to subordinate units the importance of following Air Force equipment guidance related to small arms accountability, inventory, documentation, storage, and disposal, and the competitive marksmanship program. b. Request the Air National Guard Inspector General to include small arms accountability, inventory, documentation, storage, and disposal requirements as a special emphasis area in unit inspections. B.1. The Air National Guard, Deputy Chief of Staff, Logistics, should: a. Direct all Air National Guard units to revalidate small arms and conversion kit requirements using Allowance Standard 538. b. Recompute requirements (including M-16 conversion kits), reallocate small arms on hand based on adjusted authorizations, and adjust requirements and requisitions, as needed, following the reallocations. A.1. 
The Air Force Materiel Command Director of Logistics should revise Air Force Manual 23-110 to include specific material management transition guidance. Specifically, the guidance should require: a. Transition gaining locations to have a training plan in place to ensure personnel are adequately trained before working asset buy and repair requirement computations. b. Air Force Materiel Command personnel to establish a transition team to monitor all stages of the transition, to include ensuring personnel are adequately trained and providing additional oversight over requirement computations worked by new item managers. Revise Standard Base Supply System transaction processing procedures to automatically select special requisition Air Force routing identifier codes. Issue guidance to base supply personnel reminding them of proper receipt transaction procedures. Discontinue the automated transaction deletion program since the revised Standard Base Supply System procedures render the program obsolete. C.2. The Deputy Chief of Staff, Installations and Logistics should: a. Revise Air Force Manual 23-110 to direct working capital fund managers to input reversing entries that will correct erroneous transactions identified during monthly M01 list reviews. b. Direct all base supply working capital fund managers to: (1) Review the most current M01 list to evaluate the propriety of all transactions affecting the Purchases at Cost account. (2) Input reversing entries to correct any erroneous transactions identified during the M01 list review. This will correct all deficiencies, including those described in Results-A and Results-B. A.1. The Air Force Reserve Command, Deputy Chief of Staff, Logistics, should: a. Address to subordinate units the importance of following Air Force equipment guidance related to small arms accountability, inventory, documentation, storage, and disposal. b. 
Request the Air Force Reserve Command Inspector General to include small arms accountability, inventory, documentation, storage, and disposal requirements as a special emphasis area in unit inspections. B.1. The Air Force Reserve Command, Deputy Chief of Staff, Logistics, should: a. Request all Air Force Reserve Command units to revalidate small arms and conversion kit authorizations using Allowance Standard 538. b. Recompute requirements (including M-16 conversion kits), reallocate small arms on hand based on recomputed authorizations, and adjust requirements and requisitions, as needed, following the reallocations. Finalize and issue the revised Air Force Manual 23-110 requiring personnel to identify secondary items and return them to the primary control activity in a timely manner. Finalize and issue the revised Air Force Manual 23-110 requiring personnel to research and validate credit due on repairable items returned to the primary control activity. The Office of the Commander, U.S. Fleet Forces Command should: 1. Emphasize Chief of Naval Operations requirements that all ships maintain proper inventory levels based on authorized allowances and demand history. 2. Emphasize Chief of Naval Operations and Naval Supply Systems Command internal control procedures to ensure inventory levels in the Hazardous Material Minimization Centers remain within the authorized limits, and return material exceeding requisitioning objectives to the supply system. 3. Emphasize Chief of Naval Operations requirements that ships requisition only hazardous materials authorized for shipboard use, and return unauthorized material to the supply system. 4. Enforce Naval Supply Systems Command requirements that ships prepare and submit Ship’s Hazardous Material List Feedback Reports and Allowance Change Requests whenever required. The Naval Supply Systems Command should: 5. 
Establish an interface between authorized allowance documents and the Type-specific Ship’s Hazardous Material List to ensure that hazardous material items authorized for shipboard use also have authorized allowance levels. 6. Establish procedures to validate Hazardous Material Minimization Centers’ low and high inventory levels against the inventory levels in Relational Supply for the same items, to ensure Hazardous Material Minimization Centers’ high limits do not exceed Relational Supply high limits. 7. Establish procedures that require unissued hazardous material in the Hazardous Material Minimization Centers to be counted as on-hand inventory before reordering Relational Supply stock. 8. Develop and implement a hazardous material usage database that accumulates and retains data on supply system hazardous material ordered and used by the ship for use in planning future hazardous material requirements. 9. Establish procedures to ensure that Enhanced Consolidated Hazardous Material Reutilization and Inventory Management Program Afloat Program technicians perform tasks in accordance with the Enhanced Consolidated Hazardous Material Reutilization and Inventory Management Program Afloat Program Desk Guide. 10. Establish a working group to determine the feasibility of developing ship-specific allowance-control documents for all items managed in the Hazardous Material Minimization Centers not already on an approved allowance list. The Office of the Commander, U.S. Fleet Forces Command should: 11. Return the prohibited undesignated hazardous material items to the supply system for credit. The Naval Sea Systems Command, with the assistance of Naval Supply Systems Command, should: 12. Establish formal written guidance stating which systems allowance list hazardous material is designated for and the quantities currently allowed. 
Guidance should include requisitioning metrics that cross-check hazardous material items against designated system designs as generated by Naval Inventory Control Point and Naval Surface Warfare Center Carderock Division – Ship System Engineering Station, technical manuals, and the one-time General Use Consumables List. 13. Clarify Naval Sea Systems Command Instruction 4441.7B/Naval Supply Systems Command Instruction 4441.29A to measure the quality of hazardous material load-outs instead of the quantity or percentage of hazardous material loaded on ships. The Office of the Supervisor of Shipbuilding, Conversion, and Repair Newport News should: 14. Discontinue requisitioning aircraft cleaning, maintenance, and preservation hazardous material for actual aircraft before Post Shakedown Availability. 15. Establish formal written local procedures that require detailed support, justification, and audit documentation for system validation on all hazardous material requisitions received from ship personnel after Load Coordinated Shipboard Allowance List delivery. This support should indicate the specific system the item is required for and the document numbers for the Preventative Maintenance Schedule, Maintenance Request Cards, Allowance Equipage List, Allowance Parts List, General Use Consumables List, and technical manuals. An Allowance Change Request should be included, if applicable. 16. Use the Outfitting Support Activity when requisitioning all hazardous material items for ship initial outfitting to minimize local procurement, as required by the Navy Outfitting Program Manual of September 2002. The Naval Supply Systems Command should: 17. Enforce compliance with established guidance for material offloads to ensure uniform use of DD Form 1348 documents among ships and the proper processing of Transaction Item Reporting documents to ensure inventory accuracy. 18. 
Update the Enhanced Consolidated Hazardous Material Reutilization and Inventory Management Program Afloat Program Desk Guide to include specific requirements for the Enhanced Consolidated Hazardous Material Reutilization and Inventory Management Program Afloat Program technician when offloading Naval Supply Systems Command-owned hazardous material. The Naval Inventory Control Point should: 1. In coordination with Naval Air Systems Command, update policy and procedures issued to field activities on managing and reporting aircraft engine/module container inventory. 2. Require Fleet activities to provide a daily transaction item report of all intra-activity receipts and issues of engine/module containers to item managers. 3. Establish controls to ensure containers are not procured in excess of requirements. [Status entries: Concurred, closed, implemented (8); Nonconcurred, closed, implemented (1)] 4. Include the Aircraft Engine Container Program as an assessable unit in Naval Inventory Control Point’s Management Control Program. The Naval Air Systems Command should: 5. Fully fund the engine/module repair container program in accordance with requirements generated by Naval Inventory Control Point. 6. Report any engine/module containers costing $5,000 or more in the Defense Property Accounting System. The Naval Inventory Control Point and Naval Air Systems Command should: 7. Require Naval Aviation Depots, Aircraft Intermediate Maintenance Depots, and Fleet activities to perform periodic inventories of engine/module containers, and report the results to Naval Inventory Control Point’s item managers. The Commandant of the Marine Corps should: 1. Terminate the Norway Air-Landed Marine Expeditionary Brigade program. 2. 
Prepare a comprehensive statement encompassing disposal costs, equipment condition, and the status of outstanding procurements and repairs of the excess on-hand ground equipment and supplies, and identify Norway Air-Landed Marine Expeditionary Brigade program items that would satisfy outstanding procurements and repairs for fiscal year 2003 and the out years. 3. Cancel the planned modernization procurements associated with the replacement of Norway Air-Landed Marine Expeditionary Brigade equipment, subject to negotiated termination costs for one of the six modernization projects. 4. Cancel all procurements that replenish Norway Air-Landed Marine Expeditionary Brigade preposition inventory shortages. The Deputy Chief of Naval Operations, Warfare Requirements and Programs should: 1. Perform analyses to establish validated engine readiness requirements, incorporate ready-for-training engine readiness rates for training aircraft engines, and establish separate requirements for different categories of aircraft (such as combat, support, and training). 2. Formally document the engine requirements and supporting rationale in Department of the Navy guidance. The Deputy Chief of Naval Operations, Fleet Readiness and Logistics should: 3. Coordinate with Naval Inventory Control Point and Naval Air Systems Command to require more realistic parameter inputs to the Retail Inventory Model for Aviation while encouraging engine maintenance strategies that will ultimately reduce turnaround time and increase reliability (mean time between removal). 4. Issue written guidance to assign responsibility for calculating engine war reserve requirements and the need to compute additional war reserve engine/module requirements. The Deputy Chief of Naval Operations, Warfare Requirements and Programs should: 5. 
Adjust out-year F414-GE-400 engine and module procurement requirements (to be reflected in the President’s 2004 Budget) to agree with Naval Inventory Control Point’s revised Baseline Assessment Memorandum 2004 requirements. The Commander, Naval Inventory Control Point should: 6. Reiterate Secretary of the Navy policy that documentation supporting official Baseline Assessment Memorandum submissions be retained for no less than 2 years. The Deputy Chief of Naval Operations, Fleet Readiness and Logistics should: 7. In coordination with the Deputy Chief of Naval Operations, Warfare Requirements and Programs, establish policy and adjust the procurement strategy for F414-GE-400 engines and modules to procure (based on current audit analyses) approximately 30 percent whole engines and 70 percent separate engine modules, and thereby improve the engine/module repair capability. 8. Issue guidance requiring Naval Air Systems Command to determine, and annually reevaluate, the engine-to-module procurement mix for the F414-GE-400. The Commander, Naval Air Systems Command should: 9. Reduce out-year AE1107C spare engine procurement by 12 (changed to 8 after receipt of management comments) through fiscal year 2008. 10. Adhere to the Chief of Naval Operations-approved model (Retail Inventory Model for Aviation) for calculations of spare engine requirements. The Deputy Chief of Naval Operations, Warfare Requirements and Programs should: 11. Adjust planned out-year Aircraft Procurement, Navy-6 (APN-6) procurement requirements to reduce the quantities of T700-401C Cold and Power Turbine Modules by 10 each. The Commandant of the Marine Corps should: 1. Validate the Time-Phased Force Deployment Database equipment requirements, determine how the Marine Corps will source (make available) the equipment required, and determine whether the required equipment is on the unit’s table of equipment. 2. 
Evaluate the Asset Tracking Logistics and Supply System II+ to determine if it adequately meets user needs and, if not, take sufficient action to correct identified deficiencies. 3. Perform onsite technical assessments to determine the extent of required maintenance/repair. 4. Provide dedicated organic or contract resources to reduce maintenance backlogs. 5. Establish an acceptable level of noncombat deadline equipment relative to the total combat deadline equipment and total equipment possessed, and report it outside the unit to the Marine Expeditionary Force commander. This would help ensure that the extent of nonmajor maintenance/repair requirements receives appropriate visibility and support requests for resources to reduce maintenance backlogs. Create a Joint Logistics Command that is responsible for the global end-to-end supply chain and that includes the U.S. Transportation Command mission, the Defense Logistics Agency, and the services’ logistics and transportation commands as components of the Joint Logistics Command, with Regional Combatant Commanders retaining operational control of the flow of in-theater logistics and program managers retaining responsibility for the lifecycle logistics support plan and configuration control. Lead the work to create an integrated logistics information system. Appoint an external advisory board of relevant industry experts to assist in guiding this effort. Specific recommendations were made for tactical supply, theater distribution, strategic distribution, national- and theater-level supply, and command and control. Supply chain planning needs to be better integrated with a common supply chain vision. The newly designated distribution process owner (U.S. Transportation Command), in concert with the Army, the other services, and the Defense Logistics Agency, should develop and promulgate a common vision of an integrated supply chain. The complementary, not redundant, roles of each inventory location, distribution node, and distribution channel should be defined. 
Every joint logistics organization should examine and refine its processes to ensure detailed alignment with this vision. Review doctrine, organizational designs, training, equipment, information systems, facilities, policies, and practices for alignment with the supply chain vision and defined roles within the supply chain. The assumptions embedded within the design of each element of the supply chain with regard to other parts of the supply chain should be checked to ensure that they reflect realistic capabilities. Improve the joint understanding of the unique field requirements of the services. Likewise, the services need to understand the Defense Logistics Agency, U.S. Transportation Command, and General Services Administration processes and information requirements, as well as those of private-sector providers. Metrics should be adopted to maintain alignment with the vision. Logistics information systems need adequate levels of resources to provide non-line-of-sight mobile communications and effective logistics situational awareness in order to make new and emerging operational and logistics concepts feasible. Deliberate and contingency planning should include improved consideration of the logistics resource requirements necessary to execute sustained stability and support operations. Resourcing processes should consider uncertainty and the implications of capacity shortages. The flexibility of financial and resource allocation processes to respond rapidly to the need for dramatic changes in logistics capacity that sometimes arises from operational forecast error should be improved. Logistics resource decisions should more explicitly consider how much buffer capacity should be provided in order to handle typical operational and demand variability without the development of large backlogs. Joint training should be extended to exercise the entire logistics system. 
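The buffer-capacity point above is, in commercial inventory practice, often made explicit with a standard safety stock calculation. The sketch below illustrates one way such a decision could be quantified; the formula (buffer = z × demand standard deviation × √lead time) is the conventional commercial one, and all parameter values are hypothetical rather than drawn from the report.

```python
import math

# Illustrative sketch of making buffer capacity explicit, as the text
# recommends. Uses the standard commercial safety stock formula:
#   buffer = z * sigma_demand * sqrt(lead_time)
# All parameter values below are hypothetical, not from the report.

def buffer_capacity(daily_demand_stddev, lead_time_days, z_score):
    """Safety stock needed to absorb demand variability over the
    replenishment lead time at a chosen service level (z-score)."""
    return z_score * daily_demand_stddev * math.sqrt(lead_time_days)

# Example: demand standard deviation of 40 units/day, a 16-day
# resupply pipeline, and z = 1.65 (roughly a 95 percent service level).
needed = buffer_capacity(40, 16, 1.65)
print(round(needed))  # 264 units of buffer stock
```

A calculation of this form makes the trade-off the text describes visible: raising the service level (z) or lengthening the pipeline increases the buffer required, so capacity decisions can be argued in terms of tolerated backlog risk rather than set implicitly.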
The Army should review all wartime and contingency processes from the tactical to the national level to determine which are not exercised in training with all requisite joint organizations participating. Such processes range from setting up tactical logistics information systems to planning a theater distribution architecture to determining national-level spare parts distribution center capacity requirements. Review which tasks and processes do not have adequate doctrine and mission training plans. Planning tools and organizational structures need to better support expeditionary operations. Automation should more effectively support the identification of logistics unit requirements to support a given operation. Unit “building blocks” should be the right size and modular to quickly and effectively provide initial theater capabilities and then to facilitate the seamless ramp-up of capacity and capability as a deployment matures. Conclusions and recommendations fall into three categories: programmatic, constructive, and operational. Programmatic conclusions and recommendations address logistics transformation and interoperability. If interoperability is important to transformation, the Office of the Secretary of Defense must fund it adequately and specifically, not just the component systems and organizations being integrated. Services and agencies will be reluctant to act against their own financial interest. Title 10 can be used to prevent joint logistics transformation and interoperability, and needs clarification. If a Logistics Command is created, Title 10 may need to be amended. Expanded Office of the Secretary of Defense leadership (beyond technical standardization) for joint logistics transformation is necessary to effect change. The Logistics Systems Modernization office’s efforts to realign business processes and to prioritize rapid return-on-investment initiatives are a good start and can be expanded. A 4-Star Combatant Command – U.S. 
Logistics Command – in charge of logistics needs to be created, following the example of the U.S. Strategic Command. The responsibilities and enforcement powers of this Logistics Command may be significantly different than the U.S. Strategic Command model and require clear specification. Some responsibilities that this Command could undertake include: Defining the distribution authorities, scenarios, business processes and process ownership at the “hand-off” from U.S. Transportation Command distribution to services distribution. Developing doctrine and implementing joint business processes and rules for logistics interoperability between services, prioritizing known problem and conflict areas, and assigning ownership of business processes across the broader Supply Chain Operations Reference-defined supply chain. Identifying budget requirements for logistics interoperability, and requiring logistics interoperability to be adequately funded and planned as part of the acquisition process of any logistics systems. Accelerating interoperability testing of all Global Combat Support System implementations both within and across services and agencies, with a spiral development methodology. Coordinating and communicating various isolated ongoing efforts in defining logistics Extensible Markup Language schema, business processes, databases, published web services and other joint logistics projects, with the Integrated Data Environment and Enterprise Resource Planning programs underway in the services and agencies. Where conflicts, redundancies or gaps are identified, the U.S. Logistics Command may function as an “honest broker” to develop an interoperable solution, or as a “sheriff” to enforce an interoperable solution. 
A single logistics business process model needs to be created as a common reference, with the understanding that the modeling effort will be descriptive rather than prescriptive, due to Services’ autonomy and the need to continue migrating legacy systems and building new logistics capability. Since all Services, Agencies and the Office of the Secretary of Defense are employing the Supply Chain Operations Reference Model for logistics, some degree of commonality should already exist. If the process modeling effort can build on existing U.S. Transportation Command/Defense Logistics Agency business process models, and incorporate business process models from each of the Services, it may be available earlier and used more effectively. A “greenfield” effort may have limited utility and never get beyond the requirements stage. Efforts to align logistics data are underway within the Joint Staff Logistics Directorate, and in the ongoing U.S. Transportation Command/Defense Logistics Agency modeling. The touchpoints between these alignment efforts and the actual Enterprise Resource Planning implementations within the services and joint agencies could be expanded. A variety of “to-be” logistics business process models must be generated to meet the requirements of varying future war fighting scenarios. For example, loss of space assets or enemy use of electromagnetic pulse will create significant constraints on logistics interoperability, and contingency business processes should be designed for those scenarios. The logistics business process must be defined from end-to-end at the DOD level, and then Services and Agencies must assess how they will or will not align with those processes. 
Alignment, interoperability and jointness are consensus goals for system development, but some Service decisions not aligned with specific DOD level processes may provide net benefits and increase the robustness of the overall logistics System of Systems (the federated supply chain, or loosely-coupled approach). The ongoing questions that the U.S. Logistics Command will address are these: Should the default state for interoperability be alignment, with non-alignment developed as a scenario-based exception? Or should the default state for interoperability be non-alignment, with occasional moments of alignment (specific data feeds of a finite duration)? Some form of charter or statutory legislation is needed to prevent joint logistics transformation from backsliding into non-interoperable organizations and systems, when leadership changes. Change management for joint logistics needs to be resourced specifically, in addition to current resources for logistics transformation within services and joint agencies. Fuse the logistics and transportation functions into an integrated U.S. Logistics Command. Implement the Beyond Goldwater-Nichols Phase I recommendation to merge much of the Joint Staff Directorate of Logistics with its Office of the Secretary of Defense counterpart, the Deputy Under Secretary of Defense (Logistics & Material Readiness), into an office that reports to the Under Secretary for Technology, Logistics, and Acquisition Policy. The public sector should seek to bolster the fault tolerance and resilience of the global container supply chain. The closure of a major port, for whatever reason, would have a significant effect on the U.S. economy. The federal government should lead the coordination and planning for such events for two reasons. First, the motivation of the private sector to allocate resources to such efforts is subject to the market failures of providing public goods. 
Second, the government will be responsible for assessing security and for decisions to close and reopen ports. Security efforts should address vulnerabilities along supply-chain network edges. Efforts to improve the security of the container shipping system continue to be focused on ports and facilities (although many ports around the world still failed to meet International Ship and Port Security Code guidelines even after the July 1, 2004, deadline). Unfortunately, the route over which cargo travels is vast and difficult to secure. Measures to keep cargo secure while it is en route are essential to a comprehensive strategy to secure the global container supply chain. Research and development should target new technologies for low-cost, high-volume remote sensing and scanning. Current sensor technologies to detect weapons or illegal shipments are expensive and typically impose significant delays on the logistics system. New detection technologies for remote scanning of explosives and radiation would provide valuable capabilities to improve the security of the container shipping system. Codify in joint doctrine the distinction between joint theater-level logistics and Army/Land component logistics requirements and the need for a joint theater-level logistics commander. Document a Joint Theater Sustainment Command and assign it to Combatant Commands. Implement useful practices of other services. Don’t preclude early use of the Logistics Civilian Augmentation Program. Complete a thorough business-based cost/benefit analysis of Radio Frequency Identification before spending more money on it. Make directive authority for the Combatant Command real. Joint doctrine must: Be prescriptive in its language, purging words like “should” and “attempt” and replacing them with specific direction. Be joint and comprehensive. 
It must explicitly address the joint organizational structure and staffing, develop and institutionalize joint processes and procedures, and specifically require, not assume, the necessary communications infrastructure and information tools to support this vision. Support an expeditionary logistics capability to enable rapid deployment and sustainment of flexible force structures in austere theaters around the globe. Reconcile with the emerging concepts of net-centric warfare and sense and respond logistics, balancing past lessons with the needs for the future. Joint doctrine must be based on today’s capabilities, not tomorrow’s promises. Continue to identify the combatant commander as the locus of control for logistics in support of deployed forces, and specify the tools, forces, processes, and technologies required from supporting commands. Develop a true expeditionary logistics capability. Develop logistics systems able to support expeditionary warfare. Logistics systems must be designed, tested, and developed to support a mobile, agile warfighter. Logistics capabilities need to be native to an expeditionary unit for swift and agile deployment. The people, equipment, and systems that accompany these small, cohesive units must be able to integrate data within the services and commands as well as among the coalition partners. Logistics communications planning and infrastructure are an integral part of any operation, and must be robust, fully capable, and deployable in both austere and developed environments. Planning and development of the required infrastructure must consider the issues of bandwidth, mobility, security and aggregation of logistics data. Retool the planning processes. A follow-on replacement for the current Time-Phased Force and Deployment Data/Joint Operation Planning and Execution System process is required, with the necessary improvements in task structures and planning speed. 
This process should directly drive sustainment planning, including acquisition and distribution decisions. The challenge of requirements identification and fulfillment in a deployed environment is a joint challenge. Planning tools must be developed that recognize and fuse the consumption of materiel and fulfillment of warfighter requirements across the joint force. The speed and flexibility of future operations demand that a closer and more dynamic relationship be developed with suppliers in the industrial base and prime vendor partners. Create an integrated theater distribution architecture. Theater distribution capability must be embedded in a permanent organization within the theater or at least rapidly deployable to any global location. The balance of reserve forces and the implications of the activation cycle must be considered in the development of this organizational structure and manning. The need for a joint in-theater distribution cross dock, staging, and break-bulk operation must be explicitly recognized in every Combatant Command Area of Responsibility. Rapid maneuver and task reorganization precludes a 100% “pure pallet” shipment. Retrograde and reverse logistics capabilities must also be embedded. Leadership must recognize that the growth and development of “joint logisticians” who can operate and lead effectively in the theater environment will take time and effort, potentially altering established career progression plans. Resolve the technology issues. Rationalize logistics systems. Current battlefield and deployment realities include the existence of multiple systems for logistics support. DOD must complete and deploy an integrated architecture, including operational, systems, technical, and data elements to streamline the systems capabilities to the joint warfighter, and manage the portfolio of systems to eliminate those that cannot support the future state. Create visibility within logistics and supply systems that extends to the tactical units. 
Today’s warfighting mission includes mobile expeditionary engagements. Support systems need to include the ability to communicate and synchronize with rear support units and systems 24 hours a day, 365 days a year in both austere and developed environments. Ensure communications capability and availability for logistics, regardless of the environment. Logistics is an information-intensive function with constant requirements for updated information. Logistics support planning needs to include communications-level planning and should be completed before deployment. Develop the foundational role of the Distribution Process Owner. The Distribution Process Owner concept must be implemented swiftly and should recognize the potential resource requirements in the near- and mid-term to complete this task. This is a necessary first step, addressing distribution challenges, and should facilitate the establishment of an integrated, end-to-end logistics architecture, eliminating the confederation of stovepipes. Financial and transactional systems should not be a hindrance to going to war: They must be designed so that the transition from peace to war is seamless; the ability to employ these systems in a deployed environment must take precedence over garrison requirements. More emphasis needs to be placed on managing retrograde and repairables. Processes must be synchronized and integrated across the stovepipes. Synchronize the chain: from Continental United States to Area of Responsibility. Capacities across the distribution nodes and distribution links, and across the entire logistics network but particularly in theater, must be reviewed, understood, and actively managed. The ability to determine and manage practical and accurate throughput capacities for air and seaports, along with an understanding of the underlying commercial infrastructure, is essential to future planning. The ability to evaluate possible scenarios for host nation support is also critical. 
Deploy Performance Based Logistics agreements more comprehensively. Standardize Performance Based Logistics implementation. Implementation of Performance Based Logistics must become more standard to prevent confusion with other contractor support services and activities. To the extent possible, common metrics and terms must be developed and applied. Implement Performance Based Logistics across total weapons systems. Support broad end-to-end application. Much integration and synchronization is required to ensure full system synchronization of performance metrics, but the end capability of tracking total system performance to both cost and “power by the hour” is a significant potential advancement in warfighter support. Make Radio Frequency Identification real. Extend Radio Frequency Identification to the warfighter. Asset tracking system capabilities, infrastructure, and support must extend to the farthest reaches of the logistics supply chain, even in austere environments. Do not combine U.S. Transportation Command and Defense Logistics Agency. Roles, missions and competencies of the two organizations are too diverse to create a constructive combination. Organizational merger would not significantly facilitate broader transformational objectives of supply chain integration. Both organizations perform unique activities/functions in the supply chain. The real problem is not that the two organizations are separate, but that their activities are not well integrated. Elevate leadership for Department of Defense global supply chain integration. Designate a new Under Secretary of Defense for Global Supply Chain Integration reporting directly to the Secretary of Defense. Ensure the Global Supply Chain Integrator is a civilian with established credibility in the field of supply chain management. Establish the Global Supply Chain Integrator’s appointment as a fixed term for a minimum of 6 years. Direct the U.S. 
Transportation Command and the Defense Logistics Agency to report to the Global Supply Chain Integrator. Create a working relationship for the Global Supply Chain Integrator with the Chairman of the Joint Chiefs of Staff. Build the Global Supply Chain Integrator’s staff from existing staffs in the Office of the Secretary of Defense, the U.S. Transportation Command, and the Defense Logistics Agency. Empower a Global Supply Chain Integrator with the required authority and control to effect integration. The Global Supply Chain Integrator should be granted authority to: Build end-to-end integrated supply chains through the establishment of policies and procedures. Enable privatization and partnering with global commercial distributors. Oversee program management decisions related to major systems vendor support. Establish/authorize organizations and processes to control flow during deployment/wartime scenarios. Control budgetary decisions affecting the U.S. Transportation Command, the Defense Logistics Agency, and the distribution budgets of the services. In addition to the contacts named above, key contributors to this report were Thomas W. Gosling, Assistant Director, Susan C. Ditto, Amanda M. Leissoo, Marie A. Mak, and Janine M. Prybyla.
Military operations in Iraq and Afghanistan have focused attention on the Department of Defense's (DOD) supply chain management. The supply chain can be critical to determining outcomes on the battlefield, and the investment of resources in DOD's supply chain is substantial. In 2005, with the encouragement of the Office of Management and Budget (OMB), DOD prepared an improvement plan to address some of the systemic weaknesses in supply chain management. GAO was asked to monitor implementation of the plan and DOD's progress toward improving supply chain management. GAO reviewed (1) the integration of supply chain management with broader defense business transformation and strategic logistics planning efforts; and (2) the extent DOD is able to demonstrate progress. In addition, GAO developed a baseline of prior supply chain management recommendations. GAO surveyed supply chain-related reports issued since October 2001, identified common themes, and determined the status of the recommendations. DOD's success in improving supply chain management is closely linked with its defense business transformation efforts and completion of a comprehensive, integrated logistics strategy. Based on GAO's prior reviews and recommendations, GAO has concluded that progress in DOD's overall approach to defense business transformation is needed to confront problems in other high-risk areas, including supply chain management. DOD has taken several actions intended to advance business transformation, including the establishment of new governance structures and the issuance of an Enterprise Transition Plan aligned with the department's business enterprise architecture. As a separate effort, DOD has been developing a strategy--called the "To Be" logistics roadmap--to guide logistics programs and initiatives across the department. 
The strategy would identify the scope of logistics problems and capability gaps to be addressed and include specific performance goals, programs, milestones, and metrics. However, DOD has not identified a target date for completion of this effort. According to DOD officials, its completion is pending the results of the department's ongoing test of new concepts for managing logistics capabilities. Without a comprehensive, integrated strategy, decision makers will lack the means to effectively guide logistics efforts, including supply chain management, and the ability to determine if these efforts are achieving desired results. DOD has taken a number of actions to improve supply chain management, but the department is unable to demonstrate at this time the full extent of the progress that may have resulted from its efforts. In addition to implementing audit recommendations, DOD is implementing initiatives in its supply chain management improvement plan. However, it is unclear how much progress has resulted from these actions because the plan generally lacks outcome-focused performance metrics that track progress in the three focus areas and at the initiative level. DOD's plan includes four high-level performance measures, but these measures do not explicitly relate to the focus areas, and they may be affected by many variables, such as disruptions in the distribution process, other than DOD's supply chain initiatives. Further, the plan does not include overall cost metrics that might show efficiencies gained through the efforts. Therefore, it is unclear whether DOD is meeting its stated goal of improving the provision of supplies to the warfighter and improving readiness of equipment while reducing or avoiding costs. Over the last 5 years, audit organizations have made more than 400 recommendations that focused specifically on improving certain aspects of DOD's supply chain management. 
About two-thirds of the recommendations had been closed at the time GAO conducted its review, and most of these were considered implemented. Of the total recommendations, 41 percent covered the focus areas in DOD's supply chain management improvement plan: requirements forecasting, asset visibility, and materiel distribution. The recommendations addressed five common themes--management oversight, performance tracking, planning, policy, and processes.
Opportunities for servicewomen have increased dramatically since 1948, when the Women’s Armed Services Integration Act of 1948 gave women a permanent place in the military services. However, the act excluded women from serving on Navy ships (except hospital ships and transports) and aircraft engaged in combat missions. Because the Marine Corps is a naval-oriented air and ground combat force, the exclusion of women from Navy ships essentially barred them from combat positions in the Marine Corps as well. The Women’s Army Corps already excluded women from combat positions, eliminating the need for a separate statute for Army servicewomen. During the 1970s, Congress and the services created more opportunities for women in the military. In 1974, the age requirement for enlistment without parental consent became the same for men and women. Then, in 1976, women were admitted to the Air Force Academy, the Naval Academy, and the Military Academy. In 1977, the Army implemented a policy that essentially opened many previously closed occupations, including some aviation assignments, but formally closed combat positions to women. Finally, in 1978, Congress amended the 1948 Integration Act to allow women to serve on additional types of noncombat ships. The Navy and the Marine Corps subsequently assigned women to noncombat ships such as tenders, repair ships, and salvage and rescue ships. In February 1988, DOD adopted a Department-wide policy, called the Risk Rule, that set a single standard for evaluating positions and units from which the military services could exclude women. The rule excluded women from noncombat units or missions if the risks of exposure to direct combat, hostile fire, or capture were equal to or greater than the risk in the combat units they supported. Each service used its own mission requirements and the Risk Rule to evaluate whether a noncombat position should be open or closed to women. 
The National Defense Authorization Act for Fiscal Years 1992 and 1993 repealed the prohibition on the assignment of women to combat aircraft in the Air Force, the Navy, and the Marine Corps. The act also established the Presidential Commission on the Assignment of Women in the Armed Forces to study the legal, military, and societal implications of amending the exclusionary laws. The Commission’s November 1992 report recommended retaining the direct ground combat exclusion for women. In April 1993, the Secretary of Defense directed the services to open more specialties and assignments to women, including those in combat aircraft and on as many noncombatant ships as possible under current law. The Army and the Marine Corps were directed to study the possibility of opening more assignments to women, but direct ground combat positions were to remain closed. The Secretary of Defense also established the Implementation Committee, with representatives from the Office of the Secretary of Defense, the military services, and the Joint Chiefs, to review the appropriateness of the Risk Rule. In November 1993, Congress repealed the naval combat ship exclusions and required DOD to notify Congress prior to opening additional combat positions to women. In January 1994, the Secretary of Defense, in response to advice from the Implementation Committee, rescinded the Risk Rule. In DOD’s view, the rule was no longer appropriate based on experiences during Operation Desert Storm, where everyone in the theater of operation was at risk. The Secretary also established a new DOD-wide direct ground combat assignment rule that allows all servicemembers to be assigned to all positions for which they qualify, but excludes women from assignments to units below the brigade level whose primary mission is direct ground combat. The purpose of this change was to expand opportunities for women in the services. 
Additionally, the Secretary stipulated that no units or positions previously open to women would be closed. At that time, the Secretary issued a definition of direct ground combat to ensure a consistent application of the policy excluding women from direct ground combat units. As of September 1998, DOD had not revised its 1994 rule or changed its direct ground combat definition. In addition to establishing the direct ground combat assignment rule in 1994, the Secretary of Defense also permitted the services to close positions to women if (1) the units and positions are required to physically collocate and remain with direct ground combat units, (2) the service Secretary attests that the cost of providing appropriate living arrangements for women is prohibitive, (3) the units are engaged in special operations forces’ missions or long-range reconnaissance, or (4) job-related physical requirements would exclude the vast majority of women. The military services may propose additional exceptions, with justification, to the Secretary of Defense. At the time of our review, about 221,000 positions, or about 15 percent of the approximately 1.4 million positions in DOD, were closed to servicewomen. About half of these are closed because of DOD’s policy to exclude women from positions whose primary mission is to engage in direct ground combat. Figure 1 shows the percentage and numbers of positions closed based on exclusion policies. Appendixes I and II provide more details on the numbers and types of positions closed by each service. As figure 1 shows, about 46 percent of the positions closed to women in the military services are associated with the direct ground combat exclusion policy. These positions, according to DOD officials, are in units whose primary mission is to engage in direct ground combat and include occupations in infantry, armor, field artillery, and special forces. 
The majority of these closures are in the Army, followed by the Marine Corps, and a small number in the Air Force. About 41 percent of the positions closed to women are attributed to the collocation exclusion policy. Units that collocate with direct ground combat units operate within and as part of those units during combat operations. For example, Army ground surveillance radar units, while not considered direct ground combat units, routinely operate with infantry and armor units on the battlefield. Because of the differences in roles, missions, and organization between the Army and the Marine Corps, however, some positions that are closed for collocation reasons in the Army may be closed for direct ground combat reasons in the Marine Corps, according to DOD officials. Cost-prohibitive living arrangements account for about 12 percent of the positions closed to women. These positions are exclusive to the Navy and are on submarines and small surface vessels like mine sweepers, mine hunters, and coastal patrol ships. The special operations forces and long-range reconnaissance missions exclusion policy accounts for almost 2 percent of all positions closed to women. These closures are in the Navy and the Air Force because the Army classifies most of its special operations forces as direct ground combat forces. During our review we found no additional exceptions or exclusions based on physical requirements. When DOD formalized its policy excluding women from direct ground combat positions in 1994, it adopted the primary elements of the Army’s ground combat exclusion policy as the DOD-wide assignment rule. According to DOD officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the prohibition on direct ground combat was a long-standing Army policy, and for that reason, no consideration was given to repealing it when DOD adopted the current assignment policy in 1994. 
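The closure shares cited above can be cross-checked with simple arithmetic. A minimal sketch follows (the category counts are illustrative approximations derived from the report's rounded percentages, not official figures):

```python
# Approximate breakdown of the ~221,000 positions closed to women,
# derived from the rounded percentages cited in the report.
total_closed = 221_000
shares = {
    "direct ground combat": 0.46,
    "collocation": 0.41,
    "cost-prohibitive living arrangements": 0.12,
    "special operations / long-range reconnaissance": 0.02,
}
counts = {category: round(total_closed * share) for category, share in shares.items()}

# The rounded shares should account for essentially all closures
# (they sum to 1.01 because each figure was independently rounded).
assert abs(sum(shares.values()) - 1.0) < 0.02

for category, n in counts.items():
    print(f"{category}: ~{n:,} positions")
```

Run against the report's figures, the direct ground combat share alone corresponds to roughly 100,000 positions, consistent with the statement that about half of all closures stem from that policy.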
Other reasons for continuing the ground combat exclusion policy were presented in a 1994 DOD news briefing announcing the opening of 80,000 new positions to servicewomen. At the briefing, defense officials said they believed that “integrating women into ground combat units would not contribute to the readiness and effectiveness of those units” due to the nature of direct ground combat and the way individuals need to perform under those conditions. The DOD official providing the briefing said that physical strength and stamina, living conditions, and lack of public support for women in ground combat were some of the issues considered. According to DOD, its perception of the lack of public support was partly based on the results of a survey done in 1992 for the Presidential Commission on the Assignment of Women in the Armed Forces. DOD documents also cited the Department’s lack of experience with women in direct ground combat and its observation of the experience of other countries as part of the rationale for continuing the exclusion of women from direct ground combat. As of September 1998, DOD had no plans to reconsider the ground combat exclusion policy because, in its view, there is no military need for women in ground combat positions, since an adequate number of men are available. Additionally, DOD continues to believe that opening direct ground combat units to women lacks congressional and public support. Finally, DOD cited military women’s lack of support for involuntary assignments to ground combat positions as another reason for continuing its exclusion policy. This lack of support has been documented in several studies of military women. For example, in a 1997 Rand Corporation study, done at the request of DOD, most servicewomen expressed the view that while ground combat positions should be opened to women, such positions should be voluntarily assigned. DOD provided the military services with a single definition of direct ground combat. 
The services use the definition to ensure a common application of the policy excluding women from direct ground combat units. To be considered a direct ground combat unit, the primary mission of the unit must include all the criteria of the direct ground combat definition. Specifically, DOD defines direct ground combat as engaging “an enemy on the ground with individual or crew served weapons, while being exposed to hostile fire and to a high probability of direct physical contact with the hostile force’s personnel.” In addition, DOD’s definition states that “direct ground combat takes place well forward on the battlefield while locating and closing with the enemy to defeat them by fire, maneuver, or shock effect.” According to ground combat experts, “locating and closing with the enemy to defeat them by fire, maneuver, or shock effect” is an accurate description of the primary tasks associated with direct ground combat units and positions. However, DOD’s definition of direct ground combat links these tasks to a particular location on the battlefield—“well forward.” In making this link, the definition excludes battlefields that may lack a clearly defined forward area. According to current Army and Marine Corps ground combat doctrine, battlefields are generally conceptualized to include close, deep, and rear operational areas. Close operations areas involve friendly forces that are in immediate contact with enemy forces and are usually exposed to the greatest risk. Direct ground combat units, along with supporting collocated units, primarily operate in the close operations area. Deep operations are focused beyond the line of friendly forces and are generally directed against hostile supporting forces and functions, such as command and control, and supplies. Rear operations sustain close and deep operations by providing logistics and other supporting functions. 
Several factors determine how the battlefield will develop during a military operation, including mission, available resources, terrain, and enemy forces. The phrase “well forward on the battlefield” in DOD’s definition, according to ground combat experts, implies that military forces will be arrayed in a linear manner on the battlefield. On this battlefield, direct ground combat units operate in the close operational area where the forward line of troops comprises the main combat units of friendly and hostile forces. Land battles envisioned in Europe during the Cold War were planned in a linear manner. Figure 2 depicts an example of a linear battlefield. Battlefields can also be arrayed in a nonlinear manner, meaning that they may have a less precise structure, and the functions of close, deep, and rear operations may have no adjacent relationship. On a nonlinear battlefield, close operations can take place throughout the entire area of military operations, rather than just at the forward area as in the linear organization. Recent military operations like Operation Restore Hope in Somalia and Operation Joint Endeavor in Bosnia involved nonlinear situations that lacked well-defined forward areas, according to ground combat experts. Figure 3 depicts an example of a nonlinear battlefield. Ground combat experts in the Army and the Marine Corps note that, in the post-Cold War era, the nonlinear battlefield is becoming more common. Should this trend continue, defining direct ground combat as occurring “well forward on the battlefield” may become increasingly less descriptive of actual battlefield conditions. We provided a draft of this report to the Office of the Secretary of Defense, the Army, the Air Force, the Marine Corps, and the Navy. The Office of the Secretary of Defense and the military services orally concurred with information presented in the report. 
Additionally, the Army, the Navy, and the Marine Corps provided technical comments, which we incorporated as appropriate. To identify the military occupations and positions closed to women, we reviewed data from the Army, the Marine Corps, the Navy, and the Air Force on current positions closed to women, the numbers associated with each closed position, and the justification for each closed position. Based on the information provided, we compiled the closed occupations and positions to determine the total number of positions closed and the justification for each. We discussed the currency of this information with officials from the Department of the Army, Deputy Chief of Staff for Personnel; Headquarters Marine Corps, Deputy Chief of Staff for Manpower and Reserve Affairs; the Department of the Navy, Bureau of Naval Personnel; and the Department of the Air Force, Deputy Chief of Staff, Personnel. During this review, we did not evaluate the military services’ decisions for closing certain positions or units to women. To identify DOD’s rationale for the exclusion of women from direct ground combat positions, we reviewed documents, including policy memorandums, congressional correspondence, and press briefings from the Office of the Under Secretary of Defense for Personnel and Readiness. We also interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, who provided information on the historical origins of the prohibition of women in direct ground combat. To determine the relationship of DOD’s definition of direct ground combat to current military operations, we reviewed Army and Marine Corps ground combat doctrine. Doctrine is developed from a variety of sources, including actual lessons learned from combat operations, and it provides a framework for military forces to plan and execute military operations.
We also interviewed ground combat doctrine officials at the Army’s Combined Arms Center, Fort Leavenworth, Kansas, and the Marine Corps’ Combat Development Command, Quantico, Virginia, and an expert from the Naval War College, Newport, Rhode Island. We did not evaluate the rationale the military services used to classify closures based on the Secretary of Defense’s approved justifications. To calculate the percentage of positions closed to women in the military services, we used the active duty authorized personnel end strength for fiscal year 1998. Authorized end strength is the maximum number of personnel authorized by Congress for a particular service. The Marine Corps, in some publications, may show a higher percentage of positions closed because it uses actual assignable positions to derive the percentage of positions closed to women. The actual strength, which is a measurement of personnel at a particular point in time, fluctuates throughout the year and can sometimes be lower than authorized personnel end strength. We conducted our review from March to September 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and Members of Congress; the Secretaries of Defense, the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to other interested parties upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. 
About 15 percent of all positions across the armed forces are closed to women because they (1) are in occupations that primarily engage in direct ground combat, (2) collocate and operate with direct ground combat units, (3) are located on ships where the cost of providing appropriate living arrangements is considered prohibitive, or (4) are in units that engage in special operations missions and long-range reconnaissance. Table I.1 shows the number of positions closed in each service and the exclusion justification. About 142,000 positions, or about 29 percent, of the Army’s fiscal year 1998 active force authorized personnel end strength of 495,000 are closed to women. About half of these closures are associated with occupations involving direct ground combat. These closures include the occupational fields of infantry, armor, and special forces. The remaining closures are in occupational specialties or units that are required to collocate and remain with direct ground combat units, including combat engineering, field artillery, and air defense artillery. Also, some occupational specialties in the petroleum and water, maintenance, and transportation career fields, for example, are considered open to women but are closed at certain unit levels because they collocate with direct ground combat units. About 43,400 positions, or about 25 percent, of the Marine Corps’ fiscal year 1998 active force authorized personnel end strength of 174,000, are closed to women. About two-thirds of the closures are in occupational fields involving direct ground combat, such as infantry, artillery and tank, and assault amphibious vehicles. The other third of the closures are in occupational specialties that are required to collocate and remain with direct ground combat units, such as counterintelligence specialists and low-altitude air defense gunners. 
In addition, some occupational specialties, such as landing support specialist and engineering officer, are generally open to women but are closed at certain unit levels because of collocation with direct ground combat units. About 2,300 positions, or less than 1 percent, of the Air Force’s fiscal year 1998 active force authorized personnel end strength of 371,577 are closed to women. About 69 percent of these are in occupations such as tactical air command and control, combat controller, and pararescue, which are involved with direct ground combat, according to Air Force documents. About 18 percent are closed because the Air Force places restrictions on assignments to aircrew positions in its helicopters that conduct special operations forces missions. About 13 percent of the closures are in certain weather and radio communications occupations because they collocate with ground combat units or special operations forces. Appendix II shows the career fields and occupations that are closed to women. Other occupations, for example in transportation, maintenance, and aviation, are generally considered open, but women may be restricted from assignment to them at various unit levels because these units collocate with direct ground combat forces. Major contributors to this report: Carol R. Schuster, William E. Beusse, Colin L. Chambers, Carole F. Coffey, Julio A. Luna, and Andrea D. Romich.
Pursuant to a congressional request, GAO reviewed various issues pertaining to the treatment of men and women in the armed forces, focusing on: (1) the numbers and types of positions that are closed to women and the associated justifications for closure; (2) Department of Defense's (DOD) current rationale for excluding women from direct ground combat; and (3) the relationship of DOD's definition of direct ground combat to current military operations. GAO noted that: (1) approximately 221,000 of DOD's 1.4 million positions are closed to women, who comprise about 14 percent of the armed services; (2) about 101,700 of these positions are closed based on DOD's policy of not assigning women to occupations that require engagement in direct ground combat; (3) the remaining 119,300 positions are closed because they are collocated and operate with direct ground combat units, are located on certain ships where the cost of providing appropriate living arrangements for women is considered prohibitive, or are in units that conduct special operations and long-range reconnaissance missions; (4) GAO found no positions closed to women because of job-related physical requirements; (5) DOD's current rationale for excluding women from direct ground combat units or occupations is similar to its rationale when it first formalized the combat exclusion policy in 1994; (6) at that time, DOD officials did not consider changing its long-standing policy because they believed that the integration of women into direct ground combat units lacked both congressional and public support; (7) furthermore, transcripts of a 1994 press briefing indicate that DOD officials believed that the assignment of women to direct ground combat units would not contribute to the readiness and effectiveness of those units because of physical strength, stamina, and privacy issues; (8) at the time of GAO's review, DOD had no plans to reconsider the ground combat exclusion because, in DOD's view: (a) there is no military
need for women in ground combat positions because an adequate number of men are available; (b) the idea of women in direct ground combat continues to lack congressional and public support; and (c) most servicewomen do not support the involuntary assignment of women to direct ground combat units; (9) DOD's definition of direct ground combat includes a statement that ground combat forces are well forward on the battlefield; (10) this statement, however, does not reflect the less predictable nature of emerging post-Cold War military operations that may not have a well-defined forward area on the battlefield; and (11) if this trend continues, DOD's definition of direct ground combat may become increasingly less descriptive of actual battlefield conditions.
In general, the 14 island nations in the Pacific Ocean that we reviewed face significant development challenges. With few exceptions, such as Papua New Guinea and Fiji, the island nations have small economies and limited natural resources, and most are highly vulnerable to natural disasters, environmental problems, and the impacts of climate change. Their remote location, poor access to commercial and capital markets, and limited institutional capacity hinder economic development. In many islands, the public sector is disproportionately large, the private sector is poorly developed, and there is a shortage of trained personnel to meet development challenges. Finally, rapid urbanization, population growth, and inadequate infrastructure are outstripping the countries’ ability to meet basic health and education needs. (See fig. 1 for a map of most of the island nations and territories in the Pacific region.) Virtually all of the Pacific Island nations, including the Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI), receive development assistance. (See app. III for a description of the recipient nations and the assistance they receive.) In addition, at least seven island territories in the Pacific (including New Caledonia) receive direct government assistance from their associated governments and, in some cases, a small amount of development assistance from other donors. Five of the small island nations—Kiribati, Samoa, the Solomon Islands, Tuvalu, and Vanuatu—are “least developed countries,” according to the United Nations, meaning that they have special development needs. In 1986, the United States entered into a Compact of Free Association with the FSM and the RMI, both of which were part of the U.N. Trust Territory of the Pacific Islands administered by the United States. The United States agreed, in part, to provide economic assistance to these countries to help them in their efforts to become economically self-sufficient. 
A portion of the Compact assistance to the RMI is also used for payments to landowners related to the U.S. military presence at the Kwajalein Atoll. The Department of the Interior has responsibility for administering economic assistance to the two countries. This funding represented a continuation of U.S. financial support that had been supplied to these areas for almost 40 years after World War II. The two nations have also received support in the form of direct government services, such as U.S. Postal Service and National Weather Service assistance, and grants and loans from U.S. domestic agencies. From fiscal years 1987 through 2001, total U.S. support to the islands—Compact assistance and other U.S. assistance—is estimated at more than $2.6 billion. The economic assistance provided to the two countries through the Compact of Free Association expires in late 2001. However, the Compact provides funding for an additional 2 years if negotiations on further assistance are not completed by that time. In June 2000, the Department of State’s negotiator for the Compact of Free Association testified that the general approach to the new negotiations with the FSM and the RMI includes sector grants and trust fund contributions, in place of the financial transfers provided in the first 15 years of the Compact, to improve accountability for the use of funds. State concurred with our finding that the FSM, the RMI, and the United States provided limited accountability over Compact expenditures from 1987 to 1998. From 1987 through 1999, the seven top donor countries and organizations provided about $11 billion, or 93 percent, of all development assistance to help Pacific Island nations. The bilateral donors generally targeted their assistance to a few recipients, while the multilateral donors distributed aid more broadly to member nations in the region.
Five bilateral donors—Australia, Japan, New Zealand, the United Kingdom, and the United States—and two multilateral donors—the Asian Development Bank (ADB) and the European Union (EU)—provided about $11 billion in official development assistance to Pacific Island nations between 1987 and 1999, according to our review of data from the OECD and annual financial audits of the FSM, the RMI, and Palau. Figure 2 shows the top donors and the amount of total assistance provided to the Pacific region from 1987 through 1999. The major donor countries and organizations varied widely in how they distributed their development assistance among recipients from 1987 to 1999. The major bilateral donors, except for Japan and New Zealand, have concentrated their assistance on relatively few Pacific Island nations. For example, between 1987 and 1999, about 75 percent of Australia’s assistance to the region went to Papua New Guinea, which is the largest country in the region, and about 91 percent of U.S. assistance went to the FSM, the RMI, and Palau. The multilateral donors also concentrated on a few recipients. The EU and the ADB gave about 65 percent and 55 percent of their assistance, respectively, to their top two recipients—Papua New Guinea and the Solomon Islands for the EU and Papua New Guinea and Samoa for the ADB. Two major donors, the United Kingdom’s Department for International Development and the U.S. Agency for International Development (USAID), significantly reduced their presence in the Pacific in the 1990s. The programs of two other donors, New Zealand and Japan, were under review in those countries at the time we prepared this report. The purpose of the New Zealand review is to examine how the development assistance program can best meet the long-term development needs of the recipients, given that most of the recipient countries will be dependent on aid indefinitely.
The purpose of the program review in Japan is to look for opportunities to improve Japan’s budget deficit. Many Pacific Island nations are dependent on a single donor for most of their assistance. Seven of the 14 recipient countries received more than 50 percent of their aid from a single donor from 1987 through 1999. For example, the FSM and the RMI received 93 percent and 89 percent of their assistance, respectively, from the United States, according to our analysis. In addition, aid is concentrated between donors and recipients linked by free association agreements. The five Pacific Island nations with free association status—the FSM, the RMI, and Palau, which are freely associated with the United States, and Niue and the Cook Islands, which are freely associated with New Zealand—received an average of 84 percent of their aid from their top donor, while the other seven recipients received an average of 37 percent of their aid from their top donor. According to documents of the major donors, their principal development objectives are to alleviate poverty in the region and help the island nations become more self-sufficient. To achieve these objectives, Australia, for example, focuses its assistance in the Pacific on education and training, economic reform and governance, health, environment and natural resources, and private sector development. In 1998-99, Australia allocated 35 percent of its aid for education and training, 20 percent for economic reform and governance, 15 percent for health, 15 percent for environment and natural resources, 5 percent for private sector development, and 10 percent for other areas. Similarly, to achieve its objectives, New Zealand supports projects around six strategies: security and governance, civil society, gender equality, social development, the environment, and business. 
Finally, as another example, the ADB is tackling poverty through promoting programs, such as public sector reform programs, in its Pacific member countries. Since 1995, the ADB has undertaken reform programs in seven Pacific Island nations for macroeconomic stabilization, good governance, public sector efficiencies, and private sector development. The major donors recognize that their choice of assistance strategies must address long-term aid dependence by many Pacific recipients and trade-offs involving multiple objectives for assistance, costs, effectiveness, and accountability. Within this environment, the donors have tried several strategies to achieve their development objectives, such as incorporating flexibility and relying on trust funds. (See app. IV for further information on trust funds.) Economic self-sustainability will be a difficult challenge for many Pacific Island nations and is not a realistic goal for the smaller and more remote countries, according to officials at and documents from the Australian Agency for International Development, the Japan Ministry of Foreign Affairs, the New Zealand Ministry of Foreign Affairs and Trade, and the ADB. The officials expect that, under the best circumstances, most countries will need assistance for the foreseeable future to achieve improvements in development. According to an ADB report, “[I]t is widely understood that the smallest and least-endowed island states will need to be assisted by free transfers of resources indefinitely, if they are to maintain standards of welfare that the donors of the aid can bear to look at.…” Two major donors—the United Kingdom Department for International Development and USAID—chose to cut their bilateral programs significantly in the 1990s, due to changed priorities and agency budgetary reasons.
The United Kingdom switched from a bilateral program to a regional program in 1995 that focused on three countries—Kiribati, the Solomon Islands, and Vanuatu—where the need was greatest. According to a Department for International Development official at the regional office in Fiji, the United Kingdom now expects to end the regional program by 2004, as part of a worldwide change in the agency’s focus, and will provide support to the region through multilateral donors. USAID ended its bilateral program in the South Pacific in 1994, due to agency budgetary reasons, and now provides modest assistance for a regional environmental program. In providing assistance to the Pacific, most of the major donors combine their development interests with other motivations, according to officials and documents of the donor agencies. These other motivations include historic ties between the donor and the recipient (such as former dependencies), foreign policy interests, and strategic interests. Australia’s large commitment of assistance to Papua New Guinea, for example, responds to development needs in the country but also reflects the historical relationship and the development assistance program as agreed through a treaty with its former territory. For New Zealand, the development assistance program is one pillar of its foreign policy and is intended to contribute to stability and harmony in the South Pacific. Finally, U.S. assistance to the FSM and the RMI, through the Compact of Free Association, is one of three elements (political, economic, and defense) of the Compact. The defense element includes a right granted to the United States by the FSM and the RMI to deny access by third countries for military use. While multiple motivations do not inherently conflict with development interests, other interests, in some cases, have taken precedence over the effectiveness and accountability of the development assistance. 
According to Australian, New Zealand, and State officials, for example, the donor countries initially chose to provide unrestricted budget support to former territories as a means of separating themselves from colonialist administration. In the case of the Compact with the FSM and the RMI, State counseled Interior to be lenient in reviewing the use of Compact funds in the early years of the Compact because State placed a high priority on maintaining friendly relations with the FSM and the RMI. By 1993, however, the United States began placing greater emphasis on the effectiveness and accountability of the assistance due, in part, to the end of the Cold War. Finally, according to Ministry of Foreign Affairs officials, Japan generally selects development projects from the requests of Pacific Island nations. The criteria for evaluating a specific request include, for example, the extent to which the project will be seen as a Japanese contribution but do not include an evaluation of the project need or sustainability. In addition to recognizing that their development assistance may be intended to achieve multiple objectives, the donors have used a range of assistance strategies in striving to reach a desired balance of aid effectiveness, accountability, and administrative cost. The donors have used at least six different strategies to deliver their development assistance to the Pacific Island nations. These strategies include technical assistance, such as the ADB’s funding of the economic advisory team in the FSM; project assistance, such as Japan’s road improvement projects in the RMI; program assistance, such as the U.S. 
Department of Labor’s job training program in the FSM; budget support, such as New Zealand’s support for government operations; sectorwide programs, such as Australia’s pilot program to support the health sector in Papua New Guinea; and contributions to trust funds, such as Australia’s, New Zealand’s, and the United Kingdom’s contributions to the Tuvalu Trust Fund, which is intended to provide self-sustaining revenue. These strategies often provide different levels of donor control over their assistance, according to officials with USAID, the New Zealand Ministry of Foreign Affairs and Trade, and the Australian Agency for International Development. Technical assistance and project assistance, for example, enable donors to exercise a high level of control and accountability by participating directly in funded activities, while unrestricted budget support and some forms of trust fund contributions allow donors little or no control over their assistance because the donor is only providing cash to the recipient. Reduced control over assistance is associated with more uncertainty in achieving the aid objectives and ensuring accountability. Yet, donors also acknowledge that the higher level of control involves greater administrative costs. Thus, there are trade-offs between donor control and costs, on one side, and expected effectiveness and accountability, on the other. The following examples illustrate these trade-offs: New Zealand and Australia have cut the amount of budget support they provide as an assistance strategy in an effort to improve the effectiveness of their assistance. They found from their experience that (1) budget support did not achieve the intended development objectives and (2) these funds were largely unaccounted for.
In 1997, New Zealand eliminated all of its annual budget support to the Cook Islands and focused on technical and project assistance after New Zealand found that the Cook Islands government was misusing funds and had built a large and inefficient public sector. Similarly, the Australian Agency for International Development gradually eliminated its annual budget support to Papua New Guinea from 1990 to 2000 and replaced it with more than 100 separate project grants, because Australia could not identify specific development benefits linked to its cash transfer. At that time, Australia believed that project assistance, which it refers to as “jointly programmed activities,” would give it more control over development activities. According to an Australian official, delivering budget support to Papua New Guinea required only 1 to 2 staff in 1990; but, by 2000-01, the Australian program supported more than 100 projects and required 73 staff from the Australian Agency for International Development, 30 Papua New Guinea staff, and at least 1 contractor for each project. The Australian Agency for International Development, the New Zealand Ministry of Foreign Affairs and Trade, and USAID have also faced trade-offs in adopting policies on the use of development assistance to pay for recurring expenses—that is, the annual government operations and maintenance costs—to improve the effectiveness of their aid. On the one hand, the donors are concerned that providing assistance for recurring expenses provides a disincentive to recipients to become more self-sufficient, and that recipients may choose to use assistance to pay for operating costs that are not related to development. The donors noted that recipients might decide to defer maintenance with the expectation that donor assistance will always be available.
On the other hand, the donors are concerned that the projects they helped develop are not maintained or staff and supplies are not provided, and, thus, the assistance does not have a sustainable impact. The FSM, for instance, depends on U.S. assistance to meet 98 percent of its educational operating expenses, according to a November 2000 ADB report on a proposed loan to the FSM. Officials at Interior said the Compact economic assistance was expected to pay for recurring expenses as well as program expenses and capital improvements. To address their concerns, Australia and New Zealand adopted a joint policy in 1992 to define acceptable and unacceptable uses of assistance for recurring expenses. In addition, USAID has a policy on recurrent cost problems, which calls for funding of recurrent costs under narrow conditions, such as having a carefully phased plan for shifting the cost burden to the recipient government. The major donors are exploring or have adopted assistance strategies designed to improve aid effectiveness while reacting to the context of providing aid in the Pacific region—long-term aid dependence; trade-offs among multiple motivations for assistance; and trade-offs to balance cost, effectiveness, and accountability. Australia, Japan, New Zealand, the United Kingdom, and the ADB have adopted strategies that promote the development of good governance policies in the recipient countries. This emphasis follows the widely accepted principle that aid is more effective in countries with good policy environments in place. According to the Australian Agency for International Development, “good governance” means competent management of a country’s resources and affairs in a manner that is open, transparent, accountable, equitable, and responsive to people’s needs. Australia, for example, supports efforts to develop a rule of law. 
The New Zealand Ministry of Foreign Affairs and Trade believes that good governance is critical to dealing with such issues as drug trafficking, money laundering, Internet scams, and migratory diseases. The ADB, as one example of a donor that embraces this principle, shifted its strategy in 1995 to focus on economic policy and good governance issues. Between 1995 and 1998, the ADB supported reform efforts in seven Pacific Island nations to improve policy environments, including fiscal reform programs in the FSM and the RMI, which led to reductions of 37 percent and 33 percent, respectively, in the size of their public sectors. In 2000, the ADB adopted a new development strategy for the Pacific that takes a subregional approach, underscoring the differences between various Pacific Island nations. The ADB strategy separates the island nations into three categories that are based on the nations’ resource profiles and their growth prospects. For example, the ADB lists the RMI and the FSM in different categories. The strategy for the RMI, which is an island atoll nation with severe development disadvantages, emphasizes the use of trust funds to support sustainable financing of basic services and development of niche markets such as tourism. In contrast, the strategy for the FSM, which falls into the category of countries with a higher skill base, good growth prospects, and moderate resource potential, focuses on physical infrastructure and private sector development to promote economic growth. According to the ADB’s strategy, the implementation of its previous strategy in 1996 provided several lessons, including the need (1) for the Pacific Island nations to have stronger ownership of policy and reform programs and (2) to design development strategies that take account of local cultures and capacities. Flexible strategies are allowing donors to use their assistance as incentives and disincentives.
Australia recently created two development incentives within its strategies that can provide funds for activities outside the annual program plan. One incentive, a fund for Papua New Guinea, has two components: (1) a policy component to encourage and reward the effective implementation of the government’s development policy and (2) a program component to fund organizations that have track records of good program management. Another incentive, the Policy and Management Reform initiative in the Pacific, allocates funds competitively to countries on the basis of demonstrated commitment to reform. Australia provided assistance from the fund to Vanuatu, for example, to reinforce a new government’s commitment to economic and public sector reform. Flexibility in their strategies also enabled Australia and New Zealand to stop delivering assistance under undesirable circumstances. New Zealand, for example, suspended funding to the governments of Fiji, in response to a coup, and to the Solomon Islands, in response to civil unrest, while maintaining the assistance to community organizations so that aid for basic human services could continue. Australia also suspended some of its funding to Fiji due to political unrest and, in the Solomon Islands, refocused its aid on peace, security, and basic needs when ethnic conflict disrupted the country and the delivery of the aid program in mid-2000. According to New Zealand officials, flexibility is key in selecting an assistance strategy, because it allows donors to adjust programs over time as priorities and development needs change. Australia’s Pacific Islands Development Strategy, 1999-2001 recommends that the donor avoid locking in commitments to rigidly designed projects to minimize the risk that the donor is not able to adjust the aid when priorities or other critical circumstances change.
All of the major donors have highlighted their donor coordination efforts to improve the efficiency of delivering assistance and reduce the burden of multiple donor requirements on recipients. Australia and New Zealand, for example, are studying options for harmonizing their programs to streamline their own operations and to increase their overall effectiveness. They believe that harmonization could also minimize the impact of multiple donor requirements on the recipient government. According to a World Bank official, the different reporting and other requirements of up to 18 different organizations providing assistance to the health sector in the Solomon Islands are stretching the capacity of the Solomon Islands government. In addition, the ADB hosts regular donor consultation meetings to discuss the development assistance needs of individual recipients and to coordinate assistance to avoid duplication. Finally, Australian officials said donor coordination is most effective if the recipient countries lead the coordination effort. Where the countries lack the capacity to lead the coordination effort, the donors should assist them to strengthen the coordination functions. However, according to the Australian officials, some recipient countries play donors off of each other to increase the amount of assistance, and, thus, they have limited interest in closer donor coordination. Six donors have set up or contributed to trust funds in the Pacific as a means of providing recipients with a sustainable source of revenue and, in one case, ending annual bilateral assistance. The ADB’s Millennium Strategy for the Pacific suggests that trust funds may be an appropriate assistance strategy for bilateral donors to provide to atoll nations, such as Kiribati and the RMI, which have few natural resources and little potential for economic growth. 
A 1999 report prepared for the ADB noted that two trust funds in the Pacific, the Tuvalu Trust Fund and the Kiribati Revenue Equalisation Reserve Fund, have been successful but that several others were less successful due, in part, to fraud, poor management, unclear objectives, and risky investments. The report stated that the Tuvalu and Kiribati funds were successful primarily because they were designed to protect the investment capital from misuse. As a result of its contribution to the Tuvalu Trust Fund in 1987, the United Kingdom ceased its annual budget support to Tuvalu, because the trust fund provided the means to balance the budget. According to the consultant who prepared the report to the ADB, other funds, such as the Nauru Phosphate Royalties Trust, have been less successful, because the funds were not designed to ensure good management and to protect the fund’s capital from being spent. The consultant believes that a lesson learned from his review of trust funds is that a well-designed trust fund can help recipient countries reduce their aid dependency levels and become more self-reliant. (See app. IV for more discussion about trust funds.) In 1999, Australia began testing a new approach for delivering assistance, called a “sectorwide approach,” after it found that the cost of managing the project assistance in Papua New Guinea was too high. To reduce its administrative costs while trying to maintain aid effectiveness, Australia adopted the sectorwide approach to deliver assistance to the health sector in Papua New Guinea. Through this pilot project, Australia began moving from a portfolio of 16 individual health projects to cofinancing (with other donors) of sectorwide projects and programs identified in Papua New Guinea’s national health plan. In exchange for giving up control over the projects, Australia gained a voice in developing the national strategy and allocating resources for health projects. 
The approach also encouraged Papua New Guinea to become a major stakeholder in the development process, following the widely held principle that aid is more effective when developing countries determine their own needs and strategies for meeting them. The Australian pilot project is small scale and has not yet been evaluated. According to an Australian official, the sectorwide approach will cover about 25 percent of all of Australia’s assistance to Papua New Guinea by 2002. Australia is also considering sector-based approaches for education in Kiribati and, eventually, for health care in the Solomon Islands. Our review of the lessons learned from the major donors’ experiences in the Pacific could provide some guidance to the United States as it negotiates further economic assistance to the FSM and the RMI. These lessons deserve attention because the current U.S. assistance to the two countries and the proposed approach for future assistance through the Compact of Free Association often contrast with the other major donors’ experiences, as discussed in the following points: Assistance Strategies May Involve Trade-offs in Expectations of Aid Effectiveness If the Main Motivation for Assistance Is Not Development. Donor strategies demonstrate that the effectiveness of the assistance in achieving the development objectives can depend on the principal motivation for providing the assistance. Often, donors have multiple motivations for providing assistance, such as historical links, which could have different standards for effectiveness and accountability. For example, the U.S. priority on maintaining friendly relations with the FSM and the RMI during the early years of the Compact, in order to protect strategic interests in the region, contributed to limited accountability requirements for the financial assistance and the degree of oversight. Assistance Strategies Involve Trade-offs Between Cost, Effectiveness, and Accountability. 
In general, choosing a strategy involves balancing donor interests in aid effectiveness and accountability with the higher administrative costs of donor involvement. When donors try to control their assistance to ensure effectiveness and accountability, their costs of administering the assistance increase. In the current Compact, the United States chose a low administrative cost strategy of providing relatively unrestricted cash transfers, which led to problems with the effectiveness of and accountability for the assistance. The proposal for new Compact assistance, according to a report prepared by an official in State’s Office of Compact Negotiations, would provide financial assistance, in part, through six sectoral grants—health, education, infrastructure and maintenance, private sector development, capacity building, and the environment—each of which would have its own planning, monitoring, and reporting requirements. State and Interior officials have said that the United States will need significantly more staff to administer the proposed sectorwide grants to the FSM and the RMI than they currently have. Effective Assistance Depends on a Good Policy Environment in the Recipient Country. A common theme running through the major donors’ assistance programs is the emphasis on good governance as a necessary condition for effective and sustainable development. As the ADB noted, “[I]t is important that the Bank first assist to get their economic policy and governance environments right, thus ensuring that follow-up sector and project investments achieve due returns.” The United States also embraced this emphasis by supporting ADB technical assistance and reform programs for the FSM and the RMI, such as the Economic Management and Policy Advisory Team in the FSM. The Compact negotiator told the Congress that his approach for further assistance would include providing targeted grants for good governance and capacity building. 
Strategies Tailored to Specific Island Conditions May Be More Effective by Inviting Greater Recipient Ownership of the Program. Assistance strategies designed to reflect the diversity of the Pacific Islands may offer more potential to achieve economic growth than strategies that are not adapted to the recipient’s needs and ability to participate in the development process. The ADB’s new subregional approach to assistance, which is based on differences in resources and growth potential, highlights the need for accommodating the different needs of the Pacific Island nations and suggests different strategies for the FSM and the RMI. By addressing local needs and accounting for local cultures, the assistance strategies are more likely to ensure the political commitment of the recipient and are more likely to achieve successful outcomes. The ADB’s approach contrasts with the current structure of Compact assistance for the FSM and the RMI, which generally applies the same objectives and strategies for the two countries. Flexible Strategies Are Important to Adapt Assistance to Changing Circumstances and Needs. Flexibility in assistance strategies is enabling the donors to respond to changing conditions in the Pacific. Flexibility not only allows donors to curtail assistance if the funds are not used effectively or properly, but it also permits donors to (1) adjust strategies to meet changing needs, such as transferring resources from one sector to another, and (2) provide rewards or incentives for good performance. The United States’ assistance to the FSM and the RMI through the first 15 years of the Compact was distributed according to a negotiated formula that did not allow changes in the distribution of the funds. 
Moreover, Interior officials believed that the provision of assistance with the “full faith and credit” of the United States, combined with a lack of controls typically available with domestic grant assistance, severely limited the ability to change funding levels, even in cases of misuse of funds. Well-designed Trust Funds Can Provide a Sustainable Source of Assistance and Reduce Long-term Aid Dependence. Successful trust funds in the Pacific can be designed to maintain and protect the fund value through prudent investment and management. Independent economic advisers, as required in the Tuvalu Trust Fund agreement, can also provide guidance to the government on the most effective use of fund proceeds. If the funds produce sufficient annual revenue to meet recipient budget needs and the revenues are used wisely, as has been the case with the Tuvalu Trust Fund, donors may have opportunities to reduce their annual assistance levels. The Compact negotiator has discussed similar trust funds for the FSM and the RMI in his approach for further assistance. According to FSM and RMI officials, the two countries have adopted their own trust fund agreements and anticipate using the agreements to invest the funds from future Compact assistance. Sectorwide Approaches Depend on Recipient Governments’ Commitment and Ability. Although Australia’s sectorwide approach has only been tested on a small scale in the Pacific and has not yet been evaluated, the extensive literature on sectorwide approaches in Africa and other locations suggests that the approaches are effective only under certain conditions. These include operating in sectors where there is an agreement among donors and recipients regarding the need for a government role in the financing, planning, and delivery of services. 
Moreover, a review of sectorwide approaches around the world found that such approaches are more effective when they correspond to the budget responsibility of a single sector, such as education and health, rather than sector programs for crosscutting themes, such as the environment. The Compact negotiator said that the approach for providing further assistance includes financial assistance in the form of sector grants to the FSM and the RMI in place of the cash transfers of the current Compact. In addition, three of the six sectors identified in the negotiator’s proposal—private sector development, the environment, and capacity building and good governance—are crosscutting sectors. We received comments from the Republic of the Marshall Islands and the Federated States of Micronesia. These governments generally sought greater discussion regarding the nature of the Compact relationship and the recognition of the unique nature of their countries. Their comments and our responses can be found in appendixes VI and VII. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to interested congressional committees and to the Honorable Gale A. Norton, the Secretary of the Interior; the Honorable Colin L. Powell, the Secretary of State; His Excellency Leo A. Falcam, President of the Federated States of Micronesia; and His Excellency Kessai Note, President of the Republic of the Marshall Islands. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-4128. An additional GAO contact and staff acknowledgments are listed in appendix VIII. 
The Chairman of the House Committee on Resources; the Ranking Minority Member of the House Committee on International Relations; the Chairman of the Subcommittee on East Asia and the Pacific, House Committee on International Relations; and the Honorable Doug Bereuter, House of Representatives, asked us to assist the Congress in its consideration of future economic assistance for the Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI) through the Compact of Free Association. Specifically, our objectives were to (1) identify the major donors of development assistance to the Pacific Island nations and their objectives, (2) discuss the donors’ development assistance strategies and the factors or experiences that influence their choice of strategies, and (3) report lessons from the other donors’ assistance strategies that could be useful for U.S. consideration. To identify the major donors in the Pacific Islands, we obtained and reviewed the annual development assistance statistics from 1987 through 1999 as reported by the Development Assistance Committee of the Organization for Economic Cooperation and Development (OECD). The committee’s database allowed us to identify the official development assistance provided by members of OECD’s Development Assistance Committee and multilateral donors to each Pacific Island nation. We relied on the committee’s conversion of official development assistance into 1998 U.S. dollars. Our analysis of the committee’s data found several inconsistencies, such as no reported development assistance to the FSM and the RMI before 1991. To resolve the problems that we found in the committee’s data, we relied on our analysis of the annual financial audits for the FSM, the RMI, and Palau to determine the U.S. assistance levels to those countries. From our analysis of U.S. 
assistance to the FSM and the RMI, we know that the official development assistance excludes assistance such as educational Pell Grants given directly to students and U.S. Department of Agriculture housing loans. We included official development assistance provided to territories in the Pacific, when reported by the committee, but we excluded the portion of the assistance that the territories received from their national governments because it often is a transfer of domestic funds. For example, we did not include the amount that New Zealand gave to Tokelau because Tokelau is a territory of New Zealand; however, we did include the amount of assistance that Australia gave to Tokelau. The committee’s database did not report official development assistance to some territories, such as American Samoa. Finally, we did not include development assistance data from other known donors, such as China and Taiwan. The committee’s database does not report their assistance because the countries are not members of OECD’s Development Assistance Committee. Despite our attempts to collect data from China and Taiwan, these countries were unwilling to provide the information. China and Taiwan may be significant donors; one news article, for example, mentioned that China gave more than $150 million in untied grant aid to Papua New Guinea in 2000, which was nearly the same as Australia’s annual assistance of $167 million. To identify the donor objectives, we reviewed recent development planning documents and interviewed officials from the Australian Agency for International Development; the Japan Ministry of Foreign Affairs, Economic Cooperation Bureau and European and Oceanian Affairs Bureau, and the Japan International Cooperation Agency; the New Zealand Ministry of Foreign Affairs and Trade, Official Development Assistance agency; the United Kingdom Department for International Development; the U.S. Agency for International Development (USAID); the U.S. 
Department of the Interior, Office of Insular Affairs; and at major multilateral donor agencies (the World Bank, the Asian Development Bank (ADB), the European Union (EU), and the United Nations Development Program). To collect information on the recipient countries in the Pacific, such as population, gross domestic product, and geographic characteristics, we relied on data from the World Bank, the United Nations Development Program, the Secretariat of the Pacific Community, the Bank of Hawaii, and the U.S. Central Intelligence Agency’s World Factbook 2000. We found that these data were often missing or were based on estimates. Through conversations with donors, we found that the lack of reliable statistics is widely accepted. To verify the integrity of the data, we (1) checked the reliability of data sources with multilateral agencies, such as the ADB; (2) cross-checked the information among various sources reporting Pacific Island data, such as comparing gross domestic product figures among the World Bank, the United Nations Development Program, and the Bank of Hawaii; and (3) used our best judgment. To identify the major donors’ assistance strategies, we relied on information in donor documents and interviews with donor officials. To identify and explain the major donors’ experiences in their choice of strategies, we relied on the donor documents and our meetings with officials at the bilateral and multilateral donors. From this review of documents and the interviews, we identified specific assistance strategies, the reasons for choosing specific strategies, and examples of the effectiveness of the strategies. Relying on State’s June 2000 testimony on its approach to negotiations with the FSM and the RMI, we narrowed the range of experiences identified in interviews and documents in selecting the experiences for discussion in this report. 
For our analysis of donor experiences with sectorwide approaches, trust funds, and good governance, we also relied on general reports and literature on development assistance to support the donor information. To report the lessons learned from the donors’ experiences, we identified common themes that were potentially relevant to economic assistance to the FSM and the RMI. From this analysis, we developed some observations in the form of lessons learned from the donors’ experiences. We also obtained information from USAID officials and documents about that agency’s experiences in providing development assistance and in ending its Pacific program. Finally, we collected documents from and met with officials from the State’s Office of Compact Negotiations and Interior’s Office of Insular Affairs to identify issues related to the negotiations of future economic assistance to the FSM and the RMI. We conducted our work from August 2000 through May 2001 in accordance with generally accepted government auditing standards. From 1987 through 1999, more than 22 countries and 13 multilateral organizations provided almost $12 billion (in 1998 U.S. dollars) in development assistance to the Pacific region. The amount of assistance ranged from a single donation of $10,000 by Spain to total donations by Australia of more than $3.8 billion. Major bilateral donors, such as Australia ($3.8 billion), the United States ($3.1 billion), Japan ($1.6 billion), New Zealand ($685 million), and the United Kingdom ($394 million), accounted for nearly 81 percent of the total development assistance to the region. The U.S. share was almost 26 percent of the total. Donations from the EU ($900 million) and the ADB ($506 million) constituted more than 80 percent of the aid from multilateral organizations and close to 12 percent of overall assistance. Together, these seven major bilateral and multilateral donors were responsible for almost 93 percent of the development assistance to the region. 
Other bilateral and multilateral donors contributed about 7 percent of the assistance. Table 1 provides a side-by-side comparison of these five major bilateral donors, their development agencies and stated development objectives, and a list of the countries receiving more than 10 percent of the donor’s aid to the region. Table 2 provides similar comparative information for the two major multilateral donors. Between 1987 and 1999, donors provided about $11.9 billion in development assistance to 14 sovereign nations and 5 territories in the Pacific region, according to data reported by the OECD’s Development Assistance Committee. The recipients ranged from Papua New Guinea, with almost 70 percent of the region’s 6.4 million residents, to Niue, with 1,500 residents. This appendix presents information on the characteristics of the sovereign island nations, data on their development assistance, and information on how they compare with other islands in the region. The Pacific Island countries vary substantially in size, population, and geographic composition. Fiji, with a land area of 7,055 square miles and a population of 785,000, is very different from the remote and small, low-lying atolls of Kiribati, which encompasses only 266 square miles and has a population of 85,100. These islands also span a wide range in terms of their human development. The United Nations Development Program created the Human Development Index to measure development progress in three dimensions—life expectancy, educational attainment, and per capita gross domestic product (GDP)—and to show where each country stands on a scale from 0 to 1, with 1 being the highest score. The FSM and the RMI are in the bottom half of a list of Pacific Island nations, according to their index scores in 1998, despite their high GDP per capita. 
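The index arithmetic behind these scores can be made concrete. The sketch below follows UNDP's pre-2010 formula, in which each dimension is normalized between fixed goalposts and the three results are averaged; the goalpost values are UNDP's published ones, but the sample inputs are purely illustrative, not figures for any Pacific Island nation.

```python
import math

# Sketch of the UNDP Human Development Index described above (pre-2010
# formula). Goalpost values follow UNDP's published methodology of that
# era; the sample inputs below are illustrative only.

def dimension_index(value, goal_min, goal_max):
    """Normalize a raw value onto a 0-1 scale between fixed goalposts."""
    return (value - goal_min) / (goal_max - goal_min)

def hdi(life_expectancy, adult_literacy_pct, gross_enrolment_pct, gdp_per_capita):
    life = dimension_index(life_expectancy, 25.0, 85.0)
    # Educational attainment: two-thirds adult literacy, one-third enrolment.
    education = (2.0 * dimension_index(adult_literacy_pct, 0.0, 100.0)
                 + dimension_index(gross_enrolment_pct, 0.0, 100.0)) / 3.0
    # Income enters through a logarithm (diminishing returns to income).
    income = dimension_index(math.log(gdp_per_capita),
                             math.log(100.0), math.log(40000.0))
    return (life + education + income) / 3.0  # equal weight per dimension

# Illustrative inputs: 68-year life expectancy, 81% literacy,
# 71% enrolment, $2,000 GDP per capita.
print(round(hdi(68.0, 81.0, 71.0, 2000.0), 3))  # 0.664
```

Because each dimension carries equal weight, a country with a relatively high GDP per capita but weaker life expectancy or educational attainment can still land in the bottom half of the index, as the report notes for the FSM and the RMI.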
Table 3 displays basic information on development assistance recipients’ land area, geographic characteristics, population, political status, per capita GDP, and Human Development Index. While Palau ranks the highest (0.861) and Papua New Guinea the lowest (0.371) in the Human Development Index, both the FSM and the RMI fall closer to the middle of the index, reporting 0.569 and 0.563, respectively. This section describes development assistance over time, identifies each recipient’s major donors and how much assistance is provided, compares differences in the amount of assistance per capita among recipients, and analyzes the role of development assistance in the economy by measuring assistance as a percentage of GDP. Between 1987 and 1999, the FSM and the RMI each received substantially higher amounts of assistance—$1.8 billion and $873 million, respectively— than other recipients, except for Papua New Guinea, which received $4.4 billion in assistance. Figure 3 shows the total amount of assistance received by the major Pacific Island recipients for 1987 through 1999. The assistance per capita varied widely for the 14 Pacific Island nations. Table 4 shows that Niue received more than $2,700 in assistance per capita in 1998, while Fiji received $46 in assistance per capita. The median assistance per capita was $680. The RMI and the FSM ranked third and fourth, respectively, among the recipients. Seven of the recipient countries have received more than half of their assistance from a single donor. The RMI, the FSM, Nauru, and Palau, for example, have received at least 87 percent of their 1987 through 1999 assistance from a single donor. Table 5 lists the five largest donors, and their share of total assistance, for each recipient country for 1987 through 1999. Finally, figure 4 provides information on the proportion of aid that makes up each country’s GDP. In 6 of 13 countries, aid constitutes 20 percent or more of their GDP. 
For example, the FSM and the RMI rely heavily on development assistance—more than 50 percent of their GDP—to sustain their economies. Although there are great differences among the size, population, geographic characteristics, economic development, social indicators, and other features of Pacific Island recipients of development assistance, the ADB has classified the island nations according to their development conditions and recommended assistance strategies for each type of island classification. The ADB places its member countries into three categories on the basis of resource endowments, population, poverty level, social characteristics, international labor mobility, and growth prospects, and it recommends strategies tailored to each classification. Atoll economies have little prospect for economic development, and there is special concern about the sustainability of financing of essential services. The ADB therefore recommends that island atolls (the RMI, Kiribati, Nauru, and Tuvalu) develop trust funds and continue to rely on aid for their economic sustainability. In contrast, the strategy for economically advanced countries (Fiji, Samoa, the FSM, Tonga, and the Cook Islands) is to focus on physical infrastructure and private sector development as well as tourism industry development. For Papua New Guinea, the Solomon Islands, and Vanuatu (the Melanesian nations), the group with the most potential for growth, the ADB priority is to expand access in rural areas, reduce high population growth rates, and build local government capacity. (See table 6 for the ADB’s classification of the Pacific Islands in its Pacific Strategy.) Several trust funds exist in the Pacific. Their objective is to provide a source of sustainable revenue from the proceeds from investment of the trust fund capital. According to a report prepared in 1999 for the ADB, these funds have had mixed results. 
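The sustainability these funds aim for rests on simple capital-maintenance arithmetic: only the investment income left over after inflation-proofing the capital can be distributed without eroding the fund's real value. A minimal sketch of that split (the function name and all figures are illustrative, not drawn from any actual fund agreement):

```python
def annual_distribution(capital, nominal_return, inflation):
    """Split one year's investment income: reinvest enough to hold the
    capital's real value constant; the remainder is available to distribute.
    Illustrative sketch only; actual rules are set by each fund agreement."""
    income = capital * nominal_return
    reinvested = capital * inflation           # inflation-proof the capital
    distributable = max(0.0, income - reinvested)
    return capital + reinvested, distributable

# Illustrative figures: $60 million capital, 7% nominal return, 3% inflation.
new_capital, payout = annual_distribution(60e6, 0.07, 0.03)
# About $1.8 million is reinvested to preserve real capital, leaving roughly
# $2.4 million to distribute; in a year when returns fall below inflation,
# nothing is paid out.
```

The Tuvalu fund's "A" and "B" accounts, discussed below, institutionalize this kind of separation between the capital and the proceeds available for distribution.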
The Tuvalu Trust Fund, which was set up by aid donors to provide sustained revenue, is cited as a model for future trust funds because the fund agreement incorporates key design features. The report to the ADB, plus other reports by the United Nations and USAID, identify specific design characteristics that may lead to successful trust funds. Several trust funds are currently operating in the Pacific. Examples include the Tuvalu Trust Fund; the Kiribati Revenue Equalisation Reserve Fund; the Banaban Trust Fund; the Marshall Islands resettlement trust funds for Bikini Island, Enewetak, Utrik, and Rongelap; the Palau Trust Fund; and the Nauru Phosphate Royalties Trust. These funds were designed to serve a range of purposes, including the development of rural communities or outer islands, management of recurring government expenses, and assistance in achieving greater financial autonomy. Six of the seven major donors have contributed to trust funds. For example, Australia, New Zealand, the United Kingdom, and Japan have donated funds to the Tuvalu Trust Fund. The United States contributed to the Palau Trust Fund. Finally, the ADB provided a loan for another trust fund in Tuvalu that was designed to assist outer island development. Although the funds of Kiribati and Tuvalu are known for their success in maintaining fund value, both funds have received criticism because they have tended to reinvest their revenues into the funds instead of using them for development. In contrast to the investment success of the Kiribati and Tuvalu funds, the Banaban Trust Fund has encountered serious difficulties involving misappropriation and poor management of the fund’s capital, due to the failure of the fund structure to separate fund managers and fund users and to protect the fund capital. Similarly, the Nauru Phosphate Royalties Trust has lost most of its value due to poor advice from its legal and economic advisers. 
In addition, Nauru has borrowed against future earnings of the Trust. Finally, the Bikini Island Resettlement Trust Fund, although successful in providing a stream of revenue, has experienced difficulty in finding an equitable distribution mechanism for revenue to beneficiaries because clear guidelines were not established in the fund agreement. The Tuvalu Trust Fund was created by an international agreement between Tuvalu, Australia, New Zealand, and the United Kingdom in 1987. The fund was set up to enable the small island nation to help finance chronic budget deficits, underpin economic development, and achieve greater financial autonomy. As a result of the agreement to create the fund, annual British aid for recurring budget expenses ended. Initial contributions to the fund in 1987 amounted to Australian $27.1 million. The initial donors were Tuvalu, Australia, New Zealand, and the United Kingdom, with later contributions from Japan and South Korea. The fund capital stood at $66 million as of December 2000, and Tuvalu has set an informal target of $100 million before it will stop reinvesting in the fund. The fund management structure includes (1) a Board of Directors, with each member having been appointed by an original donor; (2) professional fund management; (3) external auditors; and (4) an advisory committee. According to one of its members, the advisory committee regularly evaluates and monitors the fund and provides advisory reports to the government of Tuvalu and the Board of Directors. Each of the original donors has a member on the committee, while Tuvalu currently has two members. Although the donors nominate the advisory committee members, the committee acts independently. The member from New Zealand, for example, does not consult with New Zealand on economic decisions. Another key element of the fund is the separation of fund capital from fund proceeds available for distribution. 
The fund capital is held in an “A” account and invested primarily in Australia. The objectives of the A account are to maintain the real value of the fund and to provide a regular stream of income to the government of Tuvalu. Income earned from the investments is calculated annually. Generally, part of the income is automatically reinvested to maintain the real value of the fund, while remaining income is placed in a separate account, the “B” account, to hold it for distribution to the government of Tuvalu. According to a report on the 10th anniversary of the trust fund, the B account has become an important tool for the government to use in managing its cash flow. The government limits budget growth to the amount of money the fund can deliver. According to an advisory committee member, the trust fund agreement does not allow the donors to intervene in determining how Tuvalu uses the fund proceeds. A U.N. review of the fund noted that this arrangement provides the Tuvalu government with a considerable degree of financial independence, which was not possible under a system of direct bilateral assistance. Bad decisions by the Tuvalu government would affect only the B account, not the fund capital. According to the reports on trust funds, these funds can be effective instruments for providing development assistance if they are properly designed and managed. The Nimmo-Bell & Company 1999 report to the ADB identified several issues that trust funds must address: The purpose of the fund must be clear and specific, along with containing clear and measurable goals and objectives. There should be a legal structure that permits the establishment of the fund; tax laws allowing the fund to be tax exempt, within the country and internationally; and a provision for donations from public and private contributors. There must be a sound, transparent, and accountable governance structure. 
Mechanisms must be provided to ensure involvement of a broad set of stakeholders, including the beneficiaries, central and local government, and donors during the design process. Adequate protection mechanisms must be built into the structure to safeguard the capital of the fund and ensure a fair distribution of benefits. Strong linkages should exist between the fund and national strategies and action plans. Baseline information should be collected at the initiation of the trust fund so that performance can be measured against it. Money managers should be selected on a competitive basis. The sophistication of investment management should reflect the size of the fund in order to keep administrative and transaction costs to an appropriate level. Technical assistance should be provided during the establishment phase and the first few years of operation to assist fund managers in implementing the intent of the fund and to monitor its performance. According to a U.N. report, trust funds are most appropriate for addressing development problems that require a continuous income stream over a long-term period. A key advantage of a trust fund for donors is its cost-effectiveness in reducing the administrative costs associated with individual projects or aid cycles. The advantages of a trust fund for recipients are the abilities to (1) improve the coordination, consistency, and sustainability of overall development efforts; (2) reduce administrative efforts linked with obtaining assistance and preparing reports on the use of donor resources; and (3) coordinate disbursements of assistance with institutional capacity to manage the assistance. Finally, according to a USAID working paper on endowment funds (trust funds), several lessons are available from USAID’s involvement in funding more than 35 endowment funds. These lessons include the need for adequate financing to establish the fund, the strategic use of matching funds to leverage the U.S. 
contribution, and the importance of fund independence from government or secular interests. The principal conclusions from the review of USAID endowment funds were that (1) under the appropriate conditions, such funds can be a viable option for providing long-term, sustainable development; (2) using funds can be an important strategy for increasing the capabilities of development partners; (3) strong institutions that are well managed and have successful track records are an essential prerequisite to funding; and (4) by their nature, funds involve less USAID monitoring and oversight than other types of activities because of built-in safeguards. These safeguards include (1) USAID involvement in the design of the fund agreement, (2) USAID approval of the initial Board of Directors and possibly appointment of a board member, (3) annual audits and performance reports, and (4) a requirement that all funds be invested in financial instruments offered in the United States through a U.S.-based financial intermediary. The report concluded that well-designed funds that are consistent with USAID and host country objectives are a “natural” for countries graduating from USAID assistance. Sectorwide approaches emerged in the 1990s as a form of assistance designed to return ownership of the development process to the recipient government, according to a report by the Overseas Development Institute. The approaches are a response to (1) recent work on aid effectiveness, which found that development assistance requires a supportive policy environment in the recipient country in order to achieve sustainable benefits, and (2) concern that a proliferation of stand-alone, donor-funded projects has led to a piecemeal and distorted pattern of development. In addition, donors believe that poor coordination has contributed to multiple donor agendas and reporting systems, which has complicated the development process for recipient countries. 
Sectorwide approaches are expected to achieve greater coherence in the use of aid by allowing recipient governments to assume ownership of the planning and implementation of all activities within a specific sector. If the donor projects are not set within a coherent plan and budget, the result can be an effort that is expensive to manage and in which there is wasteful duplication, uneven coverage, inconsistent approaches, and poor sustainability of projects. The principal characteristic of a fully developed sector program is that all significant funding for a designated sector should support a single sector policy and expenditure program, under the recipient government’s leadership. The assumption is that, by requiring recipients to develop their own sector strategies, sectorwide approaches will enhance country ownership. A condition for assuming ownership, however, is the presence of sound policies, such as reasonable macroeconomic and budget policies; a supportive environment for private sector development; and a role for the public sector that is consistent with the government’s management and financial capacity. Donors may have to work closely with the recipients to develop the needed policy environment. An Australian Agency for International Development document noted five stages in progressively strengthening government sector management. The progression depends on achieving milestones related to improved effectiveness in government budget management. The stages range from the first, in which a donor provides project-based assistance and implementation within an agreed policy framework, to the fifth, supporting a sector program with a common financing mechanism. Another benefit of having a coherent sector strategy under the recipient government’s leadership is that donors support the sector under a common framework, thus minimizing the problems of poor coordination. 
Two reports identified the following conditions for making sectorwide approaches successful: Sectorwide approaches are more relevant for countries and sectors in which donors’ contributions are large enough to create coordination difficulties. These approaches are less desirable if aid is only a small share of the budget. Sectorwide approaches are potentially more useful when applied to those sectors in which there exists greater agreement among donors and recipients regarding the need for a strong government role in the financing, planning, and delivery of services, hence the dominance of health and education. Supportive macroeconomic and budget frameworks must be in place because sectorwide approaches operate over a longer time frame than individual projects. Donors should not dismiss sectorwide approaches out of hand because of a perceived lack of recipient capacity. The best strategy may be to strengthen the sector capacity. Sectorwide approaches have been more successful in certain areas, such as health and education, and have tended to fail when attempting to address crosscutting themes, such as the environment, or sectors in which there is a great deal of disagreement about the proper role of government, such as the agriculture sector. These themes need to be incorporated into various sector programs, rather than having their own sectorwide approaches. Sectorwide approaches are more likely to be successful where public expenditure is a major feature of the sector and where the donor contribution is large enough for coordination to be a problem (where aid forms more than 10 percent of GDP). As of June 2000, there were about 80 sector programs being prepared and implemented, mostly in Africa, according to the Overseas Development Institute. The approaches are found exclusively in highly aid-dependent poor countries. More than half of the approaches have been in the health and education sectors. 
Thus far, Australia is the only bilateral donor to take a sectorwide approach in the Pacific, and this is a limited pilot project that has not been evaluated. In addition, from a recent survey of 16 sector programs, more than 80 percent of the aid provided was in the form of traditional project assistance, making use of individual donor procedures, and just 17 percent was given in the form of sector budget support. These results are consistent with documents on sectorwide assistance, which describe donors as moving gradually from project assistance strategies to a sectorwide approach. The U.S. proposal to the FSM and the RMI, by contrast, would shift from a budget support strategy to a sectorwide approach. The following are GAO’s comments on the letter from the Republic of the Marshall Islands dated July 19, 2001. 1. We have added text on page 6 of this report to recognize that portions of the Compact economic assistance to the RMI are used for payments related to the U.S. military presence at the Kwajalein Atoll. A 1982 Land Use Agreement between the RMI government and an organization of Kwajalein landowners obligated the government to make payments to the landowners. In fiscal year 1998, for example, the RMI government paid about $8 million in Compact assistance to the landowners. 2. Our report clearly acknowledges that donors need to tailor their strategies to the individual characteristics of the recipient nations. Page 15 describes the ADB’s subregional approach to development, on the basis of Pacific Island characteristics. On pages 19 and 20, we suggest that tailored strategies may be more effective because they are more likely to ensure recipient commitment. 3. We highlighted Australia’s pilot project to support the health sector in Papua New Guinea as a sectorwide approach in the Pacific. However, because the United States is proposing sector grants for the RMI and the FSM, we included additional information about sectorwide approaches in appendix V. 
The appendix summarizes some development conditions that could help sectorwide approaches succeed. For this information, we relied on reports that summarized donor experiences with these approaches and did not evaluate individual country approaches outside of the Pacific region. Additional information on selected sectorwide approaches can be found in (1) New Approaches to Development Co-operation: What can we learn from experience with implementing Sector Wide Approaches? and (2) The Status of Sector Wide Approaches. The following are GAO’s comments on the letter from the Federated States of Micronesia dated July 19, 2001. 1. The intent of our report was to highlight some of the lessons learned from other donors’ experiences with development assistance throughout the Pacific. One of the lessons, which we discuss on page 19 of the report, is that strategies tailored to the individual development conditions of the recipient country are more likely to succeed. In the case of the FSM, this lesson implies that the United States could adopt a new strategy with different assistance levels to reflect improved development conditions. 2. On page 11 of this report, we recognize that many motivations, such as historical ties, guide the distribution of development assistance to the Pacific Island nations. In a previous report, we discussed these historical ties and the current obligation to provide assistance through the Compact of Free Association through fiscal year 2001, with the possibility of extended assistance. Nevertheless, as we note on page 4, the FSM, as a small island nation in the Pacific, shares similar development challenges with at least 13 other island nations that receive development assistance. Also, beginning with footnote 3 and discussed throughout this report, we note that the major donors provided $11 billion in development assistance to 14 Pacific Island nations. 
We compiled these data from several sources—the OECD and annual financial audits of the FSM, the RMI, and Palau. 3. Also on page 11, we have replaced the statement, now on pages 11 and 12, with other text to clarify our point that multiple objectives for the Compact may have contributed to reduced expectations for accountability of the assistance. Also on page 12, we recognize that the Compact economic assistance to the FSM and the RMI was part of an agreement that included political and defense elements. The previous report cited in the preceding response discusses these objectives. 4. We agree that the FSM does not receive assistance from the EU and did not report that information. As we note in table 2 of this report, the EU, however, is one of the major multilateral donors to the Pacific Island nations and provided $900 million to islands in the region from 1987 to 1999. In addition to the person named above, Dennis Richards, Jennifer Li Wong, Ron Schwenn, and Rona Mendelsohn made key contributions to this report.
Australia, Japan, New Zealand, the United Kingdom, and the United States have been the major providers of bilateral development assistance to the Pacific Island nations since 1987. The Asian Development Bank and the European Union have been the major multilateral donors. The donors' main development objectives, according to the planning documents, have been to alleviate poverty and to set the Pacific Island nations on the path to economic self-sufficiency. To achieve these objectives, these donors focus their assistance in key areas, such as education, policy reform, and infrastructure. The United States could draw several lessons from the donors' experiences in providing assistance, as well as from the strategies and approaches the donors have adopted. These lessons could provide valuable insights for the United States as it negotiates additional economic assistance to the Federated States of Micronesia and the Republic of the Marshall Islands. On the basis of the donors' experiences, GAO observed that (1) assistance strategies may involve trade-offs in expectations of aid effectiveness if other objectives for providing assistance take priority over development objectives; (2) assistance strategies may involve trade-offs between effectiveness and accountability, on the one hand, and administrative costs, on the other hand; (3) effective assistance depends on a good policy environment in the recipient country to create the conditions for sustainable development; (4) strategies tailored to the individual needs of the recipient country might have greater chances of succeeding because they offer recipients opportunities for stronger ownership of the program; (5) flexible strategies enable donors to adapt their assistance to changing circumstances and provide incentives for development achievements; (6) well-designed trust funds can provide sustainable sources of assistance to Pacific Island nations with limited growth options; and (7) sectorwide approaches, although 
generally untested in the Pacific, depend on recipient government commitment and ability.
Because of such emergencies as natural disasters, hazardous material spills, and riots, all levels of government have had some experience in preparing for different types of disasters and emergencies. Preparing for all potential hazards is commonly referred to as the “all-hazards” approach. While terrorism is a component within an all-hazards approach, terrorist attacks potentially impose a new level of fiscal, economic, and social dislocation within this nation’s boundaries. Given the specialized resources that are necessary to address a chemical or biological attack, the range of governmental services that could be affected, and the vital role played by private entities in preparing for and mitigating risks, state and local resources alone will likely be insufficient to meet the terrorist threat. Some of these specific challenges can be seen in the area of bioterrorism. For example, a biological agent released covertly might not be recognized for a week or more because symptoms may only appear several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These characteristics require responses that are unique to bioterrorism, including health surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics or vaccines to large segments of the population to prevent the spread of an infectious disease. The resources necessary to undertake these responses are generally beyond state and local capabilities and would require assistance from and close coordination with the federal government. National preparedness is a complex mission that involves a broad range of functions performed throughout government, including national defense, law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. 
While only the federal government is empowered to wage war and regulate interstate commerce, state and local governments have historically assumed primary responsibility for managing emergencies through police, firefighters, and emergency medical personnel. The federal government’s role in responding to major disasters is generally defined in the Stafford Act, which requires a finding that the disaster is so severe as to be beyond the capacity of state and local governments to respond effectively before major disaster or emergency assistance from the federal government is warranted. Once a disaster is declared, the federal government—through the Federal Emergency Management Agency (FEMA)—may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities. There has been an increasing emphasis over the past decade on preparedness for terrorist events. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism. Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense Against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal agencies, including those in the Department of Justice, Department of Energy, FEMA, and Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events. 
The attacks of September 11, 2001, as well as the subsequent attempts to contaminate Americans with anthrax, dramatically exposed the nation’s vulnerabilities to domestic terrorism and prompted numerous legislative proposals to further strengthen our preparedness and response. During the first session of the 107th Congress, several bills were introduced with provisions relating to state and local preparedness. For instance, the Preparedness Against Domestic Terrorism Act of 2001, which you cosponsored, Mr. Chairman, proposes the establishment of a Council on Domestic Preparedness to enhance the capabilities of state and local emergency preparedness and response. The funding for homeland security increased substantially after the attacks. According to documents supporting the president’s fiscal year 2003 budget request, about $19.5 billion in federal funding for homeland security was enacted in fiscal year 2002. The Congress added to this amount by passing an emergency supplemental appropriation of $40 billion. According to the budget request documents, about one-quarter of that amount, nearly $9.8 billion, was dedicated to strengthening our defenses at home, resulting in an increase in total federal funding on homeland security of about 50 percent, to $29.3 billion. Table 1 compares fiscal year 2002 funding for homeland security by major categories with the president’s proposal for fiscal year 2003. We have tracked and analyzed federal programs to combat terrorism for many years and have repeatedly called for the development of a national strategy for preparedness. We have not been alone in this message; for instance, national commissions, such as the Gilmore Commission, and other national associations, such as the National Emergency Management Association and the National Governors Association, have advocated the establishment of a national preparedness strategy. 
The attorney general’s Five-Year Interagency Counterterrorism and Technology Crime Plan, issued in December 1998, represents one attempt to develop a national strategy on combating terrorism. This plan entailed a substantial interagency effort and could potentially serve as a basis for a national preparedness strategy. However, we found it lacking in two critical elements necessary for an effective strategy: (1) measurable outcomes and (2) identification of state and local government roles in responding to a terrorist attack. In October 2001, the president established the Office of Homeland Security as a focal point with a mission to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks. While this action represents a potentially significant step, the role and effectiveness of the Office of Homeland Security in setting priorities, interacting with agencies on program development and implementation, and developing and enforcing overall federal policy in terrorism-related activities are still in the formative stages. The emphasis needs to be on a national rather than a purely federal strategy. We have long advocated the involvement of state, local, and private-sector stakeholders in a collaborative effort to arrive at national goals. The success of a national preparedness strategy relies on the ability of all levels of government and the private sector to communicate and cooperate effectively with one another. To develop this essential national strategy, the federal role needs to be considered in relation to other levels of government, the goals and objectives for preparedness, and the most appropriate tools to assist and enable other levels of government and the private sector to achieve these goals. Although the federal government appears monolithic to many, in the area of terrorism prevention and response, it has been anything but. 
More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 federal entities in bioterrorism alone. One of the areas that the Office of Homeland Security will be reviewing is the coordination among federal agencies and programs. Concerns about coordination and fragmentation in federal preparedness efforts are well founded. Our past work, conducted prior to the creation of the Office of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. There had been no single leader in charge of the many terrorism-related functions conducted by different federal departments and agencies. In fact, several agencies had been assigned leadership and coordination functions, including the Department of Justice, the Federal Bureau of Investigation, FEMA, and the Office of Management and Budget. We previously reported that officials from a number of agencies that combat terrorism believe that the coordination roles of these various agencies are not always clear. The recent Gilmore Commission report expressed similar concerns, concluding that the current coordination structure does not provide the discipline necessary among the federal agencies involved. In the past, the absence of a central focal point resulted in two major problems. The first of these is a lack of a cohesive effort from within the federal government. For example, the Department of Agriculture, the Food and Drug Administration, and the Department of Transportation have been overlooked in bioterrorism-related policy and planning, even though these organizations would play key roles in response to terrorist acts. 
In this regard, the Department of Agriculture has been given key responsibilities to carry out in the event that terrorists were to target the nation’s food supply, but the agency was not consulted in the development of the federal policy assigning it that role. Similarly, the Food and Drug Administration was involved with issues associated with the National Pharmaceutical Stockpile, but it was not involved in the selection of all items procured for the stockpile. Further, the Department of Transportation has responsibility for delivering supplies under the Federal Response Plan, but it was not brought into the planning process and consequently did not learn the extent of its responsibilities until its involvement in subsequent exercises. Second, the lack of leadership has resulted in the federal government’s development of programs to assist state and local governments that were similar and potentially duplicative. After the terrorist attack on the federal building in Oklahoma City, the federal government created additional programs that were not well coordinated. For example, FEMA, the Department of Justice, the Centers for Disease Control and Prevention, and the Department of Health and Human Services all offer separate assistance to state and local governments in planning for emergencies. Additionally, a number of these agencies also condition receipt of funds on completion of distinct but overlapping plans. Although the many federal assistance programs vary somewhat in their target audiences, the potential redundancy of these federal efforts warrants scrutiny. In this regard, we recommended in September 2001 that the president work with the Congress to consolidate some of the activities of the Department of Justice’s Office for State and Local Domestic Preparedness Support under FEMA. State and local response organizations believe that federal programs designed to improve preparedness are not well synchronized or organized. 
They have repeatedly asked for a one-stop “clearinghouse” for federal assistance. As state and local officials have noted, the multiplicity of programs can lead to confusion at the state and local levels and can expend precious federal resources unnecessarily or make it difficult for them to identify available federal preparedness resources. As the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds and have argued that the application process is burdensome and inconsistent among federal agencies. Although the federal government can assign roles to federal agencies under a national preparedness strategy, it will also need to reach consensus with other levels of government and with the private sector about their respective roles. Clearly defining the appropriate roles of government may be difficult because, depending upon the type of incident and the phase of a given event, the specific roles of local, state, and federal governments and of the private sector may not be separate and distinct. A new warning system, the Homeland Security Advisory System, is intended to tailor notification of the appropriate level of vigilance, preparedness, and readiness in a series of graduated threat conditions. The Office of Homeland Security announced the new warning system on March 12, 2002. The new warning system includes five levels of alert for assessing the threat of possible terrorist attacks: low, guarded, elevated, high, and severe. These levels are also represented by five corresponding colors: green, blue, yellow, orange, and red. When the announcement was made, the nation stood in the yellow condition, indicating elevated risk. The warning can be upgraded for the entire country or for specific regions and economic sectors, such as the nuclear industry. The system is intended to address a problem with the previous blanket warning system. 
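The five graduated threat conditions and their color codes amount to a simple ordered mapping. The sketch below only illustrates that structure; the Python names are hypothetical, not part of any official system.

```python
# The five alert levels of the Homeland Security Advisory System,
# ordered from lowest to highest, with their corresponding color codes.
# Variable and function names here are illustrative only.
THREAT_CONDITIONS = [
    ("low", "green"),
    ("guarded", "blue"),
    ("elevated", "yellow"),
    ("high", "orange"),
    ("severe", "red"),
]

def color_for(condition):
    """Look up the color code corresponding to a named threat condition."""
    return dict(THREAT_CONDITIONS)[condition]
```

Because the levels are ordered, a warning can be upgraded simply by moving to a higher level, whether for the entire country or for a specific region or economic sector.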
After September 11th, the federal government issued four general warnings about possible terrorist attacks, directing federal and local law enforcement agencies to place themselves on the “highest alert.” However, government and law enforcement officials, particularly at the state and local levels, complained that general warnings were too vague and a drain on resources. To obtain views on the new warning system from all levels of government, law enforcement, and the public, the United States Attorney General, who will be responsible for the system, provided a 45-day comment period from the announcement of the new system on March 12th. This provides an opportunity for state and local governments as well as the private sector to comment on the usefulness of the new warning system, and the appropriateness of the five threat conditions with associated suggested protective measures. Numerous discussions have been held about the need to enhance the nation’s preparedness, but national preparedness goals and measurable performance indicators have not yet been developed. These are critical components for assessing program results. In addition, the capability of state and local governments to respond to catastrophic terrorist attacks is uncertain. At the federal level, measuring results for federal programs has been a longstanding objective of the Congress. The Congress enacted the Government Performance and Results Act of 1993 (commonly referred to as the Results Act). The legislation was designed to have agencies focus on the performance and results of their programs rather than on program resources and activities, as they had done in the past. Thus, the Results Act became the primary legislative framework through which agencies are required to set strategic and annual goals, measure performance, and report on the degree to which goals are met. 
The outcome-oriented principles of the Results Act include (1) establishing general goals and quantifiable, measurable, outcome-oriented performance goals and related measures, (2) developing strategies for achieving the goals, including strategies for overcoming or mitigating major impediments, (3) ensuring that goals at lower organizational levels align with and support general goals, and (4) identifying the resources that will be required to achieve the goals. A former assistant professor of public policy at the Kennedy School of Government, now the senior director for policy and plans with the Office of Homeland Security, noted in a December 2000 paper that a preparedness program lacking broad but measurable objectives is unsustainable. This is because it deprives policymakers of the information they need to make rational resource allocations and prevents program managers from measuring progress. He recommended that the government develop a new statistical index of preparedness, incorporating a range of different variables, such as quantitative measures for special equipment, training programs, and medicines, as well as professional subjective assessments of the quality of local response capabilities, infrastructure, plans, readiness, and performance in exercises. He advocated that the index should go well beyond the current rudimentary milestones of program implementation, such as the amount of training and equipment provided to individual cities. The index should strive to capture indicators of how well a particular city or region could actually respond to a serious terrorist event. This type of index, according to this expert, would then allow the government to measure the preparedness of different parts of the country in a consistent and comparable way, providing a reasonable baseline against which to measure progress. 
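The paper describes the proposed index only in general terms. A minimal sketch of how such a composite might combine quantitative measures with subjective assessments follows; all component names, scores, and weights are hypothetical assumptions for illustration, not drawn from the paper.

```python
# Hypothetical composite preparedness index: a weighted average of
# quantitative measures (equipment, training, medicines) and subjective
# expert assessments, each scored on a 0-100 scale. Component names and
# weights are illustrative assumptions, not drawn from the paper.

def preparedness_index(scores, weights):
    """Weighted average of component scores; higher means better prepared."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

city_a = {"equipment": 80, "training": 60, "medicines": 70, "expert_assessment": 50}
weights = {"equipment": 0.2, "training": 0.3, "medicines": 0.2, "expert_assessment": 0.3}
index_a = preparedness_index(city_a, weights)
```

Scoring every city or region against the same components and weights is what would make the results comparable across the country and provide a baseline for measuring progress over time.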
In October 2001, FEMA’s director recognized that assessments of state and local capabilities have to be viewed in terms of the level of preparedness being sought and what measurement should be used for preparedness. The director noted that the federal government should not provide funding without assessing what the funds will accomplish. Moreover, the president’s fiscal year 2003 budget request for $3.5 billion through FEMA for first responders—local police, firefighters, and emergency medical professionals—provides that these funds be accompanied by a process for evaluating the effort to build response capabilities, in order to validate that effort and direct future resources. FEMA has developed an assessment tool that could be used in developing performance and accountability measures for a national strategy. To ensure that states are adequately prepared for a terrorist attack, FEMA was directed by the Senate Committee on Appropriations to assess states’ response capabilities. In response, FEMA developed a self-assessment tool—the Capability Assessment for Readiness (CAR)—that focuses on 13 key emergency management functions, including hazard identification and risk assessment, hazard mitigation, and resource management. However, these key emergency management functions do not specifically address public health issues. In its fiscal year 2001 CAR report, FEMA concluded that states were only marginally capable of responding to a terrorist event involving a weapon of mass destruction. Moreover, the president’s fiscal year 2003 budget proposal acknowledges that our capabilities for responding to a terrorist attack vary widely across the country. Many areas have little or no capability to respond to a terrorist attack that uses weapons of mass destruction. The budget proposal further adds that even the best prepared states and localities do not possess adequate resources to respond to the full range of terrorist threats we face. 
Proposed standards for state and local emergency management programs have been developed by a consortium of emergency managers from all levels of government and are currently being pilot tested through the Emergency Management Accreditation Program at the state and local levels. The program’s purpose is to establish minimum acceptable performance criteria by which emergency managers can assess and enhance current programs to mitigate, prepare for, respond to, and recover from disasters and emergencies. For example, one such standard requires that (1) the program develop the capability to direct, control, and coordinate response and recovery operations, (2) an incident management system be utilized, and (3) organizational roles and responsibilities be identified in the emergency operational plans. Although FEMA has experience in working with others in the development of assessment tools, it has had difficulty in measuring program performance. As the president’s fiscal year 2003 budget request acknowledges, FEMA generally performs well in delivering resources to stricken communities and disaster victims quickly. The agency performs less well in its oversight role of ensuring the effective use of such assistance. Further, the agency has not been effective in linking resources to performance information. FEMA’s Office of Inspector General found that FEMA did not have an ability to measure state disaster risks and performance capability, and it concluded that the agency needed to determine how to measure state and local preparedness programs. Since September 11th, many state and local governments have faced declining revenues and increased security costs. A survey of about 400 cities conducted by the National League of Cities reported that since September 11th, one in three American cities has seen its local economy, municipal revenues, and public confidence decline while public-safety spending is up. 
Further, the National Governors Association estimates fiscal year 2002 state budget shortfalls of between $40 billion and $50 billion, making it increasingly difficult for the states to take on expensive, new homeland security initiatives without federal assistance. State and local revenue shortfalls coupled with increasing demands on resources make it more critical that federal programs be designed carefully to match the priorities and needs of all partners—federal, state, local, and private. Our previous work on federal programs suggests that the choice and design of policy tools have important consequences for performance and accountability. Governments have at their disposal a variety of policy instruments, such as grants, regulations, tax incentives, and regional coordination and partnerships, that they can use to motivate or mandate other levels of government and private-sector entities to take actions to address security concerns. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. Key to the national effort will be determining the appropriate level of funding so that policies and tools can be designed and targeted to elicit a prompt, adequate, and sustainable response while also protecting against federal funds being used to substitute for spending that would have occurred anyway. The federal government often uses grants to state and local governments as a means of delivering federal programs. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad national purpose and to provide a great deal of discretion to state and local officials. 
Either type of grant can be designed to (1) target the funds to states and localities with the greatest need, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as “supplantation,” with a maintenance-of-effort requirement that recipients maintain their level of previous funding, and (3) strike a balance between accountability and flexibility. More specifically: Targeting: The formula for the distribution of any new grant could be based on several considerations, including the state or local government’s capacity to respond to a disaster. This capacity depends on several factors, perhaps the most important of which is the underlying strength of the state’s tax base and whether that base is expanding or in decline. In an August 2001 report on disaster assistance, we recommended that the director of FEMA consider replacing the per-capita measure of state capability with a more sensitive measure, such as the amount of a state’s total taxable resources, to assess the capabilities of state and local governments to respond to a disaster. Other key considerations include the level of need and the costs of preparedness. Maintenance-of-effort: In our earlier work, we found that substitution is to be expected in any grant and that, on average, every additional federal grant dollar results in about 60 cents of supplantation. We found that supplantation is particularly likely for block grants supporting areas with prior state and local involvement. Our recent work on the Temporary Assistance for Needy Families block grant found that a strong maintenance-of-effort provision limits states’ ability to supplant. Recipients can be penalized for not meeting a maintenance-of-effort requirement. 
Balance accountability and flexibility: Experience with block grants shows that such programs are sustainable if they are accompanied by sufficient information and accountability for national outcomes to enable them to compete for funding in the congressional appropriations process. Accountability can be established for measured results and outcomes that permit greater flexibility in how funds are used while at the same time ensuring some national oversight. Grants previously have been used for enhancing preparedness and recent proposals direct new funding to local governments. In recent discussions, local officials expressed their view that federal grants would be more effective if local officials were allowed more flexibility in the use of funds. They have suggested that some funding should be allocated directly to local governments. They have expressed a preference for block grants, which would distribute funds directly to local governments for a variety of security-related expenses. Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president’s fiscal year 2003 budget, have included some of these provisions. This matching grant would be administered by FEMA, with 25 percent being distributed to the states based on population. The remainder would go to states for pass-through to local jurisdictions, also on a population basis, but states would be given the discretion to determine the boundaries of substate areas for such a pass-through—that is, a state could pass through the funds to a metropolitan area or to individual local governments within such an area. 
Although the state and local jurisdictions would have discretion to tailor the assistance to meet local needs, it is anticipated that more than one-third of the funds would be used to improve communications, an additional one-third to equip state and local first responders, and the remainder for training, planning, technical assistance, and administration. Federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, transportation systems, water systems, and public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors (for example, for chemical and nuclear facilities). In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Five models of shared regulatory authority are: fixed federal standards that preempt all state regulatory action in the subject area covered; federal minimum standards that preempt less stringent state laws but permit states to establish standards more stringent than the federal ones; inclusion of federal regulatory provisions not established through preemption in grants or other forms of assistance that states may choose to accept; cooperative programs in which voluntary national standards are formulated by federal and state officials working together; and widespread state adoption of voluntary standards formulated by quasi-official entities. Any one of these shared regulatory approaches could be used in designing standards for preparedness. The first two of these mechanisms involve federal preemption; the other three represent alternatives to preemption. 
Each mechanism offers different advantages and limitations that reflect some of the key considerations in the federal-state balance. To the extent that private entities will be called upon to improve security over dangerous materials or to protect vital assets, the federal government can use tax incentives to encourage their activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or tax rates in the federal tax laws. Unlike grants, tax incentives generally do not permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria. National preparedness is a complex mission that requires unusual interagency, interjurisdictional, and interorganizational cooperation. The responsibilities and resources for preparedness reside with different levels of government—federal, state, county, and local—as well as with various public, private, and non-governmental entities. An illustration of this complexity can be seen with ports. As a former commissioner on the Interagency Commission on Crime and Security in U.S. Seaports recently noted, there is no central authority at seaports; at least 15 federal agencies have jurisdiction there, the primary ones being the Coast Guard, the Customs Service, and the Immigration and Naturalization Service. In addition, state and local law enforcement agencies and the private sector have responsibilities for port security. The security of ports is particularly relevant in this area given that the ports of Long Beach and Los Angeles together represent the third busiest container handler in the world, after Hong Kong and Singapore. Promoting partnerships between critical actors (including different levels of government and the private sector) helps maximize resources and also supports coordination on a regional level. 
Partnerships could encompass federal, state, and local governments working together to share information, develop communications technology, and provide mutual aid. The federal government may be able to offer state and local governments assistance in certain areas, such as risk management and intelligence sharing. In turn, state and local governments have much to offer in terms of knowledge of local vulnerabilities and resources, such as local law enforcement personnel, available to respond to threats and emergencies in their communities. Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI, provided they are given the information needed to do so. As the United States Conference of Mayors noted, a close working partnership of local and federal law enforcement agencies, which includes the sharing of intelligence, will expand and strengthen the nation’s overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of intelligence among federal agencies. An expansion of this act has been proposed (S. 1615, H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored, Mr. Chairman, addresses a number of information-sharing needs. For instance, this proposed legislation provides that the United States Attorney General expeditiously grant security clearances to governors who apply for them and to state and local officials who participate in federal counterterrorism working groups or regional terrorism task forces. Local officials have emphasized the importance of regional coordination. 
Regional resources, such as equipment and expertise, are essential because of proximity, which allows for quick deployment, and experience in working within the region. Large-scale or labor-intensive incidents quickly deplete a given locality’s supply of trained responders. Some cities have spread training and equipment to neighboring municipal areas so that their mutual aid partners can help. These partnerships afford economies of scale across a region. In events that require a quick response, such as a chemical attack, regional agreements take on greater importance because many local officials do not think that federal and state resources can arrive in sufficient time to help. Mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in response to an emergency. Because individual jurisdictions may not have all the resources they need to respond to all types of emergencies, these agreements allow for resources to be deployed quickly within a region. The terms of mutual aid agreements vary for different services and different localities. These agreements may provide for the state to share services, personnel, supplies, and equipment with counties, towns, and municipalities within the state, with neighboring states, or, in the case of states bordering Canada, with jurisdictions in another country. Some of the agreements also provide for cooperative planning, training, and exercises in preparation for emergencies. Some of these agreements involve private companies and local military bases, as well as local government entities. Such agreements were in place for the three sites that were involved on September 11th—New York City, the Pentagon, and a rural area of Pennsylvania—and provide examples of some of the benefits of mutual aid agreements and of coordination within a region. With regard to regional planning and coordination, there may be federal programs that could provide models for funding proposals. 
In the 1962 Federal-Aid Highway Act, the federal government established a comprehensive cooperative process for transportation planning. This model of regional planning continues today under the Transportation Equity Act for the 21st Century (TEA-21, originally ISTEA) program. This model emphasizes the role of state and local officials in developing a plan to meet regional transportation needs. Metropolitan Planning Organizations (MPOs) coordinate the regional planning process and adopt a plan, which is then approved by the state. Mr. Chairman, in conclusion, as increasing demands are placed on budgets at all levels of government, it will be necessary to make sound choices to maintain fiscal stability. All levels of government and the private sector will have to communicate and cooperate effectively with each other across a broad range of issues to develop a national strategy to better target available resources to address urgent national preparedness needs. Involving all levels of government and the private sector in developing the key aspects of a national strategy that I have discussed today—a definition and clarification of appropriate roles and responsibilities, the establishment of goals and performance measures, and the selection of appropriate tools—is essential to the successful formulation of the national preparedness strategy and ultimately to preparing and defending our nation from terrorist attacks. This completes my prepared statement. I would be pleased to respond to any questions you or other members of the subcommittee may have. For further information about this testimony, please contact me at (202) 512-6737, Paul Posner at (202) 512-9573, or JayEtta Hecker at (202) 512-2834. Other key contributors to this testimony include Jack Burriesci, Matthew Ebert, Colin J. Fallon, Thomas James, Kristen Sullivan Massey, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy. 
Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Homeland Security: A Framework for Addressing the Nation’s Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000. 
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. 
Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
Federal, state, and local governments share responsibility in preparing for catastrophic terrorist attacks. Because the national security threat is diffuse and the challenge is intergovernmental, national policymakers need a firm understanding of the interests, capacities, and challenges of each level of government when formulating antiterrorism strategies. Key aspects of this strategy should include a definition and clarification of the appropriate roles and responsibilities of federal, state, and local entities. GAO has found fragmentation and overlap among federal assistance programs. More than 40 federal entities have roles in combating terrorism, and past federal efforts have resulted in a lack of accountability, a lack of cohesive effort, and program duplication. This situation has led to confusion, making it difficult to identify available federal preparedness resources and effectively partner with the federal government. Goals and performance measures should be established to guide the nation's preparedness efforts; for the nation's preparedness programs, however, outcomes have yet to be defined in terms of domestic preparedness. Given the recent and proposed increases in preparedness funding, establishing clear goals and performance measures is critical to ensuring real and meaningful improvements in preparedness and a fiscally responsible effort. The strategy should also include a careful choice of the most appropriate tools of government to best achieve national goals.
Program and policy decisions require a wide array of information that answers various questions. For example, descriptive information tells how a program operates—what activities are performed, who performs them, and who is reached. In contrast, evaluative information speaks to how well a program is working—such as whether activities are managed efficiently and effectively, whether they are carried out as intended, and to what extent the program is achieving its intended objectives or results. There are a variety of methods for obtaining information on program results, such as performance measurement and program evaluation, which reflect differences in how readily one can observe program results. Performance measurement, as defined by the Results Act, is the ongoing monitoring and reporting of program accomplishments, particularly progress towards preestablished goals. It tends to focus on regularly collected data on the type and level of program activities (process), the direct products and services delivered by the program (outputs), and the results of those activities (outcomes). While performance may be defined more broadly as program process, inputs, outputs, or outcomes, results usually refer only to the outcomes of program activities. For programs that have readily observable results, performance measurement may provide sufficient information to demonstrate program results. In some programs, however, results are not so readily defined or measured. In such cases, program evaluations may be needed, in addition to performance measurement, to examine the extent to which a program is achieving its objectives. Program evaluations are systematic studies conducted periodically to assess how well a program is working. While they may vary in their focus, these evaluations typically examine a broader range of information on program performance and its context than is feasible in ongoing performance measurement. 
Where programs aim to produce changes as a result of program activities, outcome (or effectiveness) evaluations assess the extent to which those outcomes or results were achieved, such as whether students increased their understanding of or skill in the material of instruction. In cases where the program’s outcomes are influenced by complex systems or events outside the program’s control, impact evaluations use scientific research methods to establish the causal connection between outcomes and program activities, estimate what would have happened in the absence of the program, and thus isolate the program’s contribution to those changes. For example, although outcome measures might show a decline in a welfare program’s caseload after the introduction of job placement activities, a systematic impact evaluation would be needed to assess how much of the observed change was due to an improved economy rather than the new program. In addition, a program evaluation that also systematically examines how a program was implemented can provide important information about why a program did or did not succeed and suggest ways to improve it. For the purposes of this report, we used the definition of program evaluation that is used in the Results Act, “an assessment, through objective measurement and systematic analysis, of the manner and extent to which federal programs achieve intended objectives.” We asked about assessments of program results, which could include both the analysis of outcome-oriented program performance measures as well as specially conducted outcome or impact evaluations. Two government initiatives could influence the demand for and the availability and use of program evaluation information. The Results Act seeks to promote a focus on program results, by requiring agencies to set program and agency performance goals and to report annually on their progress in achieving them (beginning with fiscal year 1999). 
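The counterfactual logic of impact evaluation can be sketched with a simple difference-in-differences calculation, one common design for estimating what would have happened absent the program. The caseload figures below are invented for illustration and are not drawn from the welfare example in the text.

```python
# Hypothetical difference-in-differences sketch: a comparison group stands in
# for what would have happened to program participants absent the program.
# All caseload figures are invented for illustration only.

def diff_in_diff(treat_before, treat_after, comp_before, comp_after):
    """Estimated program effect: change in the treatment group
    minus the change in the comparison group."""
    return (treat_after - treat_before) - (comp_after - comp_before)

# Welfare caseloads before and after new job placement activities. Comparison
# sites saw caseloads fall anyway (e.g., an improving economy), so the raw
# decline in the treatment sites overstates the program's contribution.
effect = diff_in_diff(treat_before=1000, treat_after=700,
                      comp_before=1000, comp_after=850)
print(effect)  # raw decline was -300 cases; estimated program effect is -150
```

The subtraction of the comparison group's change is what isolates the program's contribution from outside influences such as the economy, which is exactly the distinction the welfare example above draws between outcome measures and a systematic impact evaluation.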
In addition to encouraging the development of information on program results for activities across the government, the Results Act recognizes the complementary nature of program evaluation and performance measurement. It requires agencies to include a schedule for future program evaluations in their strategic plans, the first of which was to be submitted to Congress by September 30, 1997. The Results Act also requires agencies to review their success in achieving their annual performance goals (which are set forth in their annual performance plans) and to summarize the findings of program evaluations in their annual program performance reports (the first of which is due by March 31, 2000). The National Performance Review (NPR) led by the Vice President’s office has asked agencies to reexamine their policies, programs, and operations to find and implement ways to improve performance and service to their customers. Both of these initiatives—because of their focus on program results—could be expected to increase the demand for and the availability and use of program evaluation information. Other recent governmentwide initiatives could have potentially conflicting effects. In several program areas, devolution of program responsibility from the federal level and consolidation of individual federal programs into more comprehensive, multipurpose grant programs has shifted both program management and accountability responsibilities toward the states. These initiatives may thus make it more difficult for federal agencies to evaluate the results of those programs. In addition, efforts to reduce the growth of the federal budget have resulted in reductions in both federal staff and program resources in many agencies. The combination of these initiatives raises a question: In an environment of limited federal resources and responsibility, how can agencies meet the additional needs for program results information? 
To identify the roles and resources available for federal program evaluation, in 1996, we conducted a mail survey of offices identified by federal agency officials that were conducting studies of program results or effectiveness in 13 cabinet-level departments and 10 independent executive agencies. Detailed information on program evaluation studies refers to those conducted during fiscal year 1995 (regardless of when they began or ended). To identify how recent reforms were expected to affect federal evaluation activities and what strategies were available for responding to those changes, we interviewed external evaluation experts and evaluation and other officials at selected federal and state agencies. In this report, we use the term “agency” to include both cabinet-level departments and independent agencies. We distributed surveys in 1996 regarding federal evaluation activities within 13 cabinet-level departments and 10 independent executive agencies in the federal government. We excluded the Department of Defense from our survey of evaluation offices because of the prohibitively large number of offices it identified as conducting assessments of effectiveness. Although we asked agency officials to be inclusive in their initial nominations of offices that conducted evaluations of program results, some offices that conducted evaluations may have been overlooked and excluded from our survey. However, many offices that were initially identified as having conducted evaluations later reported that they had not done so. In our survey, we asked each office about the range of its analytic and evaluation activities and about the length, cost, purpose, and other characteristics of the program evaluation studies they conducted during fiscal year 1995. (See appendix I for more details on the scope and methodology of the survey.) Between 1996 and 1997, we conducted interviews of program evaluation practitioners selected to represent divergent perspectives. 
We asked what had been or were expected to be the effects of various government changes and reforms on federally supported and related program evaluation activities and about strategies for responding to those effects. We identified individuals with evaluation knowledge and expertise from a review of the survey responses, the evaluation literature, and our prior work; they were from an array of federal and state agencies and the academic and consulting communities. We then judgmentally selected 18 people to interview to reflect (1) a mix of different program types and diverse amounts of experience with program evaluation and (2) experience with some of the reforms at the state or federal level. Those selected included nine evaluation officials (six from offices in federal agencies and three from state legislative audit agencies) and seven external evaluation experts (four from private research organizations and three from universities). In addition, we interviewed an OMB official and one official from a state executive branch agency, and we also asked the officials from the state legislative audit agencies about their experiences with state performance reporting requirements. We conducted our review between May 1996 and July 1997 in accordance with generally accepted government auditing standards. However, we did not independently verify the types of studies conducted, other information reported by our respondents, or information gained from interviewees. The resources allocated to conducting systematic assessments of program results (or evaluation studies) were small and unevenly distributed across the 23 agencies (departments and independent agencies) we surveyed. We found 81 offices that reported expending resources—funds and staff time—on conducting program effectiveness studies in fiscal year 1995. Over half of those offices had 18 or fewer full-time-equivalent (FTE) staff, while only a few offices had as many as 300 to 400 FTEs. (See figure 1.) 
Moreover, program evaluation was only one of these offices’ many responsibilities: only about one-third of the offices reported spending 50 percent or more of their time on evaluation activities (including development of performance measures and assessments of program effectiveness, compliance, or efficiency). (See survey question 9 in appendix I.) Two of the 3 largest offices (over 300 FTEs) spent about 10 percent of their staff time on program evaluation activities. Thus, the estimated staff and budget resources that the 81 offices actually devoted to evaluation activities totaled 669 FTEs and at least $194 million across the 23 agencies surveyed. In addition, most (61 of 81) offices reported also conducting management analysis activities; the most frequent activities were conducting management studies, developing strategic plans, and describing program implementation. Of those offices that could estimate their staff time, about half reported spending less than 25 percent of their time on management analysis. Similarly, many offices reported conducting policy planning and analysis, but most of them reported spending less than 25 percent of their time on it. Overall, a majority of the offices (45 of the 81 identified) conducted few evaluation studies (5 or fewer in fiscal year 1995), while 16 offices—representing 7 agencies—accounted for two-thirds of the 928 studies conducted. (See table 1.) Finally, 6 of the 23 agencies we surveyed did not report any offices conducting evaluation studies in fiscal year 1995. A few of these agencies indicated that they analyzed program accomplishments or outputs or conducted management reviews to assess their programs’ performance but did not conduct an evaluation study per se. Some of the 6 agencies also reported conducting other forms of program reviews that focused on assessing program compliance or efficiency rather than program results. Offices conducting program evaluations were located at various levels of federal agencies. 
A few of the 81 offices were located in the central policy or administrative office at the highest level of the organization (5 percent) or with the Inspectors General (5 percent); many more were located in administrative offices at a major subdivision level (43 percent) or in program offices or technical or analytic offices supporting program offices (30 and 16 percent, respectively). (See table 2.) Four of the 23 agencies surveyed had offices at all 3 levels (agency, division, and program), and over half the agencies (14 of 23) had conducted evaluations at the program level. The 16 offices conducting 20 or more studies were more likely to be centralized at the agency or division level than at the program level. A diverse array of evaluation studies was described in the surveys. Just over half of the studies for which we have such information were conducted in-house (51 percent), and 27 percent lasted under 6 months. But the studies that were contracted out tended to be larger investments—almost two-thirds of them took over a year to complete, and over half cost between $100,000 and $500,000. Moreover, almost a third of all the studies lasted more than 2 years, reflecting some long-term evaluations. (See table 3.) For example, a study of the impact of a medical treatment program, which used an experimental design with a complex set of Medicare program and clinical data from thousands of patients on numerous outcomes (for both patients and program costs), took over 2 years and cost over $1 million to complete. Many of the 1995 studies reportedly used relatively simple designs or research methods, and many relied on existing program data. The two most commonly reported study designs were judgmental assessments (18 percent) and experimental designs employing random assignment (14 percent). (See table 4 for a list of designs ranging from the most to the least control over study conditions.) 
Many of the studies (over 70 of the 129) that used experimental designs were evaluations of state demonstration programs, which were required by law to use such methods, and were conducted out of one office. Experimental designs and designs using statistical controls are used to identify a program’s net impact on its objectives where external factors are also known to affect its outcome. However, without knowing the circumstances of many of the programs being evaluated, we could not determine the adequacy of the designs used to assess program effectiveness. At least 40 percent of the studies employed existing program records in their evaluations, while about one-quarter employed special surveys or other ad hoc data-collection methods specially designed for the studies. Two-fifths (40 percent) of the studies used data from program administrative records that were produced and reported at the federal level; over a quarter (28 percent) used data from routinely produced, but not typically reported, program records; 5 percent of the studies used data from administrative records of other federal agencies; and 14 percent used administrative records from state programs. Because a single study could draw on several types of data sources, these figures together suggest a heavy reliance on administrative and other program-related data. (See table 5.) The primary reported purpose of the studies was to evaluate ongoing programs, either on an office’s own initiative or at the request of top agency officials. In the survey, most officials conducting evaluations reported having both a formal and an ad hoc planning process for deciding what evaluation work they would do. Respondents indicated many criteria for selecting which studies to do (such as a program office request, congressional interest, or continuation or follow-up of past work), but the criterion most often cited was high-level agency officials’ interest in the program or subject area. 
Moreover, about one-fourth of the studies were requested by top agency officials, and about one-fourth were reported to be self-initiated. Most offices were not conducting studies for the Congress or as the result of legislative mandates; only 17 percent of the studies were reported to have been requested in those ways. (See table 6.) For those offices reporting that they conducted studies, about half of the 570 studies for which we have information evaluated ongoing programs. Ongoing programs of all sizes were evaluated, ranging in funding from less than $10 million to over $1 billion. About one-third of these studies evaluated demonstration programs, and many of those programs had funding under $10 million. In contrast, few evaluations of new programs were reported, and many of these new programs reportedly were small (with funding under $10 million). Program evaluation was reported to be used more often for general program improvement than for direct congressional oversight. The studies’ primary uses were most often said to be to improve program performance (88 percent), assess program effectiveness (86 percent), increase general knowledge about the program (62 percent), and guide resource allocation decisions within the program (56 percent). (See table 7.) Accordingly, these offices overwhelmingly (over three-fourths of respondents) reported program managers and higher-level agency officials as the primary audience of their studies. (See table 8.) About one-third of the offices reported support for budget requests as a primary use, and one-third reported congressional audiences as primary users of their studies. Fewer respondents (20 percent) reported program reauthorization as a primary use of the study results. (See tables 7 and 8.) Program evaluation was not the primary responsibility of most of these offices, and the offices often reported “seldom, if ever” performing the program evaluation roles we asked about. 
The role most likely to be characterized as “most often performed” was conducting studies of programs administered elsewhere in the office’s agency. (See table 9.) About one-half of those who responded reported “sometimes” providing technical or design assistance to others or conducting joint studies, while a few offices saw their role as training others in research or evaluation methods. One office dealing with an evaluation mandate conducted work sessions with state and local program managers and evaluators and provided training to enhance state evaluation capabilities. Two-thirds of the offices seldom, if ever, designed evaluations conducted by other offices or agencies, trained others in research or evaluation methods, or approved plans for studies by others. Some of our interviewees thought that recent governmentwide reforms would increase interest in learning the results of federal programs and policies but would also complicate the task of obtaining that information. Devolution of federal program responsibility in the welfare and health care systems has increased interest in evaluation because the reforms are major and largely untested. However, in part because programs devolved to the states are expected to operate quite diversely across the country, some evaluation officials noted that evaluating the effects of these reforms was expected to be more difficult. In addition, federal budget reductions over the past few years were said by some not only to have reduced the level of federal evaluation activity but also to have diminished agency technical capacity through the loss of some of their most experienced staff. Because implementation of the Results Act’s performance reporting requirements is not very far along (the first annual reports on program performance are not due until March 2000), several of our interviewees thought it was too early to estimate the effect of the Results Act. 
Some hoped the Act would increase the demand for results information and expand the role of data and analysis in decisionmaking. One interviewee thought it would improve the focus of the evaluations his or her office now conducts. A few evaluation officials were concerned that a large investment would be required to produce valid and reliable outcome (rather than process) data. A few also noted that resources for obtaining data on a greatly expanded number of program areas would compete for the funds used for more in-depth evaluations of program impact. Other evaluators noted that changes in the unit of analysis for performance reporting, from the program level to the budget account or organizational level, might make classic program evaluation models obsolete. As we previously reported, the federal program officials who had already begun implementing performance measurement appeared to have an unusual degree of program evaluation support and found it quite helpful in addressing the analytic challenges of identifying program goals, developing measures, and collecting data. Many of these program officials said they could have used more of such assistance; but, when asked why they were not able to get the help they needed, the most common response was that it was hard to know in advance that evaluation expertise would be needed. In addition to using program evaluation techniques to clarify program goals and develop reliable measures, several of these program officials saw the need for impact evaluations to supplement their performance data. Their programs typically consisted of efforts to influence highly complex systems or events outside government control, where it is difficult to attribute a causal connection between their program and its desired outcomes. 
Thus, without an impact evaluation or similar effort to separate the effects of their programs from those of other external events or factors, program officials from the previous study recognized that simple examination of outcome measures may not accurately reflect their programs’ performance. Some states’ experiences suggested that performance measurement will take time to implement, and the federal experience suggested that it will not supplant the need for effectiveness evaluations. Two state officials described a multiyear process to develop valid and reliable measures of program performance across the state government. While performance measures were seen as useful for program management, some state agency and legislative staff also saw a continuing need for evaluations to assess policy impact or address problems of special interest or “big-picture” concerns, such as whether a government program should be continued or privatized. NPR was seen by several of those we interviewed as not having much of an effect on efforts to evaluate the results of their programs beyond increasing the use of customer surveys. This may have been because NPR was seen as primarily concerned with internal government operations or because, as one agency official reported, its effect was most noticeable in only a few areas: regulatory programs and other intergovernmental partnerships. However, one agency official said that NPR had a big impact on reorienting their work toward facilitating program improvement, while two others felt that it reaffirmed changes they had already begun. Given constraints on federal budgets, the officials we interviewed generally did not expect federal evaluation resources to rise to meet demand, so they described efforts to leverage and prioritize available resources. 
One evaluation official reported supplementing his evaluation teams with consultants but expressed concern that staff reductions had left the unit’s technical expertise too weak to competently oversee the consultants’ work. Another evaluation official explained that the office responded to the increasing demand for information by narrowing the focus and scope of its studies to include only issues with major budget implications or direct implications for agency action. A state official and two external evaluation experts felt that states grappling with new program responsibilities would have difficulty evaluating them as well, so that continued federal investment would be needed. A federal official, however, noted that private foundations could fund the complex, rigorous studies needed to answer causal questions about program results. Some of the evaluators we interviewed expected that fewer impact studies would be done. Some expected that the range of their work might broaden to rely on less rigorous methods and include alternatives such as monitoring program performance and customer satisfaction. From our interviews, we learned that a few agencies have devolved responsibility for evaluations to the program offices, which may have more interest in program improvement. Another agency reported that it had built evaluation into its routine program review system, which provides continuous information on the success of the program and its outcomes, noting that this reduced the need for special evaluation studies. One evaluation official reported that redefining evaluation as part of program management made it more acceptable in his agency because it no longer appeared to be overhead. A few agencies reported that they were adapting elements of their existing program information systems to yield information on program results. 
But in other agencies, evaluation officials and external experts thought that existing systems were primarily focused on program process rather than results; one evaluation official said that structural changes to, and a major investment in, the agency’s data systems would be required to provide valid and meaningful data on results. As program responsibility shifts to state and local entities, evaluation officials and others we interviewed described the need for study designs that can handle greater contextual complexity, new ways to measure outcomes, and partnerships with the programs’ stakeholders. One official saw classical experimental research designs as no longer feasible because, with the increased flexibility states have in how to deliver services, such programs no longer represent a discrete national program and are unlikely to employ rigorous evaluation techniques that entail random assignment of particular program services to individuals. Others noted the need to develop evaluation designs that could reflect the multiple levels on which programs operate and the organizational partnerships involved. To address some of these complexities, federal offices with related program interests have formed task groups to attempt to integrate their research agendas on the effects of major changes in the health and welfare systems. Similarly, a few federal evaluation officials reported an interest in consulting with their colleagues in other federal offices to share approaches for tackling the common analytic problems they faced. In other strategies, federal evaluation officials described existing or planned efforts to change the roles they and other program stakeholders played in conducting evaluations. One agency has arranged for the National Academy of Sciences to work with state program officials and the professional communities involved to help build a prototype performance measurement system for federal assistance to state programs. 
One evaluation office expects to shift its role toward providing more technical assistance to local evaluators and synthesizing their studies’ results. Another federal office has delegated some evaluation responsibility to the field while it synthesizes the results to answer higher-level policy questions, such as which types of approaches work best. The Results Act recognizes and encourages the complementary nature of program evaluations and performance measures by asking agencies to provide a summary of program evaluation findings along with performance measurement results in their annual performance reports. One federal evaluation official said his agency had efforts under way to “align” program evaluation and performance measurement through, for example, planning evaluations so that they will provide the performance data needed. But the official also expressed concern about how to integrate the two types of information. Officials from states that had already begun performance measurement and monitoring said they would like to see the federal government provide more leadership by (1) providing a catalog of performance measures available for use in various program areas and (2) funding and designing impact evaluations to supplement their performance information. Seeking to improve government performance and public confidence in government, the Results Act has instituted new requirements for federal agencies to report on their results at the same time that other management reforms may complicate the task of obtaining such information. Comparison of current federal program evaluation resources with the anticipated challenges leads us to several conclusions. First, federal agencies’ evaluation resources have important roles to play in responding to increased demand for information on program results, but—as currently configured and deployed—they are likely to be challenged to meet these future roles. 
It is implausible to expect that, by simply conducting more program evaluation studies themselves, these offices can produce data on results across all activities of the federal government. Moreover, some agencies reported that they had reduced their evaluation resources to the point that the remaining staff feel unable to meet their current responsibilities. Lastly, the devolution of some program responsibilities to state and local governments has increased the complexity of the programs they are being asked to evaluate, creating new challenges. Second, in the future, carefully targeting and reshaping the use of federal evaluation resources and leveraging federal and nonfederal resources show promise for addressing the most important questions about program results. In particular, federal evaluators could assist program managers to develop valid and reliable performance reporting by sharing their expertise through consultation and training. Early agency efforts to meet the Results Act’s requirements found program evaluation expertise helpful in managing the numerous analytical challenges involved, such as clarifying program goals and objectives, developing measures of program outcomes, and collecting and analyzing data. In addition, because performance measures will likely leave some gaps in needed information, strategic planning for future evaluations might strive to fill those gaps by focusing on those questions judged to have the most policy importance. In many programs, performance measures alone are not sufficient to establish program impact or the reasons for observed performance. Program evaluations can also serve as valuable supplements to program performance reporting by addressing policy questions that extend beyond or across program borders, such as the comparative advantage of one policy alternative to another. 
Finally, without coordination, it is unlikely that the increasingly diverse activities involved in evaluating an agency’s programs will efficiently supplement each other to meet both program improvement and policymaking information needs. As some agencies devolve some of the evaluations they conducted in the past to program staff or to state and local evaluators, they run the risk that, because of differences in evaluation resources and questions, data from several independently conducted studies may not be readily aggregated. Thus, for such devolution of evaluation responsibility to provide an overall picture of a national program, those evaluations would have to be coordinated in advance. Similarly, as federal agencies increasingly face common analytic problems, they could probably benefit from cross-agency discussion and collaboration on approaches to those problems. The Director of OMB commented on a draft of this report and generally agreed with our conclusions. OMB noted that other countries are experiencing public sector reforms that include a focus on results and increasing interest in program evaluation. OMB also provided technical comments that we have incorporated as appropriate throughout the text. OMB’s comments are reprinted in appendix II. We are sending copies of this report to the Chair and Ranking Minority Member of the House Committee on Government Reform and Oversight, the Director of OMB, and other interested parties. We will also make copies available to others on request. Please contact me or Stephanie Shipman, Assistant Director, at (202) 512-7997 if you or your staff have any questions. Major contributors to this report are listed in appendix III. The 23 federal executive agencies (13 cabinet-level departments and 10 independent agencies) that we surveyed are listed as follows. These represent 23 of the 24 executive agencies (we excluded the Department of Defense) covered by the Chief Financial Officers Act. 
The 24 represent about 97 percent of the executive branch’s full-time staff and account for over 99 percent of the federal government’s outlays for fiscal year 1996. To identify the roles and resources expended on federal program evaluation, we surveyed all offices (or units) in the 23 executive branch departments and independent agencies that we identified as conducting evaluation in fiscal year 1995. We defined evaluation as systematic analysis using objective measures to assess the results or the effects of federal programs, policies, or activities. To identify these evaluation offices, we (1) began with the list of evaluation offices that we surveyed in 1984, (2) added offices based on a review of office titles implying analytical responsibilities and on discussions with experts knowledgeable about evaluation studies, and (3) talked with our liaison staff and other officials in the federal departments and agencies to ensure broad yet appropriate survey coverage. In some instances, the survey was distributed to offices throughout an agency by agency officials, while in other instances we sent the survey directly to named evaluation officials. We attempted to survey as many evaluation offices as possible; however, in some cases, we may not have been told about or directed to all such offices. Therefore, we cannot assume that we identified all offices that conducted program evaluation studies in fiscal year 1995. Overall, we received about 160 responses, of which 81 were from offices that conducted such studies. The survey was directed toward results-oriented evaluation studies, such as formal impact studies, assessments of program results, and syntheses or reviews of evaluation studies. We sought to exclude studies that focused solely on assessing client needs, describing program operations or implementation, or assessing fraud, compliance, or efficiency. 
However, we allowed the individual offices to (1) define “program,” since a federal program could be tied to a single budget account, represent a combination of several programs, or involve several state programs, and (2) determine whether or not they conducted this type of study; offices that did not could exempt themselves from completing the survey. We did not verify the accuracy of the responses provided by evaluation units. We also had some information on fiscal year 1996 activities but did not report those results because they were comparable to the fiscal year 1995 results. Some respondents were unable to complete different parts of the survey. About one-third of the respondents did not report the office’s budget, its number of full-time equivalent (FTE) staff, cost information about studies, or the sources of data used in the studies. For some questions, respondents were asked to answer in terms of the number of studies conducted, and we used the total number of studies indicated by all respondents to the question as the denominator when computing percentages. However, when the level of nonresponse to individual survey questions was above 20 percent or was unclear because of incomplete information on how many studies had been reported on, we used the full complement of 928 studies as the denominator to provide a conservative estimate. The questions for which we reported results are reproduced on the following pages.

Committee on Governmental Affairs, United States Senate. “Government Performance and Results Act of 1993.” Report No. 103-58, June 16, 1993.

Evaluation Practice. “Past, Present, Future Assessments of the Field of Evaluation.” Entire issue. M.F. Smith, ed., Vol. 15, No. 3, Oct. 1994.

Martin, Margaret E., and Miron L. Straf (eds.). Principles and Practices for a Federal Statistical Agency. Washington, D.C.: National Academy Press, 1992.

National Performance Review. “Mission-Driven, Results-Oriented Budgeting.” Accompanying Report of the National Performance Review of the Office of the Vice President, Sept. 1993.

New Directions for Program Evaluation. “Evaluation in the Federal Government: Changes, Trends, and Opportunities.” Entire issue. C.G. Wye and R. Sonnichsen, eds. No. 55, Fall 1992.

New Directions for Program Evaluation. “Progress and Future Directions in Evaluation: Perspectives on Theory, Practice, and Methods.” Entire issue. Debra Rog and Deborah Fournier, eds. No. 76, Winter 1997.

Office of Evaluation and Inspections. Practical Evaluation for Public Managers: Getting the Information You Need. Washington, D.C.: Office of Inspector General, Department of Health and Human Services, 1994.

Public Law 103-62, Aug. 3, 1993, “Government Performance and Results Act of 1993.”

Wargo, Michael J. “The Impact of Federal Government Reinvention on Federal Evaluation Activity.” Evaluation Practice, Vol. 16, No. 3 (1995), pp. 227-237.

The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.19, Mar. 1998).

Balancing Flexibility and Accountability: Grant Program Design in Education and Other Areas (GAO/T-GGD/HEHS-98-94, Feb. 11, 1998).

The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997).

Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997).

Block Grants: Issues in Designing Accountability Provisions (GAO/AIMD-95-226, Sept. 1995).

Program Evaluation: Improving the Flow of Information to the Congress (GAO/PEMD-95-1, Jan. 30, 1995).

Management Reform: Implementation of the National Performance Review’s Recommendations (GAO/OGC-95-1, Dec. 5, 1994).

Public Health Service: Evaluation Set-Aside Has Not Realized Its Potential to Inform the Congress (GAO/PEMD-93-13, Apr. 1993).

Program Evaluation Issues (GAO/OCG-93-6TR, Dec. 1992).
“Improving Program Evaluation in the Executive Branch.” A Discussion Paper by the Program Evaluation and Methodology Division (GAO/PEMD-90-19, May 1990).

Program Evaluation Issues (GAO/OCG-89-8TR, Nov. 1988).

Federal Evaluation: Fewer Units, Reduced Resources, Different Studies from 1980 (GAO/PEMD-87-9, Jan. 23, 1987).
GAO reviewed federal agencies' efforts to provide information on federal program results, focusing on: (1) the current resources and roles for program evaluation in federal agencies; (2) the anticipated effects of governmentwide reforms and other initiatives on evaluation of federal programs; and (3) potential strategies for agencies to respond to the anticipated effects and provide information on program results. GAO noted that: (1) existing federal evaluation resources--at least as currently configured and deployed--are likely to be challenged to meet increasing demands for program results information; (2) agencies reported devoting variable but relatively small amounts of resources to evaluating program results; (3) moreover, agencies reported that the primary role of program evaluation was internally focused on program improvement, rather than direct congressional or other external oversight; (4) interest in the program by high-level officials was most often cited as a criterion for initiating evaluation work; a small portion of studies were said to be conducted for a congressional committee or in response to a legislative mandate; (5) some of the evaluation officials and experts that GAO interviewed anticipated not only increased interest in learning the results of federal programs and policies but also additional complications in obtaining that information; (6) some evaluation officials from states with performance measurement experience noted that effectiveness evaluations would continue to be needed to assess policy impact and address problems of special interest or larger policy issues, such as the need for any government intervention at all in an area; (7) to meet the anticipated increase in demand for program results information as well as the associated technical challenges, some evaluation officials GAO interviewed described efforts to leverage both federal and nonfederal resources; (8) however, some agencies anticipated that major investments in their 
data systems would be required to produce reliable data on program outcomes; and, in a prior study, program officials were concerned that reliance on less rigorous methods would not provide an accurate picture of program effectiveness; (9) moreover, while some federal evaluation officials envisioned providing increased technical assistance to state and local evaluators, a few state evaluation officials suggested an alternative strategy for the federal government; (10) GAO drew several conclusions from its comparison of current federal evaluation resources with the anticipated challenges to meeting increased demand for information on program results; (11) federal evaluation resources have important roles to play in responding to increased demand for information on program results, but--at least as currently configured and deployed--they are likely to be challenged to meet that demand; and (12) in the future, carefully targeting federal agencies' evaluation resources shows promise for addressing key questions about program results.
The Federal Reserve, as the United States’ central bank, has primary responsibility for maintaining the nation’s cash supply. In carrying out this responsibility, Federal Reserve Banks perform various cash-related functions to meet the needs of the depository institutions served by the Federal Reserve Banks. At the 37 Federal Reserve Banks and Branches which make up the Federal Reserve System, the cash operations function is responsible for shipping cash to meet the needs of depository institutions, receiving shipments of new currency from the Bureau of Engraving and Printing, new coin from the U.S. Mint, and incoming deposits of excess and unfit currency and coin from depository institutions. In addition to maintaining custodial controls over the cash in its possession, each Federal Reserve Bank and Branch processes currency received from circulation and records and summarizes the various accounting transactions associated with these activities. While the 37 Federal Reserve Banks and Branches perform the same cash-related functions, they may use different systems and processes to manage and account for the cash under their control. The Federal Reserve Banks and Branches in three of the System’s 12 districts—Atlanta, Philadelphia, and San Francisco—use the Cash Automation System (CAS) to manage and account for cash under their control. CAS is an electronic inventory system which, among other features, tracks coin and currency activities and balances by denomination and identifies bank operating units with custodial responsibility for cash. Certain data maintained in CAS are used to provide daily updates to the bank’s general ledger system. CAS data are also used by bank officials to prepare monthly currency activity reports. These reports, which track each Federal Reserve Bank’s monthly currency activities and end-of-month vault balance, are used by the Federal Reserve Board to monitor currency activities across the Federal Reserve System. 
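CAS, as described above, is essentially an electronic inventory ledger that tracks balances by denomination, identifies the operating unit with custody, and supplies figures for daily general ledger updates. The following is a minimal sketch of such a ledger under a simplified custody model; the class and field names are hypothetical and do not reflect the actual CAS design.

```python
from collections import defaultdict

class CashInventory:
    """Minimal sketch of an electronic cash inventory ledger
    (hypothetical design, not the actual CAS). It tracks currency
    balances by denomination and by the operating unit holding
    custody, and logs activity so balances can be summarized for
    a daily general ledger update."""

    def __init__(self):
        # balances[unit][denomination] -> dollar amount in that unit's custody
        self.balances = defaultdict(lambda: defaultdict(int))
        self.activity_log = []

    def transfer(self, from_unit, to_unit, denomination, amount):
        """Move custody between units (e.g., receiving team to vault),
        keeping total accountability intact: what leaves one unit's
        custody must appear in another's."""
        self.balances[from_unit][denomination] -= amount
        self.balances[to_unit][denomination] += amount
        self.activity_log.append((from_unit, to_unit, denomination, amount))

    def unit_balance(self, unit):
        """Total dollar value currently charged to a unit's custody."""
        return sum(self.balances[unit].values())
```

In this sketch a deposit might flow from a carrier to a receiving team and then to the vault; at any point, the sum of internal unit balances equals the cash for which the bank is accountable.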
In September 1996, we reported on the results of a review of currency activity reports prepared by the Los Angeles Branch. The review responded to concerns about reported inaccuracies in certain of the bank’s monthly currency activity reports. The review’s objectives were to determine the nature of currency reporting inaccuracies and review actions intended to resolve them. Our review found that certain data needed for the October through December 1995 currency activity reports were forced to ensure that the reports agreed with the Los Angeles Branch’s end-of-month balance sheet. As a result, according to analysis by a bank analyst, receipts from circulation were understated by $5.8 million in October, overstated by $61.8 million in November, and understated by $111 million in December. Our review noted problems with the reporting of currency activities which raised concern about the quality of the Los Angeles Reserve Branch’s internal control environment and potential CAS system limitations which could affect currency accounting and reporting. In response to the review’s findings and recommendations, the Federal Reserve Board took a number of immediate actions specific to the Los Angeles Branch including (1) revising policies and procedures for preparing the monthly currency activity report, (2) conducting an unannounced 100-percent count of the Los Angeles Branch’s currency and coin holdings and comparing the results to the bank’s balance sheet, and (3) conducting an internal review of the bank’s cash operations and related financial records. The Federal Reserve Board reported that (1) the results of the physical count confirmed that the Los Angeles Branch’s balance sheet accurately reflected its currency and coin holdings and (2) its examiners found that the accounting for the cash handled by the bank was accurate and that proper safeguards and controls existed to ensure the integrity of the bank’s financial records.
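The forcing problem described above can be framed as a failure of the accounting identity a monthly currency activity report should satisfy: the beginning vault balance plus the month's receipts, less payments and destruction, should equal the end-of-month balance sheet figure. A minimal illustrative check follows; the line items are simplified and hypothetical, not the actual report format.

```python
def currency_report_difference(begin_balance, receipts_from_circulation,
                               new_currency_received, payments_to_circulation,
                               currency_destroyed, end_balance_per_balance_sheet):
    """Return the unexplained difference between the ending vault balance
    implied by a month's reported activity and the end-of-month balance
    sheet. Line items are simplified and hypothetical. A nonzero result
    should trigger investigation; 'forcing' the report amounts to
    plugging this difference into a line item such as receipts instead
    of resolving it."""
    implied_end = (begin_balance
                   + receipts_from_circulation
                   + new_currency_received
                   - payments_to_circulation
                   - currency_destroyed)
    return end_balance_per_balance_sheet - implied_end
```

Under this framing, the misstated receipts figures in the October through December 1995 reports were the plugged amounts needed to drive this difference to zero.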
In addition to actions addressing the Los Angeles Branch’s currency reporting and controls, the Federal Reserve Board arranged for an external examination of internal control over cash operations at certain banks that use CAS to manage and account for cash operations—the subject of this report. Our September 1996 report recommended that, given the problems in preparing the currency activity report using CAS data in Los Angeles, the Federal Reserve Board require an external review of internal controls. In response to our recommendation, the Federal Reserve Board hired Coopers & Lybrand L.L.P., an independent public accounting firm, to examine and report on managements’ assertions about the effectiveness of the internal control structure over financial reporting and safeguarding for cash at three banks—the Federal Reserve Bank of Atlanta’s Home Office, the Federal Reserve Bank of San Francisco’s Los Angeles Branch, and the Federal Reserve Bank of Philadelphia. These banks represent 3 of the 12 cash operations located in the Reserve System which use CAS to provide inventory and management control and accounting for cash-related activities. Table 1 provides 1996 currency data on the relative size and volume of currency processing activities at the 3 locations covered by Coopers & Lybrand’s external examinations, the 12 which use CAS, and all 37 banks and branches. The objective of Coopers & Lybrand’s examinations was to opine on whether managements’ assertions on the effectiveness of internal controls were fairly stated based on the internal control criteria used by management. In performing its examinations and concluding on the reliability of managements’ assertions, Coopers & Lybrand performed an attest engagement which is governed by the AICPA’s Attestation Standards. The attestation standards provide both general and specific guidance which is intended to enhance the consistency and quality of these engagements.
The attestation standards consist of general, fieldwork, and reporting standards which apply to all attestation engagements and individual standards which apply to specific types of attestation engagements. The attestation standards supplement existing auditing standards by reinforcing the need for technical competence, independence in attitude, due professional care, adequate planning and supervision, sufficient evidence, and appropriate reporting. In addition to the general, fieldwork, and reporting attestation standards, Coopers & Lybrand’s examinations at the three Reserve Banks were also subject to requirements of a specific attestation standard—Statement on Standards for Attestation Engagements No. 2, Reporting on an Entity’s Internal Control Structure Over Financial Reporting. This standard provides guidance on planning, conducting, and reporting on the engagement, including evaluating the design and operating effectiveness of internal controls. A key provision of the standard is that management use reasonable control criteria which have been established by a recognized body in evaluating the internal control structure’s effectiveness. This requirement ensures that management uses commonly understood and/or accepted control criteria in concluding on the internal control structure’s effectiveness and that the practitioner uses the same criteria in forming an opinion on management’s assertion. Management for each of the Federal Reserve Banks covered by Coopers & Lybrand’s examinations based their assessments of internal control effectiveness on criteria contained in the Internal Control-Integrated Framework issued in September 1992 by the Committee on Sponsoring Organizations of the Treadway Commission (COSO).
To develop a broad understanding of internal control and establish standards for assessing its effectiveness, COSO developed a structured approach—the Integrated Framework—which defines internal control and describes how it relates to an entity’s operations. Internal control represents the process, designed and operated by an entity’s management and personnel, to provide reasonable assurance that fundamental organizational objectives are achieved. The Integrated Framework describes internal control in terms of objectives, essential components of internal control, and criteria for assessing internal control effectiveness. Internal control objectives—what internal controls are intended to achieve—fall into three distinct but overlapping categories: operations—relating to effective and efficient use of an entity’s resources; financial reporting—relating to preparing reliable financial statements; and compliance—relating to an entity’s compliance with laws and regulations. Safeguarding controls are a subcategory within each of these control objectives. Safeguarding controls—those designed to prevent or promptly detect unauthorized acquisition, use, or disposition of an entity’s resources—are primarily operations controls. However, certain aspects of safeguarding controls can also be considered compliance and financial reporting controls. When legal or regulatory requirements apply to use of resources, operations controls designed to safeguard the efficient and effective use of resources also address compliance objectives. Similarly, controls designed to ensure that losses associated with the use or disposition of resources are properly recognized and reflected in the entity’s financial statements also address financial reporting objectives. In May 1994, COSO issued an addendum to its Integrated Framework to provide specific reporting guidance on controls concerning safeguarding of assets.
COSO stated that there is a reasonable expectation that a management report will cover not only controls to help ensure that transactions involving an entity’s assets are properly reflected in the financial statements, but also controls to help prevent or promptly detect unauthorized acquisition, use, or disposition of the underlying assets. COSO believes it is important that this expectation be met. The addendum provided suggested wording for management’s report on internal control over financial reporting to also specifically state safeguarding of assets when covered by management’s report. Internal control, as described in the Integrated Framework, consists of five essential and interrelated components: control environment, risk assessment, control activities, information and communication, and monitoring. The control environment represents the control consciousness of an entity, its management, and staff. Risk assessment refers to the awareness and management of relevant internal and external risk associated with achieving established objectives. Control activities represent the operating policies and procedures designed to help ensure that management’s directives—desired actions intended to address risks—are carried out. Information and communication refers to the need for relevant and useful information to be communicated promptly to management and staff for use in carrying out their responsibilities. The monitoring component refers to the need to monitor and assess over time the effectiveness of internal control policies and procedures in achieving their intended objectives. The nature and extent to which an entity’s internal control structure incorporates the five control components represent criteria that can be used in assessing the internal control effectiveness of operating, financial reporting, and compliance controls. Management can assess and report on the effectiveness of any of the three categories of control objectives. 
Internal controls can be judged effective if, for each category of control objective reported on, management has reasonable assurance that each of the five control components has been effectively incorporated into the entity’s internal control structure. COSO recognized that determining effectiveness was a subjective judgment. Similarly, with respect to effectiveness of safeguarding controls, controls can be judged effective if management has reasonable assurance that unauthorized acquisition, use, or disposition of an entity’s assets that could have a material effect on the financial statements are being prevented or detected promptly. For each examination, Coopers & Lybrand concluded that Federal Reserve Bank management fairly stated its assertion that the bank maintained an effective internal control structure over financial reporting and safeguarding for cash as of the date specified by management based on criteria established in the Internal Control—Integrated Framework issued by COSO. Coopers & Lybrand’s examinations were conducted at different times during the late summer and fall of 1996 because management for each of the three Reserve Banks made their assertions about the effectiveness of internal controls as of different specified dates (Atlanta, September 30, 1996; Los Angeles, August 31, 1996; and Philadelphia, October 31, 1996). In making an assertion as of a point in time, the scope of management’s assessment of internal controls is limited to the design and operating effectiveness of internal controls in place on the date of management’s assertion. In addition to its positive conclusions on the reliability of management’s assertion on the effectiveness of financial reporting and safeguarding controls, Coopers & Lybrand’s report contains standard language related to the inherent limitations in any internal control structure and projections of results of any internal control structure evaluation to other periods. 
This language, required by the AICPA’s Attestation Standards, is intended to remind readers that (1) internal controls, no matter how well designed and operated, can provide only reasonable assurance that internal control objectives are achieved, and (2) projections of the results of any internal control structure evaluation to any other period are subject to the risk that the internal control structure may be inadequate because of changes in conditions, or the degree of adherence to policies and procedures may deteriorate. To perform our work, we met with Federal Reserve officials and the Coopers & Lybrand partner and audit manager responsible for the examination and discussed the nature of the examination of internal controls over financial reporting and safeguarding for cash. We also discussed the applicable attestation standards and internal control criteria used by the firm in conducting the examination. We reviewed the applicable attestation standards and evaluation criteria (Internal Control—Integrated Framework issued by COSO) used by the bank’s management and Coopers & Lybrand to assess the effectiveness of internal controls over financial reporting and safeguarding for cash. We also reviewed the Coopers & Lybrand working papers supporting its opinions on internal controls at the Atlanta, Los Angeles, and Philadelphia Federal Reserve Banks. We looked for evidence that the work had been planned and performed in accordance with applicable attestation standards. We also looked for evidence that Coopers & Lybrand’s work addressed the applicable internal control criteria. Where necessary, we obtained additional understanding of the procedures performed through discussions with the partner and audit manager of Coopers & Lybrand.
Where Coopers & Lybrand’s working papers indicated that it used work performed by the Federal Reserve Banks’ General Auditors with respect to electronic data processing controls, we conducted interviews with the General Auditor staff for the three banks and Federal Reserve Automation Services and reviewed their applicable internal audit working papers. We visited the Atlanta, Los Angeles, and Philadelphia banks to enhance our understanding of the respective internal control structures over financial reporting and safeguarding for cash. During our visits, which took place during April and May 1997, we observed the processes and internal controls in the respective bank’s cash department that had been identified and documented by Coopers & Lybrand, and held discussions with management and staff of the cash department and the internal audit department. We performed our work from January 1997 through June 1997. Our review was performed in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Federal Reserve System Board of Governors. On August 1, 1997, the Board of Governors of the Federal Reserve System provided us with comments that are included in appendix II and discussed in the agency comments section of this report. In performing its examinations, Coopers & Lybrand (1) obtained an understanding of the procedures and internal controls, (2) evaluated the design effectiveness of the controls, (3) tested and evaluated the operating effectiveness of the controls, and (4) formed opinions about whether managements’ assertions regarding the effectiveness of the internal controls were fairly stated, in all material respects, based on the COSO control criteria. Internal controls usually involve two elements: a policy establishing what should be done and procedures to effect the policy. 
The procedures include a range of activities such as approvals, authorizations, verifications, reconciliations, physical security, and separation of duties. Coopers & Lybrand found that the Federal Reserve has developed custody control standards and procedures that provide a framework for establishing systems of internal controls to protect cash processed and stored at the banks. Coopers & Lybrand’s working papers described the cash operating process the banks followed in managing, controlling, and accounting for cash operations. This process is broken down into four major areas: (1) receiving/shipping of cash, (2) processing of currency to check the accuracy of deposits from depository institutions, identify counterfeit currency, and determine the currency’s fitness for recirculation, (3) vault storage of cash, and (4) cash administration. The cash operations followed by the banks are discussed in more detail in appendix I. Coopers & Lybrand’s work focused on the internal controls designed to properly record, process, and summarize transactions to permit the preparation of reliable financial statements and to maintain accountability for assets (financial reporting controls) and safeguard assets against loss from unauthorized acquisition, use, or disposition (safeguarding controls). These controls include two categories of information system control activities which serve to ensure completeness, accuracy, and validity of the financial information in the system. In order to determine whether the internal controls provided reasonable assurance that losses or misstatements material in relation to the financial statements would be prevented or detected as of the date of management’s assertion, Coopers & Lybrand tested the operating effectiveness of the internal controls. The testing methods included observation, inquiry, and inspection. No one specific control test is necessary, applicable, or equally effective in every circumstance. 
Generally, a combination of these types of control tests is performed to provide the necessary level of assurance. The types of tests performed for each control activity are determined by the auditor using professional judgment and depend on the nature of the control to be tested and the timing of the control test. For example, documentation of some control activities may not be available or relevant and evidence about the effectiveness of operation is obtained through observation or inquiry. Also, some activities, such as those relating to the resolution of exception items, may not occur on the date that the auditor is conducting the tests. In those cases, the auditor needs to inquire about the procedures performed when exceptions occur. Observation tests are conducted by observing entity personnel actually performing control activities in the normal course of their duties. For example, Coopers & Lybrand observed the physical separation between the carriers and the receiving and shipping teams, the use of locks and seals on the containers used for storing currency, and the preparation of the end of day proof by each of the teams. In currency processing, Coopers & Lybrand observed preparation of the processing unit proof, transfer of currency to and from the processing teams, and processing team operations. Observation of processing operations documented in their working papers included the handling of currency rejected by the high speed machine and its processing on the slower speed machine, and the physical transfer of rejected currency from the processing team to the cancellation team. Inquiry tests are conducted by making either oral or written inquiries of entity personnel involved in the application of specific control activities to determine what they do or how they perform the specific control activity. 
For example, Coopers & Lybrand’s inquiries of bank personnel included asking about procedures performed when containers stored in the vault are found to have broken seals and when discrepancies in shipments are reported by the depository institutions. Inspection tests are conducted by examining documents and records for evidence (such as the existence of initials, signatures, or other indications) that a control activity was applied to those documents and records. Coopers & Lybrand used inspection to test controls such as the daily reconciliation of CAS and the general ledger system, the end of day proofs prepared by each team, vault inventories, and monitoring logs prepared by cash department management personnel. Similarly, Coopers & Lybrand tested computer controls through observation, inquiry, and inspection. For example, they observed the enforcement of physical access controls such as logging of visitors and video surveillance. They asked management about the control procedures over changes to the CAS program code and corroborated the information they were given by interviewing system users and application developers. They inspected a system log to verify that backup tapes were being produced on schedule. For many of the computer controls tests in their work program, Coopers & Lybrand consulted with Federal Reserve Bank’s General Auditors to gain an understanding of the computer controls and/or examined their working papers to further corroborate information that Coopers & Lybrand obtained through observation, inquiry, and inspection. In addition to other tests conducted by inspection, observation, and inquiry, the banks’ internal audit working papers evidenced tests based upon independent verification of compliance with computer control procedures. 
For example, the General Auditors for the Federal Reserve Bank in Philadelphia selected five days of work for each of five cash processing rooms and examined system reports and manual logs to verify that the high-speed currency processing machines were tested daily and that they returned acceptable results before being put into production. The results of our review disclosed no instances in which Coopers & Lybrand did not comply, in all material respects, with the AICPA’s Attestation Standards in the work described above. We found that Coopers & Lybrand’s working papers adequately documented that it had planned, performed and supervised the work. The working papers contained evidence that the auditor had an appropriate level of knowledge about the Federal Reserve Banks and had considered relevant work from prior years’ audits, such as descriptions of the internal control structure. The scope of the examination was detailed in a written engagement letter. We found that the work was performed by staff who were independent with respect to the Federal Reserve Banks and had adequate experience. Also, the working papers evidenced that the staff had been properly supervised. For example, key working papers were reviewed by the Audit Manager and Partner. We found that Coopers & Lybrand used audit tools to assist it in documenting the internal controls for each of the processes included in cash operations. For example, its auditors prepared worksheets which identified internal control objectives, the related risks and the control activities designed to address the objectives. Also, they prepared work programs which described the procedures to be performed to test the control activities, and they documented the results of their tests in written working papers. They used similar audit tools for their review of computer controls, documenting in their working papers the control objectives to be tested, the procedures performed, and their conclusions. 
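The kind of independent verification performed by the Philadelphia General Auditors, which checks that, for each sampled day, the high-speed machines were tested with acceptable results before production began, can be sketched as follows. The log structure and field names are hypothetical, purely for illustration.

```python
def verify_daily_machine_tests(test_log, sampled_days):
    """Sketch of a sampled control test: for each sampled production
    day, confirm a currency-processing machine was tested, returned
    acceptable results, and was tested before production began.
    Field names are hypothetical. Returns a list of exceptions;
    an empty list means the control operated as intended."""
    exceptions = []
    for day in sampled_days:
        record = test_log.get(day)
        if record is None:
            exceptions.append((day, "no test recorded"))
        elif not record["test_passed"]:
            exceptions.append((day, "test results not acceptable"))
        elif record["test_time"] > record["production_start"]:
            exceptions.append((day, "production began before testing"))
    return exceptions
```

Such a check corroborates, from system reports and manual logs, that the control activity actually operated on the sampled days rather than merely existing on paper.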
In accordance with the attestation standards, the working papers contained written assertions made by management about the effectiveness of the bank’s internal controls and contained a written management representation letter. In commenting on a draft of this report, the Board of Governors of the Federal Reserve System fully concurred with our conclusion on Coopers & Lybrand’s work. The Board of Governors indicated that our conclusions are consistent with those of the Board’s Inspector General. Also, the Board of Governors noted that the financial controls in each Reserve Bank’s operations, including cash, will be evaluated on an ongoing basis as part of Coopers & Lybrand’s audit procedures in order to render an opinion on the financial statements. Further, the cash operations controls are reviewed regularly by the Banks’ internal auditors, the Board’s financial examiners, Board staff who conduct periodic operations reviews of Reserve Bank cash functions, and by the Department of the Treasury through its reviews of currency destruction activities. We are sending copies of this report to the Chairman of the Board of Governors of the Federal Reserve System; the Secretary of the Treasury; the Chairman of the House Committee on Banking and Financial Services; the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; and the Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-9406 if you or your staff have any questions. Major contributors to this report are listed in appendix III. As the United States’ central bank, the Federal Reserve has primary responsibility for maintaining the nation’s cash supply. In carrying out this responsibility, Federal Reserve Banks perform various cash-related operations. At the 37 Federal Reserve Banks and Branches, the cash operations function is responsible for receiving new coin from the U.S.
Mint, new currency from the Bureau of Engraving and Printing, cash from depository institutions, currency processing, safeguarding cash held on deposit, and shipping cash to meet the needs of depository institutions. In addition, Federal Reserve Banks must record and summarize the various accounting transactions associated with their cash-related activities. While each Federal Reserve Bank performs the same basic cash-related functions, banks may use different systems and procedures to manage and account for the cash under their control. Federal Reserve Banks in Atlanta, Los Angeles, and Philadelphia use the Cash Automation System (CAS) to provide inventory, safeguarding, and accounting control over currency processing. CAS is an electronic inventory system which, among other features, tracks coin and currency activities and balances by denomination, and identifies bank operating units with custodial responsibility for cash. Certain data maintained in CAS are used to provide daily updates to the Federal Reserve’s general ledger system. CAS data are also used by Federal Reserve officials to prepare monthly currency activity reports. In addition to CAS, the three Federal Reserve Banks use procedural controls to safeguard cash and account for processing-related activities. These controls include restricted access, joint custody, segregation of processing-related duties, video surveillance cameras, supervisory review, and monitoring. Presented below is a general description of the cash operations functions at the three Federal Reserve Banks examined by Coopers & Lybrand. While the description focuses on currency operations, the handling and control procedures over coin are similar to those for currency, with a few notable differences. 
For example, coins are handled in bags and their content is verified through a weighing process, while currency notes received from depository institutions are individually checked by high-speed equipment for accuracy, fitness, and authenticity. Also, coin is stored in a separate vault from currency. Each work day, depository institutions may notify Federal Reserve Banks electronically of currency that they are depositing with or ordering from each bank. The notification includes the dollar amount and denominational breakdown for the deposit or order. The cash is transported between the Federal Reserve Banks and depository institutions by armored carriers which enter the bank buildings through secured entrances. To ensure the integrity of the currency received from or transferred to the carriers, the Federal Reserve Banks use a minimum of two-person receiving or shipping teams. These teams are always physically separated from the carriers as shipments are unloaded or loaded by the carriers. For example, carriers unload or load the currency into a glass-walled room (sometimes called an anteroom) which is bordered on one end by the carriers’ entrance and on the other end by the receiving or shipping room. Each anteroom has two sets of locking doors on either end. The receiving or shipping team cannot enter the anteroom when the carrier is unloading or loading currency. Currency transfers are accepted on a “said to contain” basis. Carriers verify currency transfers by checking the number and denomination of currency bags to see if they match the stated contents on the manifests. When currency is received by a Federal Reserve Bank, the receiving team counts the number of bags received from each depository institution and independently compares this to the carrier’s manifest before accepting the currency from the carrier. Subsequently, the receiving team counts the bundles of currency to verify the total amount received. 
These counts of the number of bundles received for each denomination are performed independently by each team member. The team members then independently put their counts into CAS where they are compared to each other and to the deposit notification received from the depository institution. If the counts match, the depository institution automatically receives credit for the shipment. If the counts do not match, the difference is investigated and must be resolved before the end of day closeout or reconciliation process can be completed. After the counts are completed, the currency is transferred to a vault in a sealed container where it is safeguarded until it goes through currency processing. When currency is being shipped to fill an order, the currency is transferred from the vault to a shipping team. The shipping team inspects the integrity of the seals on the containers prior to accepting accountability for the currency. The shipping team prepares the order by placing the currency in sealed bags. The team members independently count the order and put their counts into CAS where they are compared to each other and to the order notification received electronically from the depository institution. Because the carrier accepts the shipment on a said-to-contain basis, any discrepancies subsequently identified by the depository institution in the amounts of currency in the bags must be resolved with the Federal Reserve Bank that filled the order. At the end of each shift, each receiving and shipping team prepares a daily proof to ensure that all of the currency transferred to the team from a carrier or the vault is accounted for either in the team’s ending inventory or through transfers to the vault or carriers. Currency received from depository institutions is processed to check the accuracy of the deposit, identify counterfeit currency, and determine the currency’s fitness. 
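The independent-count control described above reduces to a simple comparison rule: credit is automatic only when both team members' counts agree with each other and with the depository institution's electronic notification. A minimal sketch, with hypothetical function and field names:

```python
def accept_deposit(count_a, count_b, deposit_notification):
    """Sketch of the independent-count control; names are hypothetical.
    Each argument maps denomination -> bundle count. Credit is granted
    automatically only when both team members' counts agree with each
    other and with the depository institution's electronic notification;
    otherwise the deposit is held and the difference must be resolved
    before the end-of-day closeout."""
    if count_a == count_b == deposit_notification:
        return ("credit_granted", [])
    all_denoms = set(count_a) | set(count_b) | set(deposit_notification)
    discrepancies = sorted(
        d for d in all_denoms
        if not (count_a.get(d) == count_b.get(d) == deposit_notification.get(d))
    )
    return ("hold_for_resolution", discrepancies)
```

Requiring two independent counts to match a third, externally supplied figure is what makes the control a separation-of-duties safeguard: no single person can both miscount and conceal the miscount.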
The processing takes place in glass-walled rooms which have numerous surveillance cameras and locked doors that enable the processing team to control access to each room and its contents. Processing teams are composed of either two or three members who share joint custody and accountability for the team’s currency holdings and processing activities. On a scheduled basis, the processing machines are tested to ensure they are performing within established tolerance levels. The tests consist of running currency test decks through the machines to determine whether they are correctly counting the notes, identifying and rejecting different denominations and counterfeit currency, and identifying and shredding soiled currency. Testing is performed by trained currency processing staff who are not directly involved in routine processing activities. The test results are tracked through automated output reports which are reviewed by the test operator and management. If the test results indicate the need for service, site engineers are available to service the machines. Test decks are only used for a specified number of tests after which the test decks are destroyed. Custody of the test decks is tracked in the CAS inventory and access is restricted through the use of locked storage containers. All currency received from circulation is processed initially on a high-speed machine which counts the notes and tests for denomination, soil level, and authenticity. One of three things can happen to individual currency notes as they are processed on the high-speed machine. Currency which passes the machine’s various tests is considered fit for recirculation and is repackaged with a new currency strap which identifies the Federal Reserve Bank, the processing team, and the date the currency was processed. 
Currency failing only the soil test is shredded on-line by the high-speed machine which generates output reports that track the number and denomination of currency shredded during the shift. The high-speed machine also rejects currency for incorrect denomination, questionable soil levels, and/or potential counterfeit. This currency undergoes further processing to check denomination and authenticity on a slower speed machine. Differences in count are tracked by the automated output reports and recorded in CAS as adjustments to the depository institution’s deposit. Depository institutions are notified—via a written adjustment advice—of changes to their previously recorded deposit amounts. Rejected currency is transferred to a slower machine for further processing and inspection along with the straps that identify the depository institution that packaged the currency. The operator enters the rejected currency into the slower speed machine where it is retested for denomination, soil level, and counterfeit. Currency which passes the retests is shredded on-line and tracked in automated output reports. Currency which fails any one of the retests is rejected by the slower speed machine. The rejects, along with the cause for the rejections, are tracked and separately reported in automated output reports. These reports are also used to adjust the depository institution’s account with the Federal Reserve Bank for the amount of the difference. Currency rejected by the slower speed machine is sorted for off-line destruction or transfer. Counterfeit items are stamped “Counterfeit” and transferred daily from the processing team to independent clerks who examine, count, and collect counterfeit currency for shipment to the U.S. Secret Service for follow-up and analysis. Currency rejected for denomination and soil level is transferred daily to a separate team for cancellation and subsequent off-line destruction. 
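The three possible outcomes of high-speed processing described above amount to a simple routing decision. A hypothetical sketch (the test names and routing labels are ours, not the Banks' software):

```python
# Illustrative sketch of the high-speed machine's three-way note sort;
# the parameter names and routing labels are hypothetical.

def route_note(correct_denomination: bool, authentic: bool, fit: bool) -> str:
    """Route a note according to the outcome of the machine's tests."""
    if correct_denomination and authentic:
        if fit:
            return "recirculate"  # repackaged with a new currency strap
        return "shred"            # fails only the soil test: destroyed on-line
    return "reject"               # retested on the slower speed machine

print(route_note(True, True, True))    # recirculate
print(route_note(True, True, False))   # shred
print(route_note(True, False, True))   # reject
```

Notes routed to "reject" go through the same tests again on the slower speed machine, where a second failure determines off-line destruction or transfer to the Secret Service.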
In the presence of the processing team, a cancellation team counts and accepts the transfer of the rejected currency for cancellation. The transfer is recorded in the CAS system. The team takes the rejected currency in a locked container to a cancellation room where the currency is cancelled by punching bank-specific-shaped holes into the currency. The cancellation process is monitored by an independent observer who also monitors the transfer of the cancelled currency to a separate off-line destruction team. Upon verification and approval by the off-line destruction team, the transfer of cancelled currency is recorded in CAS. Off-line destruction occurs periodically throughout the week and is monitored by an independent observer who counts the number and denomination of the currency straps to be destroyed and matches it to the strap count performed by the off-line destruction team. In addition, the destruction team and independent observer follow prescribed policies which include sample counts of individual low value currency notes and a 100 percent count of higher value currency notes. Once this count is completed, the off-line destruction team, along with the independent observer, takes the cancelled currency to a special room where it is destroyed in a shredder. Once all currency has been destroyed, the destruction team and the independent observer inspect the shredder to ensure that all cancelled currency was destroyed. Following the off-line destruction, the team generates from CAS a certificate of destruction based on the earlier currency transferred to the off-line destruction team. The certificate of destruction is signed by the team and the observer and forwarded to Cash Administration for use in the end-of-day reconciliation. At the end of each shift, each processing team prepares its unit proof. 
The proof is designed to ensure that the processing team can account for the team’s currency holdings and processing activities by tracking the value of its beginning and ending inventory, its currency transfers in and out, and any adjustments arising during processing. After the team completes and accepts the proof data, it is transmitted electronically to CAS where it is compared to related currency data entered into CAS during the shift. If the proof data balance and agree with related currency data in CAS, the unit proof is accepted. If the proof data do not agree with related currency data in CAS, the processing team must request management assistance to identify and resolve differences. The Federal Reserve Banks use vaults to safeguard the currency they hold. The vault is a separate room within the cash department and a record is maintained of all persons who enter and exit the vault each day. Access to the vault may also be restricted through the use of keys or swipe cards. When stored in the vault, currency of the same denomination is stacked in locked containers. Cash department employees have a set of locks with their own personal key or combination. The employees use these locks to secure the containers for which they are accountable. In addition to the locks, each two-person team secures the containers with two prenumbered seals. In some Federal Reserve Banks, the locks are removed while the containers are stored in the vault. When this occurs, the integrity of the seals is verified when accountability for the container is transferred to another team. In some Federal Reserve Banks, accountability for the currency is transferred to vault custodians when the currency is stored in the vault. In other Federal Reserve Banks, accountability for currency stored in the vault stays with either the receiving or shipping team, and the vault custodians serve more of an administrative function. 
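The balance condition that a unit proof checks can be expressed as simple arithmetic. A minimal sketch, with hypothetical dollar figures (the real proof tracks far more detail by denomination and transaction type):

```python
# Minimal sketch of the unit-proof balance check described above;
# the figures are hypothetical.

def unit_proof_balances(beginning: int, transfers_in: int,
                        transfers_out: int, adjustments: int,
                        ending: int) -> bool:
    """The proof balances when beginning inventory plus transfers in,
    minus transfers out, plus or minus processing adjustments, equals
    the ending inventory."""
    return beginning + transfers_in - transfers_out + adjustments == ending

# A team starts the shift with $2,000,000, receives $1,500,000, transfers
# $1,200,000 to the vault, and records a $100 shortage adjustment.
print(unit_proof_balances(2_000_000, 1_500_000, 1_200_000, -100, 2_299_900))  # True
```

When the condition fails, the team cannot accept its own proof; as the report notes, management assistance is required to identify and resolve the difference.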
In both cases, the vault custodians periodically conduct a rack count of the currency in the vault (i.e., daily in Atlanta and Los Angeles, weekly in Philadelphia) and reconcile the count to CAS. The custodians also prepare a daily proof at the end of each day to ensure that all transfers of currency in and out of the vault match shipping, receiving and high-speed processing records. The Cash Administration independent proof clerk is responsible for producing the department proof, the daily reconciliation of CAS and the Integrated Accounting System (IAS), and submitting manual entries to the IAS. All manual IAS entries must balance and be reviewed and approved by management. Before the department proof can be produced, CAS is used to verify that all teams have produced their final unit proofs, and the cash department inventory and transaction totals agree. The department proof lists all of the transactions and current inventory balances for each of the department’s teams (receiving, shipping, processing, and vault). The independent proof clerk then compares the department inventory total to the calculated balance from CAS. The calculated balance is determined by taking the ending inventory from the previous day and adding/subtracting for the current day’s transactions. The two totals must be equal. Throughout the day, transactions from CAS are automatically uploaded and posted to IAS. The daily reconciliation of CAS and IAS involves the comparison of the end-of-day department inventory totals from CAS to the total reflected in IAS. The two totals must be equal. Periodically, the independent proof clerk performs a blind confirmation of the reconciliation in which the clerk is “locked out” of IAS and submits the CAS balances to the accounting department for reconciliation. The daily reconciliations of CAS and IAS are reviewed and approved by cash administration management.
Sharon S. Kittrell, Auditor
Pursuant to a congressional request, GAO reviewed the work of the Federal Reserve's external auditor, Coopers & Lybrand L.L.P., in reporting on the effectiveness of the internal control structure over financial reporting for cash at the Atlanta and Philadelphia Federal Reserve Banks, and the Los Angeles Branch, focusing on whether: (1) the work was conducted in accordance with applicable professional standards; and (2) the work supported the auditor's opinion on managements' assertions on the effectiveness of the internal controls over cash operations. GAO noted that: (1) GAO's review disclosed no instances in which Coopers & Lybrand's work to support its opinions on the effectiveness of the internal control structures over financial reporting and safeguarding for coin and currency at the Atlanta and Philadelphia Federal Reserve Banks, and the Los Angeles Branch did not comply, in all material respects, with the American Institute of Certified Public Accountants' Attestation Standards; (2) Coopers & Lybrand obtained and documented an understanding of the internal control policies and procedures, developed by the Federal Reserve Banks, to manage and account for each of the four main cash operating functions: receiving/shipping, currency processing, vault, and cash administration; (3) Coopers & Lybrand also performed tests and other procedures in support of its evaluation of the design and operating effectiveness of the internal controls in order to form an opinion about the reliability of management's assertion; and (4) for each examination, Coopers & Lybrand concluded that the Federal Reserve Bank management fairly stated its assertion that the bank maintained an effective internal control structure over financial reporting and safeguarding for cash as of the date specified by management based on criteria established in the Internal Control--Integrated Framework issued by the Committee on Sponsoring Organizations of the Treadway Commission.
Existing cap-and-trade programs that regulate greenhouse gases, such as the EU ETS, have experience in the sale of allowances. As we reported in November 2008, the ETS began the first of its trading periods, or “phases,” in 2005. In Phase I, which ran from 2005 to 2007, member states were allowed to auction up to 5 percent of their allowances, with the remainder distributed to covered entities free of charge. The auctioning limit increased to 10 percent in Phase II, which is to run from 2008 to 2012. The EU’s decentralized approach gives member states the authority to design and execute their own sales. While some member states chose to sell or auction a portion of their allowances in Phases I and II, the quantity sold has been a relatively small percentage of the overall quantity of allowances distributed (see appendix III for more information). For Phase III, which begins in 2013, the EU decided to increase the amount of auctioning significantly, and as a result approximately half of all the allowances will be auctioned. The EU is currently assessing various auction design options for Phase III and beyond—including holding centralized, EU-wide auctions—and plans to adopt an official auctioning regulation by June 2010. U.S. programs also offer experience in emissions allowance auctions. The federal government has auctioned allowances for the emission of sulfur dioxide under its Acid Rain Program since 1993, and the Commonwealth of Virginia auctioned allowances for nitrogen oxide emissions—a pollutant that contributes to the formation of smog—in 2004. More recently, in 2005, RGGI was created to regulate the carbon dioxide emissions of large fossil fuel-fired generators in participating states. 
RGGI has auctioned nearly 87 percent of emissions allowances issued under the program for 2009, and each of the six centralized auctions held since September 2008 has raised between $38 million and $117 million for programs to promote energy efficiency and renewable energy projects, among other uses. In addition to auctions for emissions allowances, the U.S. government has experience conducting other types of auctions, such as for government securities, surplus property, oil leases, timber harvests, and electromagnetic spectrum licenses. The Treasury Department’s Bureau of Public Debt, for example, conducts more than 250 public auctions per year involving over $5 trillion in marketable securities. In a cap-and-trade program for greenhouse gas emissions, covered entities and other interested parties will be able to buy allowances not only from the government, but also from participants in the secondary trading market. In the ETS, for example, allowances can be purchased through over-the-counter markets or on exchanges such as the European Climate Exchange in London or BlueNext in Paris. Secondary market trading can involve a range of intermediaries—including banks and brokers—and several types of allowance transactions can occur. “Spot” sales involve the immediate payment and delivery of allowances between two parties. Market participants may also trade “forward” or “futures” contracts, both of which allow for delivery of allowances at a later date. Futures contracts may be attractive to covered entities that wish to secure an allowance price in advance and reduce uncertainty about future compliance costs. Because the value of futures contracts fluctuates based on the current market price of allowances, parties that do not have a compliance obligation under the program may also wish to purchase them as an investment. The literature and programs that we reviewed present many options for the design of a mechanism to sell allowances under a cap-and-trade program. 
We drew two major observations from the literature and the cap-and-trade programs we reviewed. First, because elements of the mechanism’s design may affect outcomes—such as the price of allowances obtained at auctions and the cost of the program—it is important that design choices align with an emissions trading program’s goals. Second, once a goal is chosen, policymakers have numerous choices regarding the sale of allowances, including whether to sell them on an exchange or use auctions. If policymakers choose auctions—as did the majority of countries participating in the ETS and the RGGI states—they must also make important design choices in the areas of format, participation requirements, frequency and timing, price controls, and rules for reporting and monitoring. Program officials and economists suggested establishing clear goals to help guide the design and implementation of government allowance sales in a cap-and-trade program. Identifying priorities early is critical to developing an effective sales approach, as certain designs may better serve certain goals. Goals commonly cited by researchers and program officials include: Simplicity and transparency. Many economists and program officials recommended that allowance sales be simple and transparent for all participants. Sales should be guided by rules that are clear and understandable—both to participants and to the general public—to encourage participation, prevent discrimination, and ensure easy access to allowances. To that end, several economists and program officials recommended selecting an auction format that is easy to use and does not involve complicated bidding procedures. Reporting sales results in a public and timely manner can also help to create a transparent market. Maximizing participation. Ensuring sufficient levels of participation in allowance sales is critical, according to available information and program officials we interviewed. 
Participation fosters competition and limits opportunities for collusion. Economists and program officials also advised that sales should not discriminate against any one group of participants, whether by excluding them directly or indirectly, such as through high transaction costs. Participation can also be encouraged with a simple and transparent auction design. Economic efficiency. Economic literature suggests that efficiency is a key goal for allowance sales. In the case of allowance auctions, economic efficiency can be achieved if allowances are purchased by those who value them the most. A general measure of the efficiency of an auction, therefore, is its ability to generate bids that accurately reflect how much value a bidder places on the allowance. If efficiency is achieved, the resource—in this case, the right to emit greenhouse gases—is allocated to its highest-valued use. Efficiency may be affected by strategic bidding behavior or collusion if these activities artificially depress the price of allowances. Facilitating price discovery. Allowance sales may help facilitate price discovery—the process of determining a commodity’s price based on supply and demand. Sales that successfully facilitate price discovery will generate an allowance price that accurately reflects the marginal cost of reducing emissions. That is, the price would reflect decisions by covered entities either to reduce their emissions or to purchase allowances to cover them, whichever is more cost-effective. Economists expect that this process of price discovery will prompt emissions reductions by those covered entities that can undertake them most cost-effectively. Without effective price discovery, the overall efficiency of the allowance market may be diminished, increasing the program’s costs to the economy. According to several program officials involved in the ETS, the need for price discovery depends on the volume of allowances sold under the program. 
If only a small percentage is sold, and the rest freely allocated, price discovery will be accomplished in the secondary market. Avoiding market manipulation. According to economic literature, allowance sales should limit opportunities for participants to collude or engage in other forms of market manipulation. Collusion to depress allowance prices, if successful, could distort price signals, reduce revenues collected by the government from the sale of allowances, and cause participants and observers to question the fairness and transparency of the program. In addition, if allowance ownership were to become concentrated among a small group of participants in the secondary market, these participants could then withhold allowances from the market, driving up prices and impeding efficiency. However, several program officials and economists said it would be difficult to pursue such a strategy in the presence of a liquid market, since participants would have to acquire large shares of allowances both at individual auctions and in the secondary market. In addition, literature and economists we interviewed suggest that a U.S. cap-and-trade program for carbon dioxide allowances would likely attract a large number of participants. According to them, broad participation would prevent any single participant from gaining undue influence within the market, limiting opportunities for collusion and market manipulation. Revenue generation. Available literature suggests that policymakers could also aim to maximize the level of revenues collected from the sale of allowances. For example, policymakers could design a sale so that it is more likely to achieve high allowance prices. However, while maximizing revenue is a common goal in other government auctions of public assets, high allowance prices could increase the burden of a cap-and-trade program on covered entities or consumers of their products, which could erode political support for the program. 
Furthermore, some program officials noted that it would be much easier to accomplish this goal by increasing the stringency of the emissions cap. Minimizing administrative and transaction costs. Several economists and program officials recommended minimizing the administrative and transaction costs associated with allowance sales. Administrative costs are the time and resources governments spend designing and implementing allowance sales; available information suggests that most of these costs are incurred in the design phase of the program. Transaction costs, on the other hand, refer to costs incurred by participants in obtaining allowances—for example, costs associated with registering for an auction, developing a bidding strategy, and any bidding fees. According to available information, high transaction costs could discourage participation, and smaller entities in particular may face disproportionately high costs relative to the value of allowances they purchase. According to economic literature, the design of an allowance sale can have a significant impact on both administrative and transaction costs. For instance, weekly auctions may result in higher administrative and transaction costs than those held less frequently, although using existing infrastructure may help to minimize these costs. While several economists noted that any costs incurred by governments and participants would likely be minor compared to the value of the allowances, program officials said that keeping these costs low would help ensure ongoing support for the program. According to available literature and economists we interviewed, identifying the goals of the sale in advance can help policymakers evaluate the likely effects of a given sales method. This is especially important given that trade-offs may result from decisions regarding the various aspects of auction design. 
For example, a method that increases revenue collected from allowance sales may not be the most economically efficient approach. Some goals may also be interrelated—for example, a simple and transparent design may boost participation and reduce the risk of market manipulation. The literature and programs that we reviewed present many options for the design of a mechanism to sell allowances in a cap-and-trade program. Each option has implications that will help determine the extent to which a program meets its goals. At a high level, policymakers face a choice about using auctions or other types of sales to distribute allowances. Auctions, if used, would entail additional design choices. According to available literature, selling emissions allowances through an exchange enables buyers to purchase allowances from an electronic platform as they would stocks or other commodities. While sales on an exchange are less widely used for emissions allowance distribution than auctions, two ETS member states have had experience with exchange-based sales: Germany (in ETS’s Phase II) and Denmark (in ETS’s Phase I). Germany’s sales, directed by Germany’s state-owned bank, took place through two major European exchanges—the European Climate Exchange and the European Energy Exchange. Allowances were sold daily, according to rules specified by the government. The bank used the same process as other members of the exchange wishing to trade allowances, such that buyers could not distinguish the government from other participants in the secondary market. Denmark’s sales method differed from Germany’s in that it paid a fee to two private firms to sell allowances on European exchanges. Rather than instructing the two firms to sell allowances daily, as Germany did, Denmark structured the fees paid to the firms such that they received an incentive to sell when they judged price conditions to be most favorable. 
German and Danish officials we interviewed expressed general satisfaction with sales on exchanges and cited their potential strengths in meeting certain goals. For example, Germany’s goal was to match the average secondary market price as closely as possible without disrupting the market, according to officials. Information provided by Germany’s state-owned bank shows that the allowance price yielded through the exchanges did in fact mirror the market price of allowances sold during the same time period. Several officials praised the efficiency of such sales and reported general satisfaction among participants. Other potential advantages of sales over auctions include: Lower administrative costs. Germany and Denmark used existing exchanges, which made it unnecessary to design and administer auctions. Ease of use. Some member state officials we interviewed said that sales on exchanges are simpler than auctions on separate platforms because many of the large companies affected by the ETS are already registered and active participants in the exchanges. According to German officials, this is one reason Germany continued using existing exchanges when it made the transition from sales to auctioning in 2010. Despite these possible advantages, implementing such sales may prove challenging given the scale of a potential U.S. program. While Germany expects to sell or auction 40 million allowances annually in Phase II of the ETS—the highest volume among EU member states to date—legislation being considered by the U.S. Congress proposes initially auctioning over 1 billion allowances in 2012 alone. On such a large scale, auctions may be more feasible or desirable than sales, according to several economists and program officials. For example, European officials said that if large volumes of allowances are sold in this manner, it may be difficult to ensure that all participants, including smaller entities, are equally able to buy allowances at the market price. 
Some program officials also expressed reservations that if a high volume of allowances were sold through exchanges, the government would become the dominant seller in the secondary market and affect the price formation process. Moreover, one economist pointed out that no extensive studies had been undertaken on the performance of sales on exchanges, whereas auctions are well understood as a mechanism for distributing government assets. Program officials reported that auctions, the more commonly used sales mechanism in the EU and RGGI, effectively distributed allowances to program participants, although several noted that allowance auctions have not yet been implemented on a large scale. According to economic literature and the economists and officials we interviewed, potential strengths of auctions include: Price discovery. As economists and officials have noted, the process of auctioning helps to establish the cost of emission reductions and maintain an allowance price that reflects that cost. Auctions enable a government to put allowances together in “batches” and sell them at predetermined times, and may help regulated entities make business decisions that incorporate the cost of compliance with emissions regulations. As one economist explained, auctions encourage covered entities to assess their marginal costs of emissions abatement, consider their allowance needs carefully, and bid accordingly. However, some economists and officials said that regardless of whether auctions or sales are used, price discovery will occur in the secondary market, once that market becomes established. Simplicity. Many of our interviewees cited auctions’ simplicity as an advantage over sales because auctions are well-understood. RGGI officials reported that covered entities with a range of auction experience received training in the RGGI auction process and had no trouble familiarizing themselves with it. Lower transaction costs. 
Holding periodic auctions—weekly, monthly, or quarterly, for example—may decrease the per-transaction cost of buying and selling allowances, since buyers and sellers would not need to devote the time and resources necessary to participate in daily trading. Transparency. According to economists and officials, auctions take place under established rules and time frames, and thus convey clear information about when and how the government will sell allowances. According to European Commission officials, establishing a clear and predictable auction calendar can help inform market participants of the precise timing of volumes coming into the market, thereby avoiding unnecessary uncertainty and price volatility. Sales on exchanges may be less transparent, since it is difficult to monitor who is selling allowances and at what time. Program officials noted that auctions can also be administered through exchanges. For example, Germany decided to conduct auctions using an existing European trading exchange beginning in 2010. According to a program official, this approach allows them to draw on preexisting auction infrastructure and administrative processes, as allowance bids are subject to the same rules as other exchange transactions. The European Commission’s draft auctioning regulation governing ETS auctions in Phase III and beyond also proposes using an exchange or other trading platform to vet participants and administer auctions. Allowance auctions could also be administered in other ways. For example, policymakers could use existing government auction mechanisms—such as those used by the U.S. Treasury to auction securities—to auction allowances. Alternatively, policymakers could choose to create a new auctioning platform or hire contractors to administer auction processes. For example, RGGI opted to use a proprietary auctioning platform, run by a contractor with experience in administering auctions for energy commodities. 
If auctions are used, several other design determinations must be made, including: format, timing and size, participation requirements, price controls, and monitoring and reporting requirements. The choice of auction format can affect how well the auction aligns with predetermined goals, such as maximizing simplicity, avoiding market manipulation, and aiding in price discovery. We focused our analysis on two classes of auctions most commonly discussed in the context of emissions allowance auctions—“uniform-price” and “discriminatory-price” auctions. Choosing an appropriate auction format involves answering two key questions: What price should winning bidders pay for allowances? In uniform-price auctions, all winning bidders pay the same price for the items purchased. In the hypothetical example illustrated in table 1, 5 companies bid for 10 allowances in a uniform-price auction. Even though they made different bids, each company that bids above a certain price receives allowances at the same price. This “clearing price” is the highest price point at which all of the 10 allowances available would be sold. In this example, companies A, B, and C each receive allowances at the clearing price of $3.50, although Company C receives only 2 of the 3 requested allowances because the number of bids at or above the clearing price exceeds the number of allowances available at this price. Companies D and E, which bid below the clearing price, receive no allowances. In “discriminatory-price” auctions, winning bidders pay different prices for allowances purchased at auction. In some discriminatory-price auctions, winning bidders pay the amount of their bid. For example, in the hypothetical auction presented in table 1, Company A would get its full share of requested allowances at $5 each, Company B would pay $4 each, and Company C would receive part of its request at $3.50 per allowance. How many rounds of bidding should take place? 
In a typical single-round auction, participants place bids once during a predetermined time period. Because participants do not see other bids before the outcome is announced, no opportunity is provided to change a bid based on information about others’ bids. The single-round auction is thus sometimes referred to as a “sealed-bid” auction. Figure 1 presents a simplified version of the RGGI software interface that participants use for RGGI’s single-round auctions. As the figure shows, participants assemble a bid sheet, specifying the quantity of allowances requested at a stated bid price. Each participant may submit several bids at several different prices, if desired. According to a RGGI program official, when RGGI’s auction results are tabulated by the auction administrator’s automated system, the results appear in a format similar to that shown in table 1. Participants whose bids appear above the line receive allowances, those bidding at the clearing price receive a partial allocation, and those below the line do not receive allowances. Auctions with multiple rounds of bidding occur in several formats, among them the “English” auction, in which the auction administrator raises the price of allowances round by round, and the “Dutch” auction, in which the auction administrator decreases the price round by round. Both the English and Dutch auctions are commonly referred to as “clock auctions,” since the price is raised or lowered incrementally, like a clock’s hands. Participants in multiple-round auctions have the opportunity to change the quantity of allowances for which they bid— or drop out of the bidding—as information is revealed round by round. Importantly, economists note that to discourage participants from potentially distorting allowance prices by increasing the quantity for which they bid in later rounds—after competitors have revealed their strategies—a clock auction can include a rule against increasing the bid quantity after the first bidding round. 
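The round-by-round mechanics of an ascending ("English") clock auction, including the rule against increasing bid quantities after the first round, can be sketched as follows. Representing each bidder as a demand curve, and all parameter names, are illustrative assumptions rather than any program's actual design:

```python
def ascending_clock_auction(demand_curves, supply, start_price, increment):
    """Sketch of an ascending ("English") clock auction.

    demand_curves: {bidder: function mapping price -> quantity demanded}.
    The administrator raises the price each round; bidders restate the
    quantity they want, capped at their previous round's quantity (the
    anti-manipulation rule noted in the text). The auction ends when
    total demand no longer exceeds supply.
    """
    price = start_price
    last_quantity = {b: float("inf") for b in demand_curves}  # no cap in round 1
    while True:
        demands = {}
        for bidder, curve in demand_curves.items():
            # Enforce the no-increase rule round over round.
            demands[bidder] = min(curve(price), last_quantity[bidder])
        if sum(demands.values()) <= supply:
            return price, demands
        last_quantity = demands
        price += increment
```

A descending ("Dutch") clock would run a similar loop with the price falling instead of rising. Real clock auctions also need a rule for allocating any shortfall in the final round, which this sketch omits.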
In designing an auction, policymakers may consider selecting a format that is sensitive to the context of the allowance market. Previous U.S. federal government experience with auctions has involved different formats in different markets. For example, in 1994 the Federal Communications Commission chose to auction spectrum licenses in a simultaneous multiple-round format. By contrast, Environmental Protection Agency auctions of allowances to emit sulfur dioxide involve a single round where successful bidders pay as they bid, and auctions of government securities conducted by the U.S. Department of the Treasury involve uniform pricing. In interviews and in economic literature, officials and economists have emphasized the importance of tailoring an auction for carbon dioxide allowances to the characteristics of the market, which may be different from other markets where auctions have been used. An auction of carbon dioxide allowances would sell many identical items—permits to emit a specified quantity of carbon dioxide in a particular time period. Additionally, bidders in this market could include a large number of covered entities that emit carbon dioxide. These characteristics reveal both similarities to and differences from some of the other markets listed above. For example, not all broadband spectrum licenses are alike, and their value to a buyer may further depend on the portfolio of licenses held. Furthermore, the number of potential buyers may be greater in the market for carbon dioxide emissions than sulfur dioxide emissions, in part because carbon dioxide is emitted in greater volume. Existing cap-and-trade programs for carbon dioxide allowances—the EU ETS and RGGI—have employed the uniform-price, single-round format, in which winning bidders submit secret bids and pay the same price for allowances. 
Several program officials we spoke with expressed general satisfaction with this format, and the draft auctioning regulation governing ETS auctions in Phase III and beyond also proposes this approach. According to literature and economists we interviewed, advantages of this format include: Simplicity. For regulated entities that have participated in auctions, the simplicity and familiarity of the uniform-price, single-round format may prove valuable, according to several economists. This format has also proved easy to learn for those unfamiliar with auction processes, according to officials, as it involves relatively simple bidding procedures. One economist also reported that the uniform-price, single-round format is well-suited to automation compared to other auction formats, with much of the work handled by sophisticated but inexpensive computer programs. This economist pointed to RGGI, a small organization handling large pools of assets, as a case study in how simple the uniform-price, single-round auction can be to administer. Avoidance of market manipulation. Some economists said that other auction formats, such as clock auctions, may be more conducive to collusion than single-round auctions, because multiple bidding rounds give other bidders information and create opportunities for collusion. Reduced risks for bidders. Program officials and economists also said that the uniform-price, single-round format may alleviate concerns that could arise in discriminatory-price auctions. If a discriminatory-price auction requires participants to pay the value of their bids, for example, they run the risk of overbidding and paying more than other winning bidders for allowances. This may be of particular concern for small and inexperienced bidders, who may lack the information and resources to formulate a sophisticated bidding strategy. 
Uniform-price auctions reduce the possibility of making a costly bidding mistake, since all winning participants pay the same allowance price. For this reason, some economists believe that uniform-price auctions will generate greater participation than discriminatory-price auctions. Despite the strengths of the uniform-price, single-round format, some economists suggested that policymakers undertake further study before selecting an auction format. One study suggests that laboratory experiments with auction format options may provide insights that theoretical studies cannot, given the context-specific nature of the performance of various auction formats. One economist also said that legislation need not specify a single auction format and could instead instruct government agencies responsible for the program to choose among various format options. Several RGGI states followed this path, by issuing regulations authorizing the auction administrator to use the uniform-price, single-round auction format or the ascending price, multiple-round format. Policymakers could also leave room to revisit the auction format stipulated in cap-and-trade legislation, although introducing significant changes at later stages would require participants to relearn auctioning procedures. Among auction format options other than the uniform-price, single-round format, the clock format may have comparative strengths in achieving certain goals, according to economists we interviewed. For example, economic literature suggests that clock auctions may lead to more reliable price discovery, since each participant may raise its bids in an attempt to win allowances, so that allowances go to those who are willing to pay the most. However, economists who did experimental work on auction formats said that a clock auction fared no better in terms of price discovery than a uniform-price, single-round auction. Another argument for the clock format arises if multiple products are sold at an auction. 
For example, in addition to auctioning allowances for the current year, the government could decide to auction allowances of other future-year vintages—that is, allowances sold in advance of the compliance year(s) in which they may be remitted. A clock format would allow bidders to express preferences for different vintages, which may allow more readily for substitution of one vintage for another and prevent price irregularities. The clock auction format may also present some disadvantages. An official involved with Ireland’s auctions said they chose a uniform-price, single-round format after determining that a clock auction would be comparatively expensive and difficult to implement. The format may also complicate participation: one economist involved with Virginia’s clock auctions of nitrogen oxide allowances received complaints about participants having to work at a computer terminal all day to compete in the auction. A single-round auction format, by contrast, would only require participants to submit a single bid sheet, similar to that shown in figure 1 above. The economist also pointed out that having thousands of participants monitoring a day of multiple-round auctioning in a large federal program would increase the cost of both participating in the program and administering it. Apart from format, policymakers would face a number of choices related to auction design, each of which has implications for program outcomes. Among other things, choices must be made regarding participation rules, the frequency and timing of auctions, the use of reserve prices or other price controls, and the monitoring and reporting of auction activities. We briefly describe each of these considerations below and provide additional detail in appendix II. Participation. Maintaining high levels of auction participation can lead to greater competition which, in turn, can reduce the risk of collusion or other market manipulation. 
To maximize participation, economists and program officials recommended opening auctions to as wide a group of bidders as possible, including financial institutions and other entities that do not have compliance obligations under the program. According to them, limiting participation can increase the risk of market manipulation and make it difficult to ensure that all covered entities have access to allowances. In addition, program officials said that a well-designed vetting and registration system can reduce the risk that a participant will default on a bid. Frequency and timing. According to economists, the frequency of auctions should be driven by the volume of allowances sold: higher volumes of allowances may require more frequent auctions. Available information suggests that holding frequent auctions, such as monthly or weekly, can help maintain market liquidity and provide flexibility to covered entities. On the other hand, some officials said frequent auctions may also complicate planning and increase administrative costs, depending on how the auctions are conducted. In terms of timing, many officials recommended auctioning future-year vintage allowances, which allow covered entities to secure allowances in advance and reduce the risks associated with fluctuating prices. Price controls. Price controls could be implemented in a number of ways. A reserve price would set a price below which no allowances can be sold at an auction. Several program officials and economists suggested that setting a reserve price can be an effective way to guard against low auction clearing prices that may result from collusion or low participation. In addition, in some cases a reserve price may serve as a “price floor” throughout the secondary market. 
According to some economists and researchers, a price floor may help provide incentives for investment in low carbon technologies; however, some program officials cautioned that price floors could unduly interfere with the functioning of the allowance market. At the other end of the price spectrum, policymakers could also set upper limits on the price of allowances through price ceilings. While price ceilings could provide insurance against sustained high allowance prices, some program officials advised against the use of these measures, which they said could compromise emissions goals and impede international linkage of programs. Monitoring and reporting. An effective system to report auction results and monitor activity can increase transparency and help oversight entities identify and correct instances of market abuse. Information on auction results can also provide information with which to evaluate and improve the program. As a result, several officials and economists recommended establishing a market monitor to track activity at auctions and in the secondary market. However, in reporting auction results, economists and officials cautioned against reporting certain information, such as bidders’ identities, which they said could inadvertently facilitate collusion. We conducted our work from December 2008 to February 2010 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix IV. Our review assesses the implications of different options for selling allowances in cap-and-trade programs. To address this objective, we first identified cap-and-trade programs that regulate carbon dioxide emissions and have sold allowances through auctions or other means. Programs that met these criteria were the European Union’s (EU) Emissions Trading Scheme (ETS) and the Regional Greenhouse Gas Initiative (RGGI). We then selected a nonprobability sample of 5 member states involved in the ETS—Austria, Denmark, Germany, Ireland, and the United Kingdom—in order to assess different methods that EU governments have used to sell allowances. This sample enabled us to assess allowance sales that exhibited variation in several key areas: the size of the allowance market, the share of allowances auctioned, the design of the sale, and the amount of revenue generated. While the sample allowed us to learn about many important aspects of, and variations in, the design of allowance sales, it was not intended to provide findings that would be generalizable to all allowance sales. To identify various options for selling allowances, we identified and reviewed over 40 works of academic and professional research produced by economists, industry associations, research organizations, academic institutions, and environmental groups, including international research. 
We identified these works through an internet and database search using relevant key words such as “allowance sales” and “auction design.” We also analyzed literature from government agencies, including the Congressional Budget Office (CBO) and the Congressional Research Service (CRS). Reviewing this research helped us to assess the different methods for designing an allowance sales mechanism and the potential implications of these methods. We did not independently assess the validity of data, assumptions, or methodologies underlying the economic studies we reviewed. We met with U.S. and international stakeholders including officials from RGGI and the European Commission as well as program officials in Austria, Denmark, Germany, Ireland, and the United Kingdom. We also conducted semistructured interviews with leading economists and researchers selected on the basis of their expertise in climate policy or auction design. There are several key elements of auction design, in addition to format, that can affect whether an auction meets its established goals. This appendix provides more detail on the observations and recommendations of program officials, economists, and the economic literature regarding the following auction design elements: participation, frequency and timing, price controls, and reporting and monitoring. Policymakers may shape participation in a cap-and-trade auction through various aspects of auction design. For example, auctions could be designed to restrict participation or to provide special assistance to smaller entities. Available information suggests that maintaining high rates of participation in allowance auctions can help promote liquidity and reduce the risk of collusion. We discuss three auction characteristics that help determine participation: participation limits, bid limits, and procedures for vetting and registration. Participation limits. 
Available literature suggests that limiting or eliminating the ability of certain entities to participate in allowance auctions could reduce the amount of revenue generated through the auctions and hinder the efficient allocation of allowances. For example, excluding those entities that do not have a compliance obligation under the program would decrease overall auction participation, which, in turn, may increase the likelihood of collusive activities to depress allowance prices. In addition, if the auction clearing price falls below the price in secondary markets—whether due to collusion or other factors—auction participants may be able to buy allowances at auction and sell them on the secondary market for profit, thus capturing revenues that would have otherwise gone to the government. According to one study, the end result would be an implicit subsidy to the entities allowed to participate in the auction and a corresponding reduction in government revenue. Moreover, some program officials maintained that restricting certain groups from participating may present practical challenges given the interrelated nature of the marketplace—for example, while a buyer may not be a covered entity under the cap-and-trade program, it could be a parent company, supplier, or partner to a covered entity. According to researchers involved in the design of RGGI’s auctions, attempting to assess and monitor these relationships could prove costly for an auction administrator. Given these concerns, many economists and program officials favored maximizing the number of potential auction participants. Auctions held within the ETS and RGGI, for example, have allowed noncovered entities—including banks, brokers, and other private firms—to purchase allowances at auctions. In fact, noncovered entities have purchased between 16 and 30 percent of allowances sold in RGGI auctions thus far, according to auction results data published on RGGI’s Web site. 
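The implicit subsidy described in the study cited above is straightforward to quantify: it is the gap between the secondary-market price and the auction clearing price, multiplied by the quantity resold. A minimal sketch, with illustrative figures:

```python
def forgone_revenue(clearing_price, secondary_price, quantity):
    """Revenue the government forgoes when an auction clears below the
    secondary-market price: participants can buy at auction and resell
    at the higher secondary price, capturing the difference. Returns 0
    when the auction clears at or above the secondary price.
    """
    return max(0.0, secondary_price - clearing_price) * quantity
```

For example, if 1 million allowances clear at $3.00 while the secondary market trades at $3.50, participants capture $500,000 that would otherwise have accrued to the government.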
Economists and officials involved with these programs said that financial entities can play an important role in the market. For example, banks and brokers can foster liquidity and help provide regular price signals to covered entities. Smaller entities, in particular, may prefer purchasing allowances from financial institutions, as this relieves them of the need to learn the particulars of the auction process and develop an appropriate bidding strategy. Bid limits. To prevent entities from hoarding allowances—which could allow these entities to gain a competitive advantage or raise the price of allowances, among other things—policymakers could set limits on the amount of allowances entities can purchase at auction. For example, in RGGI, associated entities can purchase no more than 25 percent of the allowances available in a given auction. However, such limits may be difficult to enforce, according to one program official, since one entity may be able to bid on behalf of another. Moreover, several economists and program officials we spoke with suggested that hoarding behavior would be highly unlikely in a future U.S. program, since an entity aiming to corner the market may have to buy the majority of allowances across several consecutive auctions, an unlikely possibility given the anticipated price and volume of allowances and the number of entities seeking them. Vetting and registration. For an auction to be successful, participants must meet the financial commitments associated with their bids. Auctions may therefore include an application and screening process in which potential bidders demonstrate their eligibility to participate by providing information such as their credit and bankruptcy history. The process could also include a declaration of “beneficial ownership,” which would require bidders to declare whether they would bid on their own account or on behalf of another entity. 
According to available information, identifying the beneficiaries of allowance transactions may help the entity responsible for monitoring the market spot evidence of potential market manipulation. Economists and program officials also recommended requiring participants to post some type of financial assurance, such as a bond, deposit, or letter of credit demonstrating their ability to pay. Financial assurance requirements can serve as collateral in the event that participants are unwilling or unable to pay, and thus should be set at a level that ensures payment without discouraging participation. In Ireland’s first auction, for instance, the nonrefundable deposit was set at about 3,000 Euro, or about $3,778. According to program officials, this amount was later determined to be insufficient to compel payment—for example, if the prices in the secondary market fell below the clearing price after the auction, it may have been less expensive for the bidder to simply forfeit its deposit and nullify the sale. As a result, Irish officials raised the deposit amount to about 15,000 Euro, or about $18,890, in the second auction. While economists emphasize the importance of financial assurance requirements, it is also important not to impose undue costs or paperwork requirements on participants. As a result, some economists recommended relying on established measures—such as credit scores—and simplifying the process as much as possible. An official involved with administering RGGI, for example, reported that RGGI made several improvements to its participant qualification procedures—including allowing electronic submissions and eliminating notarization requirements—after receiving feedback from participants. Program officials involved with the ETS also said that outsourcing vetting to an entity with experience in these activities, such as an exchange, may help reduce costs. 
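The deposit-sizing problem Irish officials encountered reduces to a simple break-even comparison. The function and the figures below are illustrative assumptions, not data from Ireland’s auctions:

```python
def cheaper_to_forfeit(quantity, clearing_price, secondary_price, deposit):
    """Break-even logic behind deposit sizing. After prices fall, a
    winning bidder weighs the loss from honoring its bid (paying the
    clearing price for allowances now worth only the secondary price)
    against simply forfeiting its deposit. Returns True when walking
    away is the cheaper option.
    """
    loss_if_honored = max(0.0, clearing_price - secondary_price) * quantity
    return loss_if_honored > deposit
```

With a hypothetical bid of 10,000 allowances, a clearing price of $3.50, and a post-auction secondary price of $3.00, honoring the bid costs $5,000 more than the allowances are worth; against a $3,778 deposit, forfeiting is cheaper, while against an $18,890 deposit it is not.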
As existing programs have demonstrated, auction activities—including attracting and vetting potential auction participants and facilitating the bidding process—can be undertaken by either the government or a designated private entity. In RGGI, for example, a private consulting firm administers the auctions and conducts these activities. In contrast, Germany uses an existing emissions trading exchange to administer the auctions and conduct the required due diligence on potential auction participants. Another option to cut down on the government’s administrative burden is to implement a “primary participant” model, an approach used in the United Kingdom to auction both government bonds and emissions allowances. In United Kingdom allowance auctions, all bidders must go through one of 11 registered primary participants—all large financial firms—to place their bids. The primary participants can also bid on behalf of themselves. According to one official involved in United Kingdom auctions, this has improved auction participation and generated higher clearing prices. However, the official also acknowledged that some large bidders—such as electricity generators—oppose the primary participant model because it forces them to disclose their bidding strategy to another entity; they would rather participate in the auctions directly. As a result, the outline for the draft auctioning regulation for Phase III and beyond proposes allowing bidders to access auctions directly. Under United Kingdom Treasury rules, primary participants must prevent the disclosure of confidential information they receive from indirect bidders to their employees responsible for preparing or submitting bids on the primary participant’s behalf. Some programs have also set aside allowances for sale in non-competitive auctions, in which smaller entities can obtain needed allowances without running the risk of overbidding. 
However, thus far participation in non-competitive auctions has been low—of the 100,000 allowances set aside for Austria’s first non-competitive auction, only about 5,000 were actually sold. Further, in the European Commission’s 2009 consultation on the auction regulation, respondents showed little interest in incorporating non-competitive auctions or other special provisions for smaller entities. While a Commission official we spoke with acknowledged that few small entities contributed to the consultation, this official and others reported that smaller entities have generally preferred to purchase their allowances from banks and brokers rather than at auction. If auctions are used to sell allowances, policymakers must consider issues related to timing, including how frequently to hold auctions and whether to auction future years’ allowances in advance. According to available literature and economists we interviewed, the timing of auctions can have implications for market dynamics, prices, administrative costs, and participation. A variety of timing approaches are used in existing programs—for example, RGGI holds quarterly auctions, whereas Germany began holding auctions twice weekly in 2010. Program officials and economists we interviewed said that determining the appropriate auction frequency depends largely on the number of allowances auctioned during each year. Several noted that higher volumes of allowances may require more frequent auctions, so as to ensure a manageable and constant flow of allowances into the market. Because of uncertainty about the size of a future U.S. program, one program official was hesitant to recommend a specific frequency. Nevertheless, available literature and our interviews point to several arguments in favor of holding auctions relatively frequently, such as weekly. First, economic literature indicates that frequent auctions can help maintain market liquidity and price stability. 
Because allowances would be sold in smaller batches, frequent auctions help encourage a constant flow of allowances into the market, reducing the impact of individual auctions on market prices. For this reason, the outline for the draft regulation governing Phase III of the ETS and beyond—when the level of auctioning is expected to increase significantly—proposes holding auctions at least weekly. Second, frequent auctions may help covered entities to meet their compliance obligations in a timely and flexible manner, rather than running the risk of submitting a losing bid and having to wait several months until the next auction. Third, frequent auctions may alleviate the need for a covered entity to set aside large amounts of capital to compete for bigger, less frequently available blocks of allowances, which may be especially difficult for smaller entities. Frequent auctions also may facilitate efficient and flexible transactions through financial or other intermediaries, which can benefit both small and large covered entities. Finally, smaller, more frequent auctions may help mitigate the risk that participants could purchase a substantial fraction of allowances in an attempt to manipulate allowance prices. However, holding frequent auctions may present trade-offs, according to available literature. Administrative and transaction costs could rise if auctions are held more frequently, and higher costs could reduce participation. In addition, officials involved in administering RGGI’s quarterly auctions said it would be difficult to conduct necessary pre- and post-auction activities—including finalizing sales, returning auction collateral to bidders, and compiling reports—if auctions took place more frequently. Another potential disadvantage of frequently held auctions is the risk that participation will be low at some auctions. Policymakers may choose to sell future-year vintage allowances in advance of the compliance year(s) in which they may be remitted. 
For example, in addition to offering allowances for the current 3-year compliance period, RGGI auctions also offer participants the ability to purchase allowances for the second compliance period, which is to start in 2012. In the ETS, allowances for Phase III of the program, which begins in 2013, may be auctioned as early as 2011 or 2012, according to the draft outline for the auctioning regulation. Available literature and economists and officials we interviewed identified several potential benefits associated with auctioning allowances prior to the compliance period in which they may be remitted. Most importantly, it enables covered entities to hedge against the uncertainty of future allowance prices by purchasing them in advance. Auctions of future-year allowances may be particularly beneficial to electricity generators, which often establish contracts for fuel and electricity one to three years ahead of delivery. Entities could also potentially hedge price risk by establishing futures contracts with financial intermediaries; however, these intermediaries may charge risk premiums that may be passed on to customers. Available information suggests that holding auctions even before the cap- and-trade program’s first compliance period may help to jump-start the process of price discovery and improve liquidity. For example, RGGI held its first auction in September 2008, approximately 3 months before the first compliance period began. A RGGI official said that this early auction provided price information that proved beneficial to financial institutions and covered entities alike. However, holding advance auctions may also present risks. According to one program official, if actual emissions under the program are lower than expected, auctioning a greater number of allowances early on may depress prices in the short term. 
In designing a cap-and-trade program, policymakers may decide to set limits around the price of emissions allowances sold at auctions or in the secondary market. For example, setting a reserve price would establish a minimum price below which no allowances could be sold at auction. The use of reserve prices is relatively common in greenhouse gas auctions—for example, RGGI and the EU member states we reviewed all used reserve prices. Policymakers could also decide to limit the extent to which allowance prices could rise and fall in the secondary market through price “floors” or “ceilings”. Reserve prices and price floors. According to economic literature and program officials, a reserve price may have several benefits. For example, a reserve price could reduce incentives for collusion by limiting the profitability of collusive activities. A reserve price could also be used to safeguard against unusually low clearing prices at an auction due to low participation or other unforeseen events. According to one economist, an auction that produces allowance prices that are substantially lower than those in the secondary market may raise concerns about efficiency and equity. Protecting against such a scenario is the primary reason that the United Kingdom chose to institute reserve prices for its auctions, according to a program official. However, this official did not expect the reserve price ever to be triggered, since the likelihood of insufficient participation or collusion was extremely low. Despite the fact that allowances are traded in secondary markets—where the government does not control prices—a reserve price may have effects that extend beyond the auction itself. In some cases, for example, the reserve price may effectively set a “price floor” for allowances throughout the secondary market. 
According to economic literature, the extent to which a reserve price serves as a marketwide price floor depends on several factors, including the share of allowances to be auctioned and the ability to purchase offset permits or other imported allowances. For example, a CBO report said that a reserve price could create a price floor if the government chose to sell a significant fraction of emission allowances, as opposed to distributing them for free. A key benefit of a price floor, according to some economists and researchers, is that it provides more consistent financial incentives for investment in low-carbon and energy-efficient technologies that could potentially reduce compliance costs in the long run. By establishing a minimum price on emissions, a price floor could also result immediately in more intensive use of low-carbon energy sources or encourage consumers to choose goods and services that are less carbon-intensive. While few economists and program officials disagreed with the use of auction reserve prices as a general protective measure, some expressed concern about using a reserve price to implement a marketwide price floor. For example, program officials involved in administering the ETS cautioned that price floors are unnecessary and can unduly interfere with the functioning of the allowance market. Accordingly, a European Commission official said that while an auction reserve price may be incorporated into the auction design, the primary law underpinning the ETS bars the use of price floors. Finally, some program officials warned that price floors could limit participation from certain entities—such as large banks—by reducing their opportunities for profit. If a reserve price is used, policymakers would also need to consider what to do if it is triggered. Because no bids below the price would be accepted, some allowances at the auction would go unsold. 
One option for addressing unsold allowances is to retire them by removing them from the program entirely, an approach some researchers and program officials support as a way to help preserve the program’s environmental integrity. Specifically, triggering the reserve price may indicate that the emissions cap is too generous; retiring allowances that remain unsold at the reserve price would effectively make the cap more stringent. Another option would be to “roll forward” any unsold allowances to the next auction, an approach used by the United Kingdom in allowance auctions. Economists describe this approach as administratively simple; however, when any unsold allowances are rolled over, as opposed to retired, future allowance prices could be lower, reducing incentives for emission reductions. A third option would entail placing the unsold allowances into a “contingency bank” and releasing them for sale at the next auction that closed at a price above a pre-identified trigger price. This approach removes unnecessary allowances from the program while demand is low but keeps allowance prices (and compliance costs) from rising as sharply as they otherwise would in subsequent periods when demand is high. However, one economist cautioned that managing the bank could introduce political risks. Importantly, each of these three options applies to a scenario in which a reserve price is applied at auction. If policymakers decided to implement a firm price floor that applied throughout the secondary market, the government would have to guarantee a minimum price to sellers in that market. In this case, triggering the floor price would indicate that the quantity of allowances offered for sale at the floor price exceeded the quantity demanded by market participants. To guarantee the minimum price, the government could buy back the excess quantity. However, this could create budgetary and other complications, as the government would not be able to anticipate market outcomes. 
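The three options for unsold allowances described above can be expressed as a small state-update sketch. The function name, arguments, and decision rules here are hypothetical simplifications for illustration, not any program's actual procedure:

```python
def handle_unsold(unsold, option, next_auction_supply, bank=0,
                  last_clearing_price=None, trigger_price=None):
    """Illustrative handling of allowances left unsold at the reserve price.

    Returns (next_auction_supply, bank, cap_reduction). The option names
    mirror the three approaches described in the text.
    """
    if option == "retire":
        # Remove unsold allowances from the program entirely:
        # the cap effectively tightens by that amount.
        return next_auction_supply, bank, unsold
    if option == "roll_forward":
        # Offer the unsold allowances again at the next auction.
        return next_auction_supply + unsold, bank, 0
    if option == "contingency_bank":
        # Hold unsold allowances in a bank; release the bank only if the
        # previous auction cleared above a pre-identified trigger price.
        bank += unsold
        if last_clearing_price is not None and last_clearing_price > trigger_price:
            return next_auction_supply + bank, 0, 0
        return next_auction_supply, bank, 0
    raise ValueError(option)
```

Under the contingency-bank rule, for example, 20 unsold allowances stay banked while clearing prices remain below the trigger and are added back to supply once demand (and price) recovers.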
Price ceilings and strategic reserves. To protect against unexpectedly high compliance costs, some cap-and-trade proposals also involve setting an upper limit on the price of allowances, either through a price ceiling—often known as a “safety valve”—or by establishing a “strategic reserve” of allowances. A safety valve would give covered entities the opportunity to purchase an unlimited amount of additional allowances from the government at a predetermined price. In the event the allowance price rose higher than the safety valve price, covered entities could buy allowances from the government at the lower price rather than purchasing them on the market. If the safety valve was triggered and additional allowances released, however, emissions could rise beyond the level set by the initial cap and compromise the program’s emissions goals. As an alternative option, policymakers could set a strategic reserve of allowances to be released only if the price threshold is reached. The key distinction from a safety valve approach is that the allowances in the reserve would eventually be paid back in some way, thus maintaining the integrity of the cap over time. If the allowances in the reserve were used, for example, the corresponding increase in emissions could be offset, such as by tightening emissions caps in future years. Some economists and program officials have cited several possible advantages associated with price ceilings. Setting a maximum price for allowances would provide insurance against unexpected price spikes in allowances—or a sustained period of high prices—either of which could cause the price of consumer goods and services to rise. If allowance prices are higher than expected, for example, a price ceiling could limit the costs to businesses and consumers while new technologies are developed that may achieve reductions at less cost. 
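The basic effect of a safety valve on a covered entity's compliance cost can be illustrated directly. This is a minimal sketch under an assumed behavior (the entity simply buys at the cheaper of the market price and the valve price); it does not model a strategic reserve's payback mechanism:

```python
def allowance_cost(market_price, quantity, safety_valve_price=None):
    """Cost of covering `quantity` tons of emissions.

    With a safety valve, a covered entity pays at most the valve price,
    buying from the government instead of the market whenever the market
    price exceeds it. Purely illustrative.
    """
    if safety_valve_price is not None:
        return min(market_price, safety_valve_price) * quantity
    return market_price * quantity

# Without a ceiling, a price spike flows straight through to costs;
# with a hypothetical $25 valve, the cost per allowance is capped at $25.
```

The same function also shows the environmental tradeoff implicitly: whenever the valve binds, the government is selling additional allowances beyond the cap.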
Because it provides some parameters around the cost of the cap-and-trade program, a price ceiling could also reduce the risk that firms that are both energy-intensive and trade-intensive will face competitive pressures from industries in countries without comparable limits on greenhouse gas emissions. Finally, establishing price ceilings in advance may increase the likelihood that the program will endure through severe economic fluctuations, thus providing certainty to investors. Several of the potential disadvantages cited for price floors also apply to price ceilings—namely, that they may interfere with market functioning and discourage participation by financial entities. In addition to these disadvantages, a safety valve could have negative environmental implications if emissions rise considerably higher than the established caps. Available information suggests that establishing a safety valve in a U.S. program could impede linkages with other cap-and-trade programs, which would allow participants under a U.S. program to trade allowances with other programs, such as the ETS. In theory, linking can enhance the cost-effectiveness of the participating programs by enabling covered entities to take advantage of differences in the cost of abatement options. However, establishing a safety valve in one program would have implications for other linked programs—for example, linked countries may not be able to ensure that their emissions would be below a required level in a given year. Additionally, a price ceiling could discourage investment in research and development to create new energy-efficient technologies by limiting future profits from their sale. Moreover, some officials reported that price floors and ceilings are unnecessary if the initial cap is set correctly using accurate, current emissions data; in their view, setting an accurate cap is a better strategy for regulating prices than using price controls to artificially manipulate the market. 
According to available literature and program officials, establishing a system to report and monitor auction activities is an important aspect of auction design. Effective reporting can increase the program’s transparency and help participants make informed bidding decisions. In addition, monitoring auction results can help government agencies or designated private entities identify instances of market abuse and evaluate whether auctions have met established goals. Reporting of auction results. If auctions are used, policymakers would need to consider how and when to report data on auction results. Several program officials recommended making auction results available immediately after the auction. According to one official, providing timely and accurate data on auction results can provide covered entities with information on costs to use as part of their strategic planning efforts. In determining which data to report, program officials recommended that the government disclose aggregated data such as the quantity of allowances sold, the number of participants (and the fraction that won allowances), and the clearing price. RGGI makes auction results data available through an online program called CO2 Allowance Tracking System, which allows the public to download emissions data and relevant auction results. While providing timely aggregated data can serve a useful purpose, economic literature suggests that revealing too much information about auction results could inadvertently facilitate collusion or limit auction participation. For example, publicizing the names of auction winners and their respective purchases may enable entities to determine whether collusive agreements established prior to the auction were honored. In addition, program officials said that reporting the identity of bidders allows market participants to discern the patterns and strategies of others. 
According to a European Commission official, entities may choose not to participate at auctions if they fear that commercially sensitive information will be revealed through their bidding. Some program officials noted that while restricting access to such data may run counter to traditional ideas about transparency, it may serve the public’s best interest to establish clear limits around the amount and type of auction data released. Monitoring and oversight. An auction monitoring system can help identify cases of market abuse, ensure that auctions comply with established rules, and provide information useful in evaluating and improving auction design. Several program officials recommended designating a market monitor to observe and assess activities at auctions and in the secondary market. The auction monitor could be either a private or a public authority—for example, the RGGI program uses a private consulting firm to perform a number of market monitoring activities. According to a RGGI official, these activities include ensuring auction rules are consistently applied in each auction, analyzing participant behavior and identifying any irregularities, and modeling the impact of potential design modifications to the program. The RGGI market monitor also analyzes secondary market activity, although the Commodity Futures Trading Commission (CFTC) is responsible for protecting market participants against fraud, manipulation, and abusive trading practices. In Phase I and Phase II of the ETS, individual member states have used different methods to monitor auction results. For example, the United Kingdom government manages auctions but has appointed an independent observer to ensure auctions are conducted in accordance with the law. In Austria, an energy exchange handles vetting procedures and auction administration while the government handles oversight. 
For Phase III of the ETS and beyond, the EU Directive requires member states to report to the European Commission on various aspects of auction outcomes, including issues related to access, price formation, and technical issues. In addition, the outline for the draft auctioning regulation for Phase III and beyond foresees the appointment of a single auction monitor. Respondents to the EU’s consultation on Phase III auctioning generally favored this proposal, although some respondents noted that efforts to curb market abuse at auctions would be ineffective if not accompanied by similar efforts in the secondary market.

Appendix III: Allowance Sales Conducted in the EU ETS and RGGI

[Table not reproduced: for each EU member state reviewed and for RGGI, the percentage of the cap sold (or expected to be sold) by phase, the years and type of each sale, clearing or weighted average prices by auction, and total revenue raised (rounded).]

In addition to the contact named above, Michael Hix (Assistant Director), Cindy Gilbert, Robert Grace, Richard Johnson, Jessica Lemke, Micah McMillan, Benjamin Shouse, Jeanette Soares, Ardith A. Spence, and Kiki Theodoropolous made key contributions to this report.
Congress is considering proposals for market-based programs to limit greenhouse gas emissions. Many proposals involve creating a cap-and-trade program, in which an overall emissions cap is set and entities covered by the program must hold tradable permits--or "allowances"--to cover their emissions. According to the Congressional Budget Office (CBO), the value of these allowances could total $300 billion annually by 2020. The government could sell the allowances, give them away for free, or use some combination of the two. Some existing cap-and-trade programs have experience selling allowances. For example, member states participating in the European Union's (EU) Emissions Trading Scheme (ETS) have sold up to about 9 percent of their allowances, and the amount of auctioning is expected to increase significantly starting in 2013. In the United States, the 10 northeastern states participating in the Regional Greenhouse Gas Initiative (RGGI) have auctioned about 87 percent of their allowances. This report is part of GAO's response to a request to review climate change policy options. This report describes the implications of different methods for selling allowances, given available information and the experiences of selected programs. GAO reviewed relevant literature and interviewed program officials from the EU and RGGI, economists, and other researchers. This report contains no recommendations. The method of selling emissions allowances can have significant implications for a cap-and-trade program's outcomes, and therefore, it is important that the method be chosen based on well-defined goals. Goals often cited by program officials and economists include: maintaining simplicity and transparency, maximizing participation, promoting economic efficiency, generating a price that reflects the marginal cost of reducing emissions, avoiding market manipulation, raising revenues, and minimizing administrative costs. 
According to program officials, it is important to identify goals prior to choosing a sales method, as tradeoffs may exist. Some goals may also be interrelated--for example, a simple and transparent design may boost participation and reduce the risk of market manipulation. Once goals are identified, policymakers face a number of choices regarding the design of a sales mechanism. Existing programs have used different mechanisms to sell allowances, including direct sales through exchanges and auctions. EU officials described exchange-based sales as effective and easy to implement, although they and other economists questioned whether this approach would be suitable for selling a high volume of allowances. Program officials also reported that auctions, the more commonly used sales mechanism in the EU and RGGI, effectively distributed allowances to program participants. However, some economists noted that auctions are not "one size fits all," and should be designed to take into account market characteristics, such as the number of potential buyers. Using auctions to sell allowances would entail a number of other design choices. For example, policymakers could decide to utilize existing auction infrastructure, such as that used in exchanges or government auctions, or develop a new platform. Choices must also be made regarding the auction format and other design elements. (1) Auction format: The auction format determines, among other things, the price that winning bidders pay for allowances and the number of bidding rounds. To date, ETS and RGGI auctions have used a single round format in which each participant that bids above a certain price receives allowances at that price. Program officials expressed general satisfaction with this format, and economists noted that its relative simplicity may encourage participation. 
However, some economists also recommended that policymakers consider other formats as well, such as multiple-round auctions, given that experience with large-scale allowance auctions has been limited to date. (2) Other auction design elements: Apart from the auction format, other elements may affect outcomes, including: participation requirements, the frequency and timing of auctions, measures that establish lower or upper limits on allowance prices, and rules governing auction monitoring and the reporting of results.
Each fiscal year, the Department of Energy (DOE) requests new obligational authority from the Congress to meet the costs of running its programs. During a fiscal year, new obligational authority can be adjusted to reflect changes in authority, such as a rescission of authority. These adjustments result in an adjusted new obligational authority for DOE, representing the net amount of new resources available to DOE in a fiscal year. In fiscal year 1995, DOE received about $17.8 billion in adjusted new obligational authority. The Congress provides DOE with its obligational authority through two major appropriations acts—Energy and Water Development Appropriations and Interior and Related Agencies Appropriations. The Energy and Water Development appropriation provides the bulk of DOE’s funding—about $16.2 billion in fiscal year (FY) 1995. In comparison, the Interior and Related Agencies appropriation provided about $1.6 billion in FY 1995. These two major appropriations acts are further broken down into more specific appropriations for DOE’s programs. The programs can receive funding from more than one specific appropriation but usually receive a majority of their funding from one or two specific appropriations. When appropriating funds for an agency—providing the authority to incur obligations—the Congress sets the amount and purpose of the funds and the time frame during which the funds will be available. When a specific time frame is defined, referred to as a fixed appropriation, the period is typically 1 to 5 years. However, some appropriations do not restrict the time in which the funds must be obligated but state that the funds are “to remain available until expended” or to “remain available without fiscal year limitation.” This is generally referred to as “no-year” authority. DOE receives no-year authority for most of its activities. With no-year authority, DOE may retain unexpended balances (both unobligated balances and uncosted obligations) indefinitely. 
In contrast, under a fixed appropriation, unobligated balances are no longer available for new obligations after the appropriation has expired. Both unobligated balances and uncosted obligations are cancelled, and the expired account is closed 5 years after the period in which the funds were available. DOE has accumulated significant carryover balances from prior years’ appropriations but still receives new obligational authority each year under congressional appropriations. These carryover balances grew from $7.7 billion in fiscal year 1991 to $12 billion in fiscal year 1995. The carryover balances have come to represent a significant portion of the total resources DOE has available to meet the costs of its programs. In fiscal year 1995, DOE had $29.8 billion in total resources available, consisting of $12 billion in carryover balances and another $17.8 billion in adjusted new obligational authority. (Fig. 1.1 compares the sources of DOE’s total available resources—DOE’s adjusted new obligational authority and carryover balances—over the last 5 complete fiscal years.) The Congress, recognizing the growing significance of carryover balances, has begun to consider these balances when making decisions about providing new obligational authority. Over the last 5 years, the Congress has recommended that DOE use its carryover balances in lieu of new obligational authority. However, the Department’s carryover balances continue to be significant. (See fig. 1.2.) DOE’s increasing budget needs have also heightened the Department’s and the Congress’s efforts to analyze why these carryover balances exist and whether they exceed the requirements for DOE’s programs and are therefore available to reduce the need for new obligational authority. Although attention has focused on uncosted obligations ($8.4 billion in fiscal year 1995), unobligated balances also contribute significantly to carryover balances ($3.6 billion in fiscal year 1995). 
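The relationships among the fiscal year 1995 figures cited above can be checked with simple arithmetic:

```python
# Figures from the report, fiscal year 1995, in billions of dollars.
uncosted_obligations = 8.4
unobligated_balances = 3.6
carryover_balances = uncosted_obligations + unobligated_balances  # $12 billion

adjusted_new_obligational_authority = 17.8
total_resources = carryover_balances + adjusted_new_obligational_authority  # $29.8 billion
```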
Figure 1.3 shows the contribution of both uncosted obligations and unobligated balances to the total carryover balances over the last 5 years. Appendix I provides details on the carryover balances, DOE’s use of them, and the adjusted new obligational authority for DOE’s appropriations over the last 5 years. At the beginning of fiscal year 1995, DOE had $3.6 billion in unobligated balances—an increase from $1.7 billion at the beginning of fiscal year 1991. Unobligated balances can result from the “pooling” of funds as they move through DOE’s funding process, which is depicted in figure 1.4. At the beginning of the process, the Congress appropriates obligational authority to DOE through its appropriations acts. During the fiscal year, the Office of Management and Budget (OMB) apportions the appropriations to DOE. Once DOE receives the apportionment from OMB, it may allot the funding to its programs. DOE allots funding to its programs on the basis of approved financial plans for each program. These plans, developed by the programs in conjunction with the Office of the Chief Financial Officer (CFO), provide direction on the amount of funding to be allotted to the programs and the timing of those allotments. Not all the funding is allotted to the programs at once. Typically, some funds are held in reserve for various reasons, including the need to hold funding back because of direction from OMB or the Congress. Thus, unallotted appropriations are the first area where funding can pool. After receiving their allotment, DOE’s programs then make obligations within their organizations and to contractors to conduct the programs’ activities. However, not all the allotments are obligated. For example, funding may not be obligated if a program cannot yet proceed with an activity because it is awaiting completion of some legal proceeding or compliance with an environmental requirement. These unobligated allotments at the program level are the second area where funding can pool. 
Unobligated allotments and unallotted appropriations are the two categories of unobligated balances held by DOE. DOE must ask OMB to “reapportion” any unobligated balances remaining at the end of the fiscal year for the next year, but that step is typically only a formality, and DOE retains its unobligated balances from year to year. Unobligated balances can also be created by deobligating funding that had been obligated but for which the funding’s original purpose no longer exists—for example, a construction project that is cancelled or reduced in scope. This funding is held in reserve either by the program (as an unobligated allotment) or by the CFO (as an unallotted appropriation). Typically, this funding includes the excess funding that DOE has identified and is (1) holding to reduce the following year’s budget request or (2) proposing that the Congress allow the Department to reallocate to a use that differs from the funding’s original purpose. With no-year authority, DOE can request reallocation of unobligated funds indefinitely. DOE notifies the Congress of its intent to reallocate funds for purposes other than those specified in the appropriation by reprogramming within an existing appropriation or by transferring the funds between appropriations. DOE typically obligates the majority of its obligational authority for a new fiscal year to the various contractors that implement its programs at facilities throughout the nation. As the contractors receive goods and services, they liquidate or “cost” the obligations. However, not all the obligations are costed during a given year, and these uncosted obligations can accumulate from one fiscal year to the next. This accumulation represents the final area where DOE’s funding can pool and contribute to carryover balances, as figure 1.4 showed. 
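The funding flow just described, with its three pooling areas, can be modeled as a simple pipeline. The function name and figures here are hypothetical; only the stage names and pool definitions follow the text:

```python
def trace_funding(appropriated, apportioned, allotted, obligated, costed):
    """Trace where funding 'pools' as it moves through DOE's process.

    Returns the three pools that together make up carryover balances.
    All amounts are hypothetical illustrations.
    """
    # Pool 1: appropriations apportioned by OMB but not yet allotted
    # to programs (held in reserve, e.g., by the CFO).
    unallotted_appropriations = apportioned - allotted
    # Pool 2: allotments the programs have not yet obligated.
    unobligated_allotments = allotted - obligated
    # Pool 3: obligations contractors have not yet liquidated ("costed").
    uncosted_obligations = obligated - costed
    return {
        "unallotted_appropriations": unallotted_appropriations,
        "unobligated_allotments": unobligated_allotments,
        "uncosted_obligations": uncosted_obligations,
    }

pools = trace_funding(appropriated=100, apportioned=100,
                      allotted=90, obligated=80, costed=65)
# Unobligated balances = pool 1 + pool 2; adding pool 3 gives
# total carryover balances, mirroring figure 1.4's description.
```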
In 1992, we testified that DOE’s uncosted obligations were growing—totaling over $7 billion at the beginning of fiscal year 1992—and that DOE did not have an effective system for analyzing these uncosted obligations to determine the extent to which they could be used to reduce DOE’s budget requests. At the beginning of fiscal year 1995, the uncosted obligations remained significant, at about $8.4 billion. In response to our testimony, the Congress, in the Energy Policy Act of 1992, directed DOE to submit with each annual budget request a report of its uncosted obligations at the end of the previous fiscal year. The report was to (1) show what portions of the uncosted obligations were committed and uncommitted, (2) describe the purposes for which all such funds were intended, and (3) explain the effects that the information in the report had on the annual budget request that DOE was submitting. As required by the act, DOE has issued three reports on the status of its uncosted obligations for the end of fiscal years 1992, 1993, and 1994. Our objectives in this review were to determine whether (1) DOE has an effective approach for identifying the carryover balances that exceed its programs’ requirements and may be available to reduce its budget request and (2) opportunities exist to develop a more effective approach for analyzing these carryover balances. This report is based in large part on a series of reviews that we have conducted over the past 3 years. (See a list of our related reports at the end of this report.) To obtain information on DOE’s unobligated balances and uncosted obligations, we obtained and reviewed internal reports from DOE’s accounting and financial systems that track how DOE manages and monitors its funding. We did not attempt to verify the data in these reports or reconcile the data to published sources. 
We obtained separate reports on the level of the unobligated balances and the status of the uncosted obligations for DOE’s programs to provide a complete picture of the total amount of carryover balances available for DOE’s programs. To examine the effectiveness of DOE’s approach for identifying the carryover balances that exceed the requirements of the Department’s programs and may be available to reduce the request for new obligational authority, we first reviewed DOE’s process for tracking and reporting on carryover balances and any analysis by DOE of what causes unobligated balances and uncosted obligations. Specifically, in reviewing the unobligated balances, we discussed with officials from DOE’s CFO how the funding moves from the congressional appropriations through DOE to the various programs and how unobligated balances are created in this process. We also obtained accounting reports from DOE’s CFO that track unobligated balances and discussed with that office DOE’s funding process and the reasons that unobligated balances exist. We examined reports from the current and past fiscal years to track the status of these balances over the last several years. We talked to budget officials for DOE’s programs and officials from DOE’s CFO to understand how, in developing its budget, DOE considers unobligated balances as a potential way to reduce its annual budget request. In examining the uncosted obligations, we reviewed DOE’s reports on uncosted obligations for the past 3 years. In reviewing these reports, we attempted to verify the accuracy of the characterization by DOE’s contractors of the uncosted obligations as committed or uncommitted to meeting the programs’ requirements. In our reviews over the first 2 years, we focused on the uncosted obligations relating to DOE’s Defense Programs and Environmental Management programs because they represent about half of DOE’s budget. 
However, in our most recent review, we expanded our scope to include other DOE programs: Energy Efficiency and Renewable Energy, Energy Research, Nuclear Energy, and Fossil Energy. Together, these six programs account for about 85 percent of DOE’s annual budget. Over the last 3 years, we have reviewed at least once the uncosted obligations held by M&O contractors at 17 different DOE sites. In our most recent review, we also examined some of the uncosted obligations held by the smaller, nonintegrated contractors. We talked to budget officials for DOE’s programs and officials from DOE’s CFO to understand how DOE develops its proposals for using uncosted obligations to reduce its budget request. In particular, we discussed the role that DOE’s report on the uncosted obligations plays in the process. To identify opportunities to develop a more effective approach for analyzing the carryover balances, we discussed current problems with the process and potential improvements with representatives from DOE’s programs and officials from DOE’s CFO. We also reviewed a 1994 report on uncosted obligations by a subcommittee of DOE’s Budget Stakeholders Group that attempts to provide greater assurance that the uncosted balances will not exceed what is necessary to pay for program commitments made in prior years. We further discussed with representatives from DOE’s programs and the CFO ideas within DOE for improving the Department’s approach to analyzing carryover balances and capacity to effectively identify available balances that may be used to reduce requests for new obligational authority. We conducted our review from June 1995 through March 1996 in accordance with generally accepted government auditing standards. We provided a draft of this report to DOE for its review and comment and discussed the draft with officials from DOE’s CFO, including the Director of the Office of Budget. In general, the officials agreed that the report was accurate and factual. 
Where appropriate, we made several changes to the report in response to specific comments on the facts presented. In commenting on the draft, DOE officials noted that as the report was being developed, recent data on the carryover balances at the beginning of fiscal year 1996 have become available and that these data show an overall reduction of $2.4 billion in carryover balances from fiscal year 1995. Thus, over the most recent 2 years, carryover balances declined from a high of $12.9 billion at the beginning of fiscal year 1994 to $9.6 billion at the beginning of fiscal year 1996. While DOE officials agreed that a more structured approach for analyzing carryover balances is needed and will improve the analysis of these balances, they believe that DOE efforts have had a positive impact on the levels of carryover balances. We were not able to include the details of these recent data in our report because complete data for fiscal year 1996 were not available at the time of our review. However, we agree that DOE’s overall carryover balances had decreased to $9.6 billion at the beginning of fiscal year 1996. We believe this downward trend should be recognized as a positive development and can be attributed, in part, to continued scrutiny by DOE, the Congress, and GAO. However, $9.6 billion in carryover balances still represents significant resources. The second of DOE’s overall comments on the report and our evaluation of this comment are discussed in chapter 2. In formulating a budget request, DOE officials do not use a standard, effective approach for identifying excess carryover balances. Instead, DOE relies on broad estimates of potentially excess balances in its individual programs. As a result, DOE cannot be sure that these estimates appropriately reduce future years’ budget requests. DOE’s required annual report on the status of uncosted obligations is limited and not used to propose how much in carryover balances should be used. 
The report is limited because it does not (1) provide detailed analysis of all of the sources of carryover balances—namely, uncosted obligations at DOE’s nonintegrated contractors and unobligated balances; (2) accurately identify the available balances; and (3) provide relevant information that can be used for consideration in formulating the budget. Budget officials for DOE’s programs and representatives from the CFO told us that DOE relies on broad estimates in proposing the amount of excess carryover balances that can be used to reduce the budget request. Without any specific guidance, DOE arrives at its estimates of the carryover balances available for its various programs in a variety of ways, according to these officials. Some programs arbitrarily establish a goal for the use of excess carryover balances. For example, DOE’s Environmental Management program proposed the use of $300 million in carryover balances for the fiscal year 1996 budget. According to program officials, however, that amount was not based on any detailed analysis of the program’s balances to determine what might be available. Only after the amount was proposed did this program attempt to identify where the available balances might be found. CFO officials noted that DOE requires its field offices and M&O contractors to certify that uncosted balances have been considered in formulating their budget requests. These officials said they did not have the resources to verify these certifications. However, they thought the requirement was important and should continue. According to the CFO officials, the amount of carryover balances proposed to reduce a program’s new obligational authority is often simply a number “plugged in” at the end of the budget formulation process in order to meet an overall budget target. Many DOE program officials concurred that the use of carryover balances was simply another way to justify a reduction in new obligational authority. 
Most programs do not study their carryover balances to identify specific areas where excess balances exist and move these balances to meet other needs. Typically, a program simply reduces its new obligational authority by the amount proposed for the use of carryover balances. Often, the total amount of the reduction is simply prorated among program areas regardless of the status of the existing carryover balances. For example, within the Nuclear Energy program, some fiscal year 1995 obligational authority for some program areas—such as isotope support and test reactors—was reduced to compensate for the use of carryover balances, even though these program areas did not have any carryover balances. In 1994, a subcommittee of DOE’s Budget Stakeholders Group reported on DOE’s uncosted obligations. The report noted that budget balances must be carefully analyzed to ensure they do not exceed the amount of work that can be performed. The subcommittee found that “hoarding” of financial resources in excess of actual needs is a common management behavior. The report noted that while DOE has continued to use carryover balances to reduce its request for new obligational authority, it is unclear whether the carryover balances have been minimized. The almost $1 billion in carryover balances actually used in fiscal year 1995 to reduce DOE’s budget request represented only 8 percent of the $12 billion in carryover balances DOE held going into that fiscal year. Another concern within DOE is whether too much is being offered in carryover balances without a clear picture of the programs’ requirements. While overall balances grew by $4.3 billion between fiscal year 1991 and fiscal year 1995, the status of DOE’s many programs varies. 
Programs such as Defense Programs and Environmental Management have used large portions of their carryover balances, thus reducing their balances, while other programs, such as Energy Efficiency and Renewable Energy and Energy Research, have not used significant amounts of their carryover balances and have experienced growing balances, as figure 2.1 shows. Although DOE’s program and contractor officials work to categorize and report on uncosted obligations, DOE’s report on the uncosted obligations is limited and is not being used to determine the amount of carryover balances the Department proposes using to reduce its new obligational authority request. The report does not (1) provide detailed analysis of all of the sources of carryover balances, (2) accurately identify the available balances, and (3) provide relevant information on the carryover balances that is useful in formulating a budget. The annual report on uncosted obligations focuses primarily on those balances held by DOE’s integrated M&O contractors at the end of the fiscal year. While uncosted obligations at these contractors represent a significant portion of DOE’s carryover balances, another major portion of these carryover balances is represented by uncosted obligations at nonintegrated contractors and unobligated balances. At the beginning of fiscal year 1995, DOE’s integrated M&O contractors had $5 billion in uncosted obligations. However, the nonintegrated contractors had an additional $3.4 billion in uncosted obligations. Furthermore, DOE had unobligated balances of $3.6 billion at the start of fiscal year 1995. Table 2.1 provides a breakdown of DOE’s carryover balances over the last 3 years. DOE’s report provides data on the amount of uncosted obligations held by nonintegrated contractors but assumes that all these balances are committed to meeting the programs’ requirements. 
According to CFO officials, DOE does not focus its analysis of uncosted obligations on the balances held by nonintegrated contractors because it is more difficult to deobligate excess funds from these contractors than from integrated contractors. Nonintegrated contractors perform specific tasks that must be examined individually, while integrated contractors are involved in broader tasks (referred to by DOE as “level of effort” tasks). DOE’s only recourse for the excess balances identified with a nonintegrated contractor is not to provide any additional funding to the contractor in the future or to terminate the contractor’s contract, which could result in financial penalties under the terms of the contract. However, in reviewing selected balances held by nonintegrated contractors during our review of DOE’s uncosted obligations at the end of fiscal year 1994, we found that not all of these balances were needed to meet requirements. For example, we found that DOE’s Environmental Management program had about $5 million in excess balances for one consulting contract. This program, which was carrying over about 8 months’ worth of funding for the contract (about $11 million) at year-end, agreed that it did not need about $5 million of that amount and took action to reduce the balance. In addition, we identified some balances that were not committed to any contract but were simply being held by the program office. For example, at the Albuquerque Operations Office, we identified about $4 million being held to reimburse employees for moving expenses that were incurred between 1986 and 1991. In reviewing 30 cases, we found that all or portions of the funds held in each case were excessive and should have been deobligated. Furthermore, while DOE’s report gives the amounts of unobligated balances used in a fiscal year, it does not provide any detailed information on the unobligated balances. 
According to officials from DOE’s CFO, the Energy Policy Act of 1992 requires DOE to report only on uncosted obligations. However, unobligated balances contributed $3.6 billion to DOE’s carryover balances at the start of fiscal year 1995, and these balances had grown from $1.7 billion at the beginning of fiscal year 1991. Valid reasons may explain why unobligated balances are not available to reduce future budget requests. For example, because DOE’s Nuclear Energy program received funding for a safety program for Russian reactors from the State Department late in fiscal year 1994, this funding was unobligated but was not available, according to program officials. Other balances represent uncosted obligations that DOE has identified as excessive and has deobligated (returned to unobligated status) to be used to reduce the new obligational authority the Department requests. For example, according to Defense Program officials, some funding was deobligated when activities were canceled because of the cutback in the demand for new nuclear weapons and materials. These cases, however, do not explain all of the unobligated balances. The total unobligated balances at the beginning of fiscal year 1995 ($3.6 billion) exceeded the carryover balances used in fiscal year 1995 ($0.9 billion) by $2.7 billion. Nevertheless, DOE’s report does not provide any detailed analysis on these balances—information that could be used as budget decisions are made. In an attempt to define the needs for uncosted obligations for its programs, DOE’s report, as required by the Energy Policy Act of 1992, divides uncosted obligations into two overall categories: (1) encumbered (or committed) uncosted obligations and (2) unencumbered (or uncommitted) uncosted obligations. Generally, encumbered uncosted obligations are balances committed under legally enforceable agreements entered into by DOE’s contractors, such as purchase orders or contracts. 
Unencumbered uncosted obligations are balances that have not yet been encumbered or committed by the contractors and that are thus potentially available to reduce DOE’s budget request. DOE’s report further divides the unencumbered uncosted obligations into three categories: (1) “approved work scope,” (2) “prefinancing,” and (3) “remaining unencumbered.” Generally, approved work scope consists of the funds for work, such as work under a purchase requisition, that is clearly defined and specific in scope but that does not yet represent a legal commitment; prefinancing is the funding maintained to ensure that operations at the facilities continue if funding lapses at the beginning of a fiscal year; and the remaining unencumbered funds are the balance of the uncosted obligations. The report’s detailed analysis of the uncosted obligations focuses on those balances held by DOE’s integrated M&O contractors. However, DOE relies on its M&O contractors to provide the detailed breakdown of their uncosted obligations into the categories of encumbered, approved work scope, prefinancing, and remaining unencumbered. DOE verification of the accuracy of the data reported by the contractors has been limited. Over the last 3 years, we have identified almost $500 million in uncosted obligations that was available but that the contractors had reported as not available. We have also found some balances reported as encumbered and approved work scope that were actually unencumbered. For example, we identified $46.2 million reserved for 15 projects at the Savannah River Site at the end of fiscal year 1994 that were no longer needed because of cost underruns, reductions in the projects’ scope, or cancellation of projects. In addition, we have found the contractors inconsistent in characterizing and reporting the status of their uncosted obligations. 
While the distinction between encumbered and unencumbered uncosted obligations is fairly straightforward, the contractors have not been consistent in how they break down the unencumbered obligations into the categories of approved work scope, prefinancing, and remaining unencumbered. For example, some contractors believed that congressional intent (i.e., the fact that the Congress provided the obligational authority) constituted approved work scope, while others thought that a specific directive from a DOE office was necessary. In addition, the quality and ability of the contractors’ systems to track uncosted obligations by these three categories have varied in the past, contributing to reporting inconsistencies and imposing an additional burden on the contractors as they try to interpret DOE’s definitions and apply them to their balances. As a result, DOE’s report is inconsistent in its characterization of uncosted obligations, making it difficult to determine what balances are needed to meet the programs’ requirements and what balances are available. Although DOE provides a report on its uncosted obligations along with its congressional budget request, DOE officials said that this report is not very useful to them in determining the amount of carryover balances to propose for use in a given fiscal year. The report consists only of historical data and does not contain projections of what the balances may be for the future budget under consideration. Using the development of the budget for fiscal year 1996 (a fiscal year that began on October 1, 1995) as an example, DOE began receiving budget submissions from its field offices in the summer of 1994 and subsequently conducted internal budget reviews. After OMB’s review and approval, DOE’s fiscal year 1996 budget request was submitted to the Congress in February 1995. 
However, DOE’s report on uncosted obligations accompanying the fiscal year 1996 budget request was limited to reporting on the status of uncosted obligations at the end of fiscal year 1994 (Sept. 30, 1994)—a full year before the beginning of fiscal year 1996, whose budget was under development. The lack of any projections of what the balances might be for fiscal year 1996 limited the usefulness of this report in proposing an amount of carryover balances to use to reduce the budget request for fiscal year 1996. In commenting on a draft of this report, DOE officials noted that the draft report discussed at length the limitations of DOE’s annual report to the Congress on uncosted obligations. While the officials agreed that DOE did not have a standard, effective approach for analyzing carryover balances, they did not believe that the draft’s in-depth discussion of the limitations of DOE’s annual report on uncosted obligations was germane to this issue because the report is not used to analyze carryover balances. The report is provided to the Congress only to meet the requirement of the Energy Policy Act of 1992 for information on uncosted obligations. Instead, DOE relies on its field offices to take into account all of the budget resources that may be available in a budget year, including potential carryover balances, when formulating their requests for new obligational authority. We believe the discussion of the report on uncosted obligations is relevant to this issue because the purpose of the report, which is required by the Energy Policy Act of 1992 and submitted along with DOE’s budget request, is to analyze uncosted obligations to assist in the development of DOE’s budget. We believe it is important to identify the limitations of this report to explain why DOE does not use this report and to demonstrate that, by itself, this report does not provide DOE with an effective option for analyzing carryover balances. 
Within DOE, several ideas have been proposed to develop a more effective approach for determining the amount of carryover balances that are available to reduce budget requests for future years. To better determine the amount of carryover balances needed to meet its requirements, DOE’s Environmental Management program has proposed establishing goals for acceptable levels of uncosted obligations. In addition, to provide more useful information for budget decisionmakers, DOE’s CFO is exploring the idea of using cost estimates to project what the level of uncosted obligations will be at the beginning of the fiscal year under consideration. These ideas, if expanded throughout DOE’s programs, could form the basis for a more effective approach for analyzing carryover balances that (1) develops standard goals for all of the carryover balances in DOE’s programs on the basis of each program’s requirements, (2) projects what all carryover balances will be at the beginning of the fiscal year for which the budget is being developed, and (3) focuses on justifying the differences between the goals and the projected carryover balances. When the projected balances exceed the goals, DOE could analyze the cause of these differences and identify carryover balances that are available to reduce its budget request. DOE’s Budget Stakeholders Group noted that the Department did not have a methodology for determining the “proper” amount of uncosted obligations. The group stated that management would have to make a concentrated effort to develop a methodology. Some programs, such as Defense Programs, had examined their balances to some extent in order to identify the specific reasons why carryover balances existed, but there was no DOE-wide effort to develop a methodology for determining the appropriate or “proper” amount of carryover balances. 
However, the Office of Waste Management, within DOE’s Environmental Management program, has undertaken an effort to determine the “proper” or “right” amount of uncosted obligations. In general, this office is assessing what its needs would be under normal business operations in order to establish goals or target levels for the amount of uncosted obligations needed to meet these requirements. The office has established separate benchmarks for funding operating activities, capital equipment procurements, and construction projects. For operating funding, the office has estimated that, on average under normal operations, it would expect a 1-month lag between a commitment (or encumbrance) of the funding and the actual expenditure of the funding for that commitment. Thus, for a year’s operating funding, the office would expect a minimum of 1 month’s funding (or 8 percent of the total operating funding) at year-end to represent the uncosted obligations necessary to meet the program’s requirements. Using similar logic, the office has estimated that an average time of about 15 to 18 months is needed before procurements of capital equipment and construction projects are costed. Thus, for every dollar available for capital equipment and construction, the office would expect about 3 to 6 months’ worth of the funding (or 25 to 50 percent of the total) to be uncosted at year-end. The office’s idea of establishing goals will need some adjustments if it is to be applied to all of DOE’s programs. First, major construction projects are unique in that they are line items in DOE’s budget, so that the funding is provided directly. The status of these projects is easier to assess because they have a clear scope of work, milestones, and budgets within which to work. 
Thus, as Defense Programs and other programs noted, there is no need to establish a target level of carryover balances for construction projects because each one is unique and its level of carryover balances can easily be measured against the remaining scope of work, milestones, and specific budget request. In addition, major construction projects can last more than 18 months and can be funded over several years. Second, under the Office of Waste Management’s approach, the goals for uncosted obligations would be a certain percentage of the total resources that can be costed—including all of the carryover balances at the beginning of the year (uncosted obligations and unobligated balances) plus all of the adjusted new obligational authority received in a year. However, this approach assumes that a percentage of the uncosted obligations existing at the beginning of the year would again be carried over for an additional fiscal year. This assumption is inconsistent with the assumption made in developing the goal that uncosted obligations would be needed only for a certain amount of time (e.g., 1 month for operating funding) before the balances were costed. A more consistent alternative would be to set the goals in relation only to the total obligation authority. Thus, this approach would assume that all uncosted obligations carried into the beginning of a fiscal year would be used up before the end of the fiscal year. Finally, the approach proposed by the Office of Waste Management focuses on establishing goals for the levels of uncosted obligations. To apply this approach to all carryover balances, some consideration needs to be given to what the programs would require for the other major part of carryover balances—unobligated balances. DOE program officials we talked to did not see any programmatic requirements for unobligated balances in the transition between fiscal years. Thus, the goal for unobligated balances for a program would normally be zero. 
It should be recognized, however, that unobligated balances can exist for legitimate reasons. For example, funding received late in the year may result in a program’s having unobligated balances. To provide more relevant information on carryover balances that can be used when formulating the budget, DOE could project what all carryover balances (unobligated balances and uncosted obligations) are going to be for its programs at the beginning of the fiscal year under consideration. Although DOE projects unobligated balances and unpaid obligations in its budget request, the projections are very broad—one gross number for the entire appropriation. Accurate projections of programs’ carryover balances are necessary to provide information that is relevant to the budget under consideration. In studying uncosted obligations, DOE’s Budget Stakeholders Group noted that the Department needs to project balances in order to improve the quality and timing of information on balances, information that forms the basis for management’s actions to reduce these balances. The key element in projecting carryover balances is estimating what the costs will be for a given year. DOE’s CFO has recently sought some programs’ estimates of costs in order to project uncosted obligations. DOE’s programs involve myriad contractors and efforts, and it can be difficult to develop overall cost estimates. However, DOE’s programs are in the best position to estimate their costs. Because some programs use multiple facilities, no individual contractor or site can address questions involving a program’s needs and the resources that may be available. Some programs have developed fairly accurate cost-tracking and estimating systems. For those programs that lack good cost-estimating systems, the use of historic costing experience is a viable alternative. DOE’s CFO has examined historical costing averages in order to develop cost estimates for the Department’s programs. 
In addition, the use of historic cost estimates can serve as a check on the reasonableness of the cost estimates generated by each program. Comparing the goals for carryover balances with the projected carryover balances for DOE’s programs at the beginning of the fiscal year for which the budget is being developed could provide current information that is more relevant to the budget process. In particular, such a comparison would enable DOE to identify the carryover balances that may exceed the programs’ requirements. Such a comparison also places the burden of justifying the carryover balances on DOE’s program managers as well as on DOE’s M&O contractors. Budget decisionmakers can review the programs’ goals, projected carryover balances, and justifications for deviations from the goals in the same way they consider the justifications for requests for new obligational authority. DOE program officials we spoke to agreed with the general principle of establishing goals for and projecting the carryover balances. However, they noted that expanding this idea throughout DOE would require considering the unique characteristics of each program in establishing goals. For example, for research and development activities in the Fossil Energy program, grants are awarded on a 1- to 3-year funding cycle, so that this program’s expected level of carryover balances would differ from that of a program such as Waste Management. DOE program officials agreed that the programs should be able to explain and justify the unique aspects of their programs that may require adjustments to their goals for carryover balances. CFO officials suggested that their office may need to establish a DOE-wide working group to evaluate DOE’s programs and establish reasonable goals. In general, the programs should be able to justify the differences between the goals and the projected balances. 
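The goal-setting and comparison approach described above can be illustrated with a short sketch. This is only an illustration of the logic, using the Office of Waste Management's stated benchmarks (about 1 month's worth of operating funding, 25 to 50 percent of capital equipment and construction funding, and a goal of zero for unobligated balances); the function names and dollar figures are hypothetical, not DOE data or systems.

```python
# Illustrative sketch of the goal-and-comparison approach described above.
# Function names and dollar figures (in millions) are hypothetical.

def balance_goal(operating, capital_and_construction):
    """Year-end carryover goal range (low, high) for a program."""
    # Operating funds: about a 1-month lag between commitment and cost,
    # i.e., roughly 1/12 (8 percent) of annual operating funding.
    op_goal = operating / 12
    # Capital equipment and construction: 25 to 50 percent expected to
    # remain uncosted at year-end. Unobligated balances: goal of zero.
    return (op_goal + 0.25 * capital_and_construction,
            op_goal + 0.50 * capital_and_construction)

def excess_to_justify(projected, goal_high):
    """Projected carryover above the high end of the goal range is the
    amount a program would have to justify, or make available to reduce
    its request for new obligational authority."""
    return max(0.0, projected - goal_high)

low, high = balance_goal(operating=1200.0, capital_and_construction=400.0)
excess = excess_to_justify(projected=450.0, goal_high=high)
print(f"goal range {low:.0f}-{high:.0f}, excess to justify {excess:.0f}")
```

The point of the comparison is that any projected balance above the goal range must either be explained by a program-specific circumstance or be treated as available for reducing the budget request.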
Officials of some major programs we spoke to outlined programmatic reasons that their programs’ projected balances could exceed the goals, citing the following examples:

The balances for Defense Programs were higher than normal in fiscal year 1995 because (1) operating funding was tied up in Cooperative Research and Development Agreements and (2) funding had been transferred to the Environmental Management program as sites were transferred from weapons activities to environmental cleanup.

Funding for environmental restoration is provided as operating funding, but the restoration activities actually involve many construction projects. Thus, the balances for environmental restoration can be expected to be higher than those needed for other operating activities.

The Nuclear Energy program received about $30 million in funding for a safety program for Russian reactors from the State Department late in fiscal year 1994. This funding was thus unobligated at year-end but was not available to reduce the program’s request for new obligational authority.

In an attempt to develop a more effective approach for analyzing carryover balances, offices within DOE have proposed several promising ideas that provide an opportunity to address the major problems with the Department’s current approach. These ideas involve (1) developing standard goals for carryover balances in order to define the programs’ needs for balances and (2) projecting the carryover balances to provide relevant information on the status of the balances. Without a more structured process for considering carryover balances in formulating the budget, it is unclear whether the amount of carryover balances DOE is currently proposing for use by its programs is adequate, too small, or possibly even too large. 
We recommend that the Secretary of Energy develop a more effective approach for identifying the carryover balances that exceed the requirements for the Department’s programs and are thus available to reduce the annual budget request. Expanding on the efforts already being explored could lead to a process that establishes goals for each program for all of the carryover balances (including unobligated balances) needed to meet its unique requirements, projects the carryover balances for the beginning of the new fiscal year’s budget on the basis of each program’s cost estimates and cost history, and focuses analysis on justifying any differences between a program’s goals and the projected balances in order to identify the balances that exceed the program’s requirements and are thus available to reduce DOE’s budget request.
GAO reviewed the Department of Energy's (DOE) approach for identifying carryover balances from previous years' budgets that may be available to reduce budget requests for new fiscal years. GAO found that: (1) DOE does not use a uniform, effective method of identifying carryover balances; (2) DOE makes separate, general estimates of excess funds in its individual programs; (3) some DOE programs have reduced their carryover balances while others have allowed their carryover balances to increase; (4) the DOE annual report on uncosted obligations is not used to identify potential carryover balances because it does not identify all of the carryover balances, identify all of the uncosted obligations that are available to reduce DOE's budget request, or include enough information to be useful during the formulation of DOE's budget; (5) within DOE, the Office of Waste Management has studied the operations of DOE programs and developed standard expectations for their levels of uncosted obligations; (6) the Office of Waste Management has determined that balances in excess of one month's worth of operating activities funding may indicate excess carryover balances and should be investigated; (7) within DOE, the Office of the Chief Financial Officer has suggested estimating the uncosted obligations at the start of each fiscal year using estimates of yearly costs or past costs; and (8) the various plans developed in DOE have not been implemented departmentwide and would need to be altered to fit individual programs.
On a typical day, approximately 100,000 flights around the world reach their destinations without incident, and the safety of the global air transportation system has continued to improve in recent years. The United States air transport system, in particular, is experiencing one of the safest periods in its history. The FAA, an agency of the Department of Transportation, is primarily responsible for the advancement, safety, and regulation of civil aviation, as well as overseeing the development of the air traffic control (ATC) system. The FAA’s stated mission is to provide the safest, most efficient aerospace system in the world. Air traffic control on a global level is coordinated by ICAO, which establishes global standards for air navigation, air traffic control, aircraft operations, personnel licensing, airport design, and other issues related to air safety. FAA collaborates with ICAO in setting standards and procedures for aircraft, personnel, airways, and aviation services domestically and throughout the world. Air navigation service providers (ANSPs) are organizations authorized to provide air navigation services. For example, FAA’s Air Traffic Organization is responsible for providing safe and efficient air navigation services in U.S. airspace. Air traffic services units provide air traffic control, flight information, and alerting services in portions of airspace called flight information regions. According to Annex 11 to the Convention on International Civil Aviation, air traffic control is provided to prevent collisions and expedite and maintain an orderly flow of air traffic, among other things. Surveillance plays an important role in air traffic control as the ability to accurately and reliably determine the location of aircraft has a direct influence on how efficiently a given airspace may be utilized. 
Additionally, according to ICAO, surveillance can be used as the basis for automated alert systems, as the ability to actively track aircraft enables air traffic control to be alerted, for example, when an aircraft deviates from its altitude or route. Radar is a surveillance technology that provides the air traffic controller with an on-screen view of aircraft position. Air traffic control uses radar to determine the position of aircraft and the aircraft’s reported altitude at a given time when traveling over land or coastlines. An aircraft’s transponder automatically transmits a reply when it receives an interrogation radio signal from ground radar stations. During the cruise portion of a flight within radar coverage, the aircraft’s position is reported at least every 12 seconds, depending on the rotational speed of the ground radar antenna. FAA and its aviation counterparts in other parts of the world—including Europe, Asia, and Australia—are in the process of transitioning from radar-based surveillance to a system using Automatic Dependent Surveillance-Broadcast (ADS-B), which once implemented, is expected to provide air traffic controllers and pilots with more accurate information to help keep aircraft safely separated in the sky and on runways. In areas without radar or ADS-B coverage—including oceanic airspace, remote geographic regions such as the North and South Poles, and some areas in Africa, Asia, and South America—pilots use radio communications systems to report the position of their aircraft to air traffic control. According to FAA guidance for oceanic and international operations, aircraft are to report their position to the ANSP responsible for the airspace where the aircraft is operated and should do so before passing from one flight information region to another. 
On routes that are not defined by designated reporting points, aircraft should report as soon as possible after the first 30 minutes of flight and at hourly intervals thereafter, with a maximum interval between reports of 1 hour and 20 minutes. On oceanic routes, aircraft should report their position at all designated reporting points, as applicable; otherwise, flights should report their position at designated lines of latitude and longitude. Aircraft flying in oceanic airspace may be equipped with additional avionics and satellite communications capabilities, such as the Future Air Navigation System (FANS), which creates a virtual radar environment to allow air traffic control to safely place more aircraft in the same airspace. ICAO requires that, at a minimum, aircraft operating over oceans have a functioning two-way radio to communicate with the appropriate air traffic control unit. FAA requires Part 121 operators (i.e., scheduled commercial air carriers) to carry certain communication and navigation equipment for extended over-water operations. For example, aircraft of these operators must have at least two independent long-range navigation systems and at least two independent long-range communication systems to communicate with at least one appropriate station from any point on the route. FANS software, which is integrated with an aircraft’s flight management system, provides a means for digital transmission of short messages between the aircraft and ATC using radio or satellite communication systems. To fly at optimum altitudes in the North Atlantic—the busiest oceanic airspace in the world—operators were required to equip with FANS by February 5, 2015. Aircraft equipped with FANS can transmit Automatic Dependent Surveillance-Contract (ADS-C) reports, which may include information on the plane’s current position and intended path to air traffic control.
Position reports sent through ADS-C transmit at defined time intervals, when specific events occur such as a sudden loss of altitude, or based on a request from air traffic control. In addition to the role of surveillance in air traffic control, FAA requires commercial airlines conducting scheduled and nonscheduled operations under part 121 of federal aviation regulations to have a flight following system in place to ensure the proper monitoring of the progress of each flight from origin to destination, including intermediate stops and diversions. Major airlines monitor the progress of flights from their operational centers using technologies such as the Aircraft Communications Addressing and Reporting System (ACARS), a communications system used predominantly for transmission of short text messages from the aircraft to airline operational centers via radio or satellite communication. When an aircraft is in distress, or does not communicate as expected, air traffic control’s responsibility for providing alerting services is set out in Annex 11 to the Convention on International Civil Aviation. There are three phases intended to notify search and rescue services to take appropriate measures:

Uncertainty phase: Established when communication has not been received from the crew within the 30-minute period after a communication should have been received.

Alert phase: Established when subsequent attempts to contact the crew or inquiries to other relevant sources have failed to reveal any information about the aircraft.

Distress phase: Established when more widespread inquiries have failed to provide any information, and when the fuel on board is considered to be exhausted.
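The three-phase escalation can be illustrated with a short sketch. The function and its inputs are simplified stand-ins for a determination that, under Annex 11, actually rests on controller judgment and additional criteria; none of the names below come from ICAO documents.

```python
from enum import Enum

class AlertPhase(Enum):
    NONE = "none"
    UNCERTAINTY = "uncertainty"
    ALERT = "alert"
    DISTRESS = "distress"

def classify_phase(minutes_overdue, contact_attempts_failed,
                   wider_inquiries_failed, fuel_exhausted):
    """Simplified sketch of the Annex 11 alerting-phase escalation.

    Phases are checked from most to least severe, mirroring the
    escalation described in the text.
    """
    if fuel_exhausted or wider_inquiries_failed:
        return AlertPhase.DISTRESS      # widespread inquiries failed or fuel gone
    if contact_attempts_failed:
        return AlertPhase.ALERT         # direct contact attempts revealed nothing
    if minutes_overdue >= 30:
        return AlertPhase.UNCERTAINTY   # 30 minutes past expected communication
    return AlertPhase.NONE
```

A flight 35 minutes overdue with no other indications would sit in the uncertainty phase; failed contact attempts would escalate it to alert, and exhausted fuel to distress.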
ICAO members establish search and rescue regions to provide communication infrastructure, distress alert routing, and operational coordination for supporting search and rescue services. Within these search and rescue regions, aeronautical rescue coordination centers are responsible for the search and rescue operations prompted by an aviation accident. ICAO encourages countries, where practicable, to combine their search and rescue resources into a joint rescue coordination center with responsibility for both aeronautical and maritime search and rescue. Search and rescue authorities may be alerted to distress situations by satellite constellations operated by the International Cospas-Sarsat Program that detect transmissions from an aircraft’s emergency locator transmitter. The emergency locator transmitter may be automatically activated by the shock typically encountered during an emergency or manually by a member of the flight crew. Satellites detect an activated transmitter and send the signals to ground stations, which determine the transmitter’s position and report to search and rescue authorities. Individual countries are responsible for providing these services. In the rare event of a disaster, after any survivors have been rescued, physical recovery of the aircraft’s wreckage and the flight data recorder (FDR) and cockpit voice recorder (CVR)—commonly referred to as the black boxes—from the crash site is a priority in order to determine the cause of and circumstances surrounding the accident. After recovering the recorders, each about 20 pounds and roughly the size of a shoebox, investigators from civil accident investigation authorities—such as the NTSB in the United States or the French Bureau d’Enquêtes et d’Analyses (BEA)—download and analyze data on flight conditions and the cockpit’s audio environment.
Other potential sources of information that can help determine the cause and circumstances of an aviation accident include any communications between the flight crew and air traffic control, radar track history, data transmitted from various systems onboard the aircraft, aircraft wreckage, and the crash site. ICAO standards for flight recorders, established in Annex 6 to the Convention on International Civil Aviation, are based on aircraft weight and provide that flight data recorders should retain the information recorded during at least the last 25 hours of operation and cockpit voice recorders at least the last 30 minutes. FAA establishes the domestic regulations, policies, and guidance for the certification and airworthiness of flight recorders, as it does for other equipment and instruments on part 121 aircraft. In general, turbine-powered commercial aircraft operating under part 121 must have a flight data recorder and a cockpit voice recorder, both of which undergo extensive testing to minimize the probability of damage resulting from a crash. See figure 1 for more detailed information on the FDR and CVR components. Surveillance limitations in oceanic airspace and disabled aircraft communications systems may make it more difficult to determine the precise location of an aircraft in distress or an accident site. Over land, radar monitors aircraft position in real time, but coverage diminishes more than 150 miles from coastlines or in remote airspace such as the polar regions. In non-radar environments, flight crews rely on procedural surveillance by periodically reporting their position to air traffic control when passing certain waypoints on their flight plan. Intervals between position reports vary, but aircraft in oceanic and remote airspace should report their position at least every 80 minutes, according to FAA guidance for oceanic operations.
According to one avionics manufacturer, it may take a flight crew 10 to 20 minutes to report their position using high frequency (HF) radio due to the disruptions caused by weather and atmospheric conditions over the oceans. Given that an aircraft cruises at speeds of more than 500 miles per hour (depending on altitude), an aircraft could travel more than 167 miles between 20-minute position reports, in comparison to radar’s ability to determine an aircraft’s location at least every 12 seconds. An aircraft reporting every 80 minutes could travel more than 600 miles between position reports. In airspace with less air traffic, such as remote oceanic regions, air traffic control does not require continuous contact with aircraft to maintain safe separation. Even more frequent position reports may not provide precise information on the aircraft’s location. For example, during its scheduled flight to Paris on June 1, 2009, AF447’s last regular ACARS position report was sent at 02:10 Coordinated Universal Time (UTC); maintenance messages were transmitted between 02:10 and 02:15, the approximate time of the plane’s crash. Based on the time the last ACARS message was received, investigators established a search area of more than 17,000 square kilometers, more than 500 nautical miles from any coastline, with a radius of 40 nautical miles centered on the plane’s last known location. MH370’s communications systems also included ACARS, but as discussed below, this system and other onboard systems stopped transmitting data during flight. Additionally, air traffic control and airline operation centers may be unable to determine an aircraft’s location if communications equipment onboard the plane is damaged, malfunctioning, or has been manually turned off. For example, MH370 departed from Kuala Lumpur at 16:42 UTC on March 8, 2014 on a scheduled flight to Beijing, China.
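The distance and area figures above follow from simple arithmetic, using the 500-mph cruise speed, reporting intervals, and 40-nautical-mile search radius given in the text; the helper function name is illustrative.

```python
import math

def miles_between_reports(speed_mph, interval_minutes):
    """Distance an aircraft can cover between consecutive position reports."""
    return speed_mph * interval_minutes / 60.0

# At the 500 mph cruise speed cited in the report:
print(miles_between_reports(500, 20))   # ~167 miles between 20-minute reports
print(miles_between_reports(500, 80))   # ~667 miles between 80-minute reports

# AF447 search area: a circle of 40-nautical-mile radius around
# the last known position.
KM_PER_NM = 1.852
radius_km = 40 * KM_PER_NM
area_km2 = math.pi * radius_km ** 2
print(round(area_km2))                  # just over 17,000 square kilometers
```

The 40-nautical-mile radius thus accounts for the "more than 17,000 square kilometers" search area investigators established.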
According to the Australian Transport Safety Bureau, the agency leading the search for the plane, MH370’s flight path includes three distinct sections:

1. an initial stage after takeoff in which the aircraft was under secondary radar, the transponder was operational, and ACARS messages were being transmitted;

2. a second stage in which onboard communications equipment were no longer working and the plane was only being tracked by military radar; and

3. a final stage in which the only available information on the flight’s path comes from satellite communications log data.

At 17:07, the aircraft transmitted its final automatic ACARS message, which included the weight of the fuel remaining on board. The flight crew’s last radio contact with Malaysian air traffic control occurred at 17:19, and MH370 then lost contact with air traffic control during a transition between Malaysian and Vietnamese airspace at 17:22. At 18:22, Malaysian military radar tracked MH370 flying northwest along the Strait of Malacca; this was the final radar data indicating the airplane’s position. After disappearing from military radar, MH370’s satellite communications system exchanged seven signaling messages—also referred to as “handshakes”—with a ground station via a satellite over the Indian Ocean from 18:25 until 00:19. According to the Australian Transport Safety Bureau, the final signaling message, a log-on request from the aircraft, indicates a power interruption on board that may have been caused by an exhausted fuel supply. At 01:15, MH370 did not respond to the signaling message from the ground station. Using the handshake data, which showed that the aircraft continued to fly for several hours after disappearing from radar, together with estimates of the aircraft’s range based on the fuel quantity included in the final ACARS message, investigators placed MH370’s final location somewhere in Australia’s search and rescue region, on an arc in the southern Indian Ocean.
The current phase of the search is focused on an area of approximately 60,000 square kilometers. Communication and coordination between air traffic control centers can be difficult in oceanic and remote areas. As discussed previously, responsibility for global air traffic control is divided geographically by flight information region. Over the course of an oceanic flight, an aircraft may transition frequently between flight information regions in areas in which the ability to communicate with air traffic control can be limited. With respect to AF447, the TASIL waypoint in the Atlantic Ocean is located on the boundary between the Brazilian and Senegalese flight information regions. According to the final report on the investigation of the AF447 accident, controllers from the Atlantico Area Control Center in Brazil—which had been in contact with the aircraft—and the adjacent Dakar Oceanic Area Control Center in Senegal—which never established contact with the aircraft—stated that the quality of the HF radio reception was poor the night of the accident, resulting in recurring communication problems. The report found that there were powerful cloud clusters on the route of AF447, which may have created notable turbulence. The Atlantico controller last had contact at 01:35, when the crew read off their altitude and flight plan. Over the next minute, the controller asked the crew three times for its estimated time to cross the TASIL waypoint but received no response. The flight, however, did not encounter serious problems until 02:10, and the accident occurred in the Atlantico region at approximately 02:14. The crew should have established contact with Dakar air traffic controllers at approximately 02:20 when the aircraft was due to pass the TASIL waypoint.
Dakar controllers stated that they were not concerned about the absence of radio contact with AF447 given the HF problems that night and since aircraft frequently crossed all or some of the Dakar flight information region without making radio contact. The AF447 final report concluded that the radio communication problems and meteorological conditions resulted in the controllers considering the situation (i.e., no contact with the flight) as normal. Furthermore, there were several communications breakdowns at critical junctures. Specifically, the report noted the lack of contact between the Atlantico controller and the flight crew before the transfer to the Dakar controller and the lack of contact between the Atlantico and Dakar controllers after AF447’s projected passage of the TASIL waypoint, both of which indicated that air traffic control had not effectively monitored the aircraft. The report also noted that a timely alert was not triggered because neither controller contacted the other, as each anticipated that the other would do so. See figure 2 for a timeline of the AF447 accident. Furthermore, according to the AF447 final report, information inquiries regarding the aircraft were not coordinated, resulting in air traffic control, search and rescue, and the operators questioning each other without making a decision about what action to take. Although the last contact with AF447 occurred at approximately 01:35, it took more than 9 hours for search teams to take off from Senegal and Brazil. The first search plane arrived at the TASIL waypoint approximately 13 hours after the crash. Contrary to ICAO standards and recommended practices, Brazil and Senegal did not have a search and rescue protocol.
Consequently, the Brazilian and Senegalese rescue coordination centers were not aware of each country’s available resources, and the report stated that it was not possible to quickly identify one aeronautical rescue coordination center to lead the search and rescue mission. In the absence of a protocol, the report noted that the rescue coordination centers wasted considerable time gathering information and determining whether to trigger a search. The report also noted that there was a lack of coordination within the French aeronautical rescue coordination center and with its foreign counterparts in organizing the search and rescue. After being informed by Air France about a series of failure messages issued by the aircraft to its maintenance center in France, authorities from the French aeronautical rescue coordination center considered themselves not competent to intervene in a zone outside their area of responsibility. The report noted that this belief could be explained by ineffective training for search and rescue agents, particularly in terms of coordination with foreign counterparts. The French aeronautical rescue coordination center also provided key information to organizations that were not, according to the final report, competent in search and rescue; for example, one of the organizations failed to forward the last known position of the aircraft from an ACARS message. MH370 also highlights the complexities of coordinating search and rescue activities in areas with multiple flight information regions when the final location of the plane is unknown. As noted earlier, MH370 departed from Kuala Lumpur at 16:42 UTC on March 8, 2014. At 17:19, Kuala Lumpur air traffic control instructed MH370 to contact Ho Chi Minh air traffic control, and the flight crew acknowledged this request over the radio. 
According to the interim report issued by the Malaysia Ministry of Transport’s Safety Investigation Team for MH370, another contact should have occurred at about 17:22 when MH370 passed the IGARI waypoint, but MH370 lost contact with ATC during the transition between Malaysian and Vietnamese airspace. The Ho Chi Minh City air traffic control center contacted the Kuala Lumpur air traffic control center at 17:39 to inquire about the whereabouts of MH370. Thereafter, according to the report, Kuala Lumpur initiated efforts involving the Malaysia Airlines operations center and Singapore, Hong Kong, and Phnom Penh air traffic control centers to establish MH370’s location, a process that lasted nearly 4 hours. The Kuala Lumpur aeronautical rescue coordination center transmitted the first distress message related to MH370 at 22:32—more than 5 hours after the last message expected from the crew—to begin search and rescue operations in the South China Sea based on the aircraft’s last known position. According to the report, Malaysian search and rescue aircraft took off heading to the search areas at 03:30. After an aviation accident, investigators have typically been able to recover the flight recorders in a matter of days or weeks. We reviewed data on the 16 commercial plane crashes over water that occurred globally since 2000. Additional information on each of the 16 accidents is found in appendix III. In two instances—AF447 and MH370—search and recovery efforts for the recorders exceeded 1 year. In these cases, recovery of the flight recorders was hampered because investigators did not know the precise location of the crash site. The search for AF447 involved a 17,000 square kilometer area, and the ongoing search for the wreckage of MH370 is focused on 60,000 square kilometers in the southern Indian Ocean. Additionally, the complexities of the underwater environment may hamper efforts to retrieve recorders.
While authorities located some debris from AF447 in a remote section of the Atlantic Ocean within a few days of the accident on June 1, 2009, they were unable to locate the recorders during that time. The first phase of the BEA’s search for the recorders focused on the underwater locator beacons. Batteries in the current beacons are designed to allow the signal to be transmitted for at least 30 days, and the signal’s range is typically limited to less than 3 nautical miles, depending on the water’s depth, underwater topography, and surrounding conditions. If the location of the crash cannot be determined within 30 days, the time available to search for the recorders while the beacons are still transmitting is limited. The search for AF447’s beacons using a towed pinger locator followed the airplane’s projected trajectory in the Atlantic but was unable to detect an acoustic signal within the minimum 30-day transmission period. From July 2009 to April 2011, the BEA attempted to locate the wreckage and recorders in several search phases using sonar detection, evaluation of aircraft debris drift, and satellite-tracked buoys, each time unsuccessfully. After identifying the wreckage site at a depth of more than 12,000 feet, investigators ultimately found the flight recorders in May 2011 amid aircraft debris scattered on the seafloor. The BEA subsequently determined that the cockpit voice recorder’s beacon was damaged on impact while the beacon on the flight data recorder separated and was never found. As a result of these challenges, the search for the plane’s flight recorders took 2 years and cost an estimated $40 million. In response to MH370, the international aviation community has undertaken a number of efforts to improve global aircraft tracking.
In the near term, a task force formed by the International Air Transport Association (IATA) developed a set of voluntary performance standards that call for position reporting every 15 minutes with the capability to increase the reporting rate in case of emergency. Several technologies that are already on board most domestic aircraft can be used to meet this standard, although airlines would face some costs to equip if they do not already have those systems or satellite communications equipment. Over the longer term, ICAO has proposed a comprehensive new aircraft tracking framework designed to ensure that accurate information about the aircraft’s location is known at all times. In addition to incorporating the industry recommendations on aircraft tracking, the new concept also proposes an autonomous distress tracking system, an alternative to underwater flight data retrieval, and new procedures to improve coordination and information sharing during emergencies. In the aftermath of the MH370 tragedy, the international aviation community has undertaken a number of efforts to improve global aircraft tracking capabilities. For example, just weeks after the disappearance of MH370, ICAO convened a special meeting to study issues related to aircraft tracking, and international stakeholders agreed to accelerate the timeframes for a new aircraft tracking approach, according to the U.S. ambassador to ICAO. In addition, IATA, which represents the international aviation industry, formed an Aircraft Tracking Task Force (Task Force) that focused on what airlines could do to track aircraft in the near term using existing technologies.
The Task Force developed a set of voluntary performance standards to establish a baseline aircraft tracking capability for all commercial passenger aircraft worldwide. The key aircraft tracking performance standards proposed by the Task Force include:

Position reporting every 15 minutes, with capability to increase the reporting rate in response to an emergency. This performance standard calls for regular and automatic transmission of aircraft position information. The 15-minute frequency reflects the optimal balance between the benefit of knowing flight location with greater precision and the costs of transmitting data, as well as the cost of search and rescue operations, according to the Oceanic Position Tracking Improvement and Monitoring Initiative. The Task Force also called for any aircraft tracking system to have the capability to report more frequently when certain circumstances are met, such as an unusual change in the trajectory, vertical speed, or altitude of the aircraft. The purpose of the increased position reporting rate is to narrow the search zone for an aircraft in distress.

Position reports should include latitude, longitude, altitude, and time information. This performance standard calls for position reporting in four dimensions. Latitude and longitude provide the aircraft’s location on a map, while altitude and time provide other data points to pinpoint the precise position of the aircraft at any stage of the flight.

Communications protocols between the airline and air traffic service provider to facilitate coordination in case of an emergency situation. The Task Force recognized that there is a need both to amend existing procedures and to develop new or improved communications protocols between airlines and air traffic service providers. The purpose of this performance standard is to establish communication procedures and protocols to better respond to instances of missing position reports or other unexplainable developments.
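A minimal sketch of what the four-dimensional reports and missed-report monitoring described by these standards could look like in practice. The class, field names, grace period, and example values are hypothetical illustrations, not drawn from the Task Force documents.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class PositionReport:
    """A four-dimensional position report: latitude, longitude, altitude, time."""
    latitude_deg: float
    longitude_deg: float
    altitude_ft: float
    time_utc: datetime

def report_overdue(last_report, now, interval_minutes=15, grace_minutes=3):
    """Flag a flight whose next periodic report has not arrived.

    The 15-minute interval comes from the Task Force standard; the grace
    period is an illustrative allowance for transmission delays.
    """
    deadline = last_report.time_utc + timedelta(minutes=interval_minutes + grace_minutes)
    return now > deadline

# Illustrative report: a flight last heard from at 17:07 UTC.
last = PositionReport(2.78, 101.71, 35000,
                      datetime(2014, 3, 8, 17, 7, tzinfo=timezone.utc))
print(report_overdue(last, datetime(2014, 3, 8, 17, 30, tzinfo=timezone.utc)))  # True
```

Under this sketch, a flight would be flagged once a report is more than 18 minutes old, prompting the airline and air traffic service provider coordination the third standard calls for.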
In order to achieve this baseline aircraft tracking standard, the Task Force recommended that aircraft operators evaluate their capabilities, implement measures to meet the performance standards within 12 months, and exchange best practices. According to the Task Force, these standards and recommendations are designed to improve the collective ability of the airline industry to identify and track aircraft globally. The Task Force also recognized that near-term procedures are just first steps in a longer-term, integrated concept of operations for aircraft tracking during all phases of flight. This concept of operations is discussed further below. Several technologies could be used to meet the recommended aircraft tracking performance standards, according to the Task Force and aviation stakeholders we interviewed. For instance, ACARS- and FANS-equipped aircraft can be configured to report aircraft position information, even though ACARS is not specifically designed for that function and FANS is designed to report to air traffic control, not to airlines. According to the Task Force, ACARS uses information derived from the aircraft’s flight management system to report the aircraft’s position, and ACARS is configurable for enhanced reporting triggered by unanticipated altitude changes or flight levels below a predetermined altitude. For aircraft that are equipped with FANS, the airline ground systems can be configured to access information about the position of the aircraft, using Automatic Dependent Surveillance-Contract (ADS-C)—an application that allows the airline or air traffic control to establish a contract with the FANS system onboard the aircraft to deliver four-dimensional position and other data at single, periodic, or event-based intervals. In addition, other benefits of FANS include reduced separation between aircraft, more direct routings leading to reduced fuel consumption, and improved communication clarity between the pilot and air traffic control.
Other commercially available systems, including FLYHT Aerospace Solution’s Automated Flight Information Reporting System, would also meet the proposed performance standards by providing operators with precise information about the aircraft’s position in real-time, according to the manufacturer. Passenger Wi-Fi systems, which utilize satellite connectivity, could also be used to facilitate aircraft tracking, according to representatives from one domestic airline. The level of equipage for these various technologies differs across the U.S. and global fleet. According to one of the major air transport communications service providers, almost all commercial passenger jet aircraft operators in the U.S. install and use ACARS, including nearly all regional airlines. Three major domestic passenger airlines that we spoke to also confirmed that their entire fleets are equipped with ACARS. Generally, airlines based outside the U.S. use ACARS, except some low-cost airlines that have avoided the cost of installing ACARS avionics and use only very high-frequency (VHF) voice radio or other solutions, according to one air transport communications service provider. According to some aviation stakeholders we spoke to—including FAA, two domestic airlines, and one of the major air transport communications service providers—fewer airlines have equipped with FANS because it is only beneficial to the airlines when flying in certain, higher density oceanic airspace. FAA officials we spoke to estimated that approximately 70 to 80 percent of the aircraft operating in the busy North Atlantic airspace are currently FANS-equipped because it is required to access the optimal routes. According to one air transport communications service provider, FANS equipage on short-haul aircraft is very low, but is expected to increase because aircraft will need FANS avionics to be able to communicate with certain components of FAA’s Next Generation Air Transportation System (NextGen) in the future.
Airlines that wish to take advantage of the optimal flight paths between North America and Europe will need to be FANS-equipped by 2015; therefore, FANS equipage is expected to increase in the future, according to FAA. Representatives from one large domestic airline said they are installing FANS on all of their aircraft used for international operations largely because of its operational and safety benefits. Another airline that we spoke to is also planning to equip its aircraft that operate from the U.S. West Coast to Hawaii because of the operational efficiencies expected by using FANS. Finally, two major airframe manufacturers told us that all new aircraft typically come equipped with the latest communications, navigation, and surveillance avionics, including ACARS and FANS, but the operator chooses to enable the system depending on where the aircraft is used. Despite their benefits, the technologies that could be used to achieve the Task Force’s baseline aircraft tracking standard in the near term do not address all the challenges associated with locating flights. For example, according to one major airframe manufacturer, neither ACARS nor FANS is tamper-proof, which means that a knowledgeable individual could disable both systems and the aircraft’s transponder. Should those systems and the transponder be turned off, the aircraft would be incapable of sending position data. Aviation stakeholders told us that there are legitimate engineering and operating reasons for the flight crew to have total control over all on-board systems. According to one major airframe manufacturer, aircraft are designed with the assumption that the pilot and flight crew are trusted and should have complete control over the aircraft. In certain situations, air traffic control may ask a pilot to turn the transponder off and back on to identify an aircraft. 
Other stakeholders we spoke to said that the pilot’s ability to turn off any system on board the aircraft is based first and foremost on safety considerations. Nevertheless, at least two major international airlines have called for a tamper-proof aircraft tracking solution. Should they choose to adopt the technologies described above, airlines that currently do not meet the Task Force aircraft tracking performance standards would face some costs. Estimates of those costs across the fleet are difficult to determine with any precision because data on the level of aircraft equipage were not consistently available and the contracts between the airlines, avionics manufacturers, and air transport communications service providers to provide such services are proprietary. Costs to equip with ACARS using VHF radio could be up to $100,000 per aircraft. Additionally, ACARS using satellite communications would cost another $60,000 to $150,000 per aircraft for Iridium or Inmarsat equipment, according to one air transport communications service provider. However, according to the airframe manufacturers we spoke with, most long-haul aircraft that fly in oceanic and remote regions are already equipped with those units. For aircraft without FANS, there would be an additional cost of up to $250,000 for a new FANS-capable flight management system, according to one air transport communications service provider. Costs for FLYHT Aerospace Solution’s Automated Flight Information Reporting System, a commercial system that could, among other things, provide aircraft position data, were approximately $70,000 per aircraft, including installation labor but not the cost of data transmission, according to company representatives. In order to more frequently report position information using ACARS or FANS, airlines may have to pay for increased data transmission, but we were unable to determine the extent of these costs to industry.
According to one air transport communications service provider, ACARS data transmission costs per month can range from $500 per short-haul aircraft using VHF radio systems, to approximately $1,000 per long-haul aircraft using satellite communications over oceans. According to one domestic airline, airlines pay for ACARS messages through plans similar to cellular text messaging, and therefore, it is not clear whether more frequent position reports would be covered under existing plans, or would require new plans at a higher cost. To help mitigate these costs and enhance tracking capabilities in the near term, aviation stakeholders have offered a number of proposals to enhance flight tracking. One proposal offered by the satellite communications provider Inmarsat would provide four free position reports per hour using FANS ADS-C capability. To take advantage of this proposal, aircraft would need to be equipped with FANS and Inmarsat satellite communications. A separate proposal from SITA, a major provider of ACARS data, aims to provide ADS-C reports to airlines. According to SITA, the company’s proposal may help improve coordination between the airline and the air navigation service provider, especially if there is an unexpected event onboard the aircraft. Rockwell Collins, the other major provider of ACARS data, unveiled a flight tracking service in March 2015 that utilizes several data sources, including ADS-B, ADS-C, and ACARS. Over the longer term, other aircraft surveillance systems may become available that build on FAA’s transition to NextGen. Aireon, a joint venture between four air navigation service providers—Nav Canada, ENAV (Italy), the Irish Aviation Authority, and Naviair (Denmark)—as well as the satellite service provider Iridium, aims to use ADS-B technology on satellites to provide a global aircraft surveillance system.
According to Aireon representatives, its space-based ADS-B system is scheduled to be fully deployed in 2017, although the system would not be operational until after a test and validation phase is completed, which is currently planned for early 2018. The costs of the real-time surveillance provided by this system are being discussed with individual air navigation service providers at this time. Aviation stakeholders we spoke to recognize the potential of this space-based surveillance system in terms of enhancing aircraft tracking in oceanic and remote regions. Aireon has also announced a free service to be offered using the space-based ADS-B system—Aircraft Locating and Emergency Response Tracking (ALERT)—that would provide the last known or current location of any aircraft equipped with ADS-B technology to search and rescue teams in emergency situations. In parallel with the Aircraft Tracking Task Force, an ICAO-led Ad-Hoc Working Group on Flight Tracking developed a long-term framework—called the Global Aeronautical Distress and Safety System (GADSS)—to ensure that accurate information about the aircraft's location is known during the sequence of events before and after an accident. Both industry and ICAO worked to harmonize their proposals, according to stakeholders involved in the process, and at ICAO's High-Level Safety Conference in February 2015, delegates from over 120 nations endorsed the GADSS framework for aircraft tracking. This framework is designed to maintain an up-to-date record of aircraft progress and, in the case of a forced landing, the location of survivors, the aircraft, and the flight recorders. The GADSS incorporates the Task Force recommendations on tracking aircraft, but goes further, as described below. 
The GADSS consists of four key system components:

Aircraft tracking system: This tracking system incorporates the Task Force's near-term recommendations to enhance aircraft tracking, as described above, and specifies that when an abnormal event is detected, the position reporting rate of the aircraft tracking system increases to around a 1-minute interval, an increase that translates to knowing the aircraft's position within 6 nautical miles; such reporting can be achieved with the systems discussed previously.

Autonomous distress tracking system: The GADSS framework goes further than the Task Force standards by calling for an autonomous distress tracking system. According to the Ad-Hoc Working Group, an autonomous distress tracking system operates independently from the regular aircraft tracking system and may be automatically or manually activated at any time. This system could be automatically triggered by unusual attitude, speed or acceleration, failure of the regular aircraft tracking system or surveillance avionics, or a complete loss of engine power. In addition, the system would operate independently of aircraft power or other systems, and be tamper-proof.

Automatic deployable flight recorder: The GADSS proposal currently calls for an automatically deployed flight recorder. This device is designed to automatically separate from the aircraft in the event of an accident. At the February 2015 High-Level Safety Conference, ICAO proposed the use of deployable recorders or an alternative for data retrieval. Additional information about deployable flight recorders is provided later in this report.

Procedures and information management: The final component of the GADSS aircraft tracking framework recognizes that the effectiveness of any search and rescue service is only as good as the weakest link in the chain of people, procedures, systems, and information. 
Therefore, in addition to the technology, the GADSS identifies key areas of improvement, such as existing procedures, improved coordination and information sharing, and enhanced training of personnel in reacting to emergency circumstances. An assessment of the shortcomings in coordination and information sharing between air navigation service providers and search and rescue authorities is needed. Moreover, the GADSS also calls for the development of guidance material and training on emergency situations for air navigation service providers. (According to ICAO, the performance specifications for the in-flight triggering criteria and broadcasting rate to be used are still under development. System Wide Information Management consists of standards, infrastructure, and governance enabling air traffic management information and its exchange between qualified parties via interoperable services, according to ICAO.) These efforts outlined in the GADSS are essential; however, the steps to strengthen the people, procedures, systems, and information sharing must be carried out by individual countries.

In response to recent aviation accidents, governments, international organizations, and industry have been developing proposals to enhance flight recorder recovery in oceanic regions. In the near term, manufacturers are scheduled to begin adding extended batteries on the underwater locator beacons (ULBs) attached to flight recorders and have the option to add a second low-frequency device. Longer-term proposals to equip commercial aircraft with automatic deployable flight recorders and the capability to stream up to all FDR data are in various stages of development. Some have also called for enhanced cockpit recorders to aid accident investigation by recording additional audio and adding images. 
While each technology or proposal is intended to improve data recovery and accident investigations, industry has raised various concerns for the commercial fleet regarding the need for such changes and the costs associated with them. To help address challenges in locating flight recorders and aircraft wreckage in oceanic areas, FAA and other international aviation authorities have taken actions recommended by the French BEA investigating the AF447 accident to enhance ULBs. Additionally, the NTSB has also recently issued a related recommendation. First, to help address the challenges posed by the limited time that search and rescue authorities have to detect the ULB signal, the BEA recommended extending the ULB battery life from 30 to 90 days. In February 2012, the FAA issued a Technical Standard Order (TSO) for a 90-day battery ULB effective March 1, 2015, at which point the previous TSO for a 30-day battery would be revoked. The FAA issued a TSO authorization to one manufacturer in December 2014 and to another in February 2015. However, on March 10, 2015, the FAA delayed the effective date of the TSO until December 1, 2015, to provide a major aircraft manufacturer additional time for testing and analysis of the device’s installation. Underwater locator beacons manufactured on or after the December 1, 2015, effective date must meet the new requirements. The FAA is not requiring U.S. airlines to retrofit aircraft with a 90-day battery immediately. Instead, airlines are expected to replace 30-day ULB batteries through attrition on their normal replacement schedule of approximately every 6 years so as not to introduce additional costs associated with taking aircraft out of service outside of regular schedules. We estimate that all U.S. domestic aircraft should have a 90-day battery ULB installed by the end of 2021. Related international standards will become effective by 2018. 
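The fleet-wide retrofit date estimated above follows from simple attrition arithmetic. A sketch: the December 1, 2015, effective date and the roughly 6-year replacement cycle are from the text; treating the last possible 30-day installation as occurring just before the effective date is an assumption.

```python
from datetime import date

# ULBs manufactured on or after the TSO effective date must have 90-day batteries.
tso_effective = date(2015, 12, 1)
replacement_cycle_years = 6  # approximate normal ULB battery replacement schedule

# A 30-day unit installed just before the effective date would be replaced,
# through normal attrition, about one replacement cycle later.
last_retrofit_year = tso_effective.year + replacement_cycle_years
print(last_retrofit_year)  # 2021, consistent with the estimate that all U.S.
                           # domestic aircraft should have 90-day ULBs by then
```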
Second, to help search and rescue authorities locate the aircraft wreckage underwater, the BEA recommended adding a mandatory second underwater locating device with a greater transmission range. The FAA issued a TSO, effective June 26, 2012, that allows manufacturers to add a second optional device directly attached to the airframe. This second device would emit a low-frequency acoustic signal with a range of approximately 5 miles, which is about four times the range of the ULBs attached to the flight recorders. This additional device is optional for airlines and manufacturers. In January 2015, NTSB issued a recommendation for a low-frequency device attached to the airframe that will function for at least 90 days and that can be detected by equipment available on military search and rescue, and salvage assets commonly used to search for and recover wreckage. Stakeholders we spoke with, including airframe manufacturers, a trade association, and an avionics manufacturer, generally agreed that extending the ULB battery life made sense and could improve flight data recovery for oceanic accidents at a low cost to the airlines; however, there was no consensus on the need for a second underwater locating device. One airframe manufacturer told us that it was taking steps to prepare for attaching a low-frequency device to the airframe, whereas another told us that this might not be necessary if an aircraft tracking solution could meet the same performance of providing an aircraft’s last known location within 6 nautical miles. However, ocean currents can move aircraft wreckage from its initial point of impact, and therefore, it is unclear whether a tracking solution would provide the same function as a second device. 
While both the 90-day battery ULB and the low-frequency underwater locating device are intended to help locate the flight recorders and aircraft wreckage in oceanic regions and potentially mitigate some costs with an underwater sonar search, they do not address all the potential challenges of flight recorder retrieval in remote oceanic regions. For example, investigators must still know the general location of impact. Otherwise, the search area would be too large, hampering the location and retrieval of the flight data for the accident investigation despite these longer-life and longer-range devices. The flight tracking proposals discussed previously address this concern to a certain extent. In addition, underwater conditions—including depth, topography, and surrounding conditions—can still affect ULB performance. Range can also be reduced if the device is covered or blocked by aircraft wreckage. Finally, locating the signal is only one step. Even if investigators detect the signal, retrieving the recorders may be difficult if located deep underwater or in difficult terrain. In addition to enhancing the ULBs to help locate the recorders underwater, governments, international organizations, and industry have been looking at additional changes to improve flight data recovery in oceanic regions, one prescribing a specific technology and another using a performance-based approach. ICAO included automatic deployable flight recorders in its long-term GADSS framework. NTSB issued a safety recommendation calling for a means of recovering flight data without underwater retrieval, which would allow for either a deployable recorder or triggered transmission of mandatory flight data during emergencies. While these technologies are designed to improve flight data recovery, some aviation stakeholders had concerns with installing either of these technologies on the commercial fleet. 
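The point above that the search area grows quickly when the general location of impact is unknown can be quantified from the reporting interval. A sketch, assuming a cruise ground speed; the 480-knot figure is an illustrative assumption, not a number from this report.

```python
import math

def search_radius_nm(ground_speed_knots, report_interval_min):
    """Worst-case distance flown since the last position report."""
    return ground_speed_knots * report_interval_min / 60.0

def search_area_sq_nm(radius_nm):
    """Circular search area centered on the last known position."""
    return math.pi * radius_nm ** 2

speed = 480  # assumed cruise ground speed, in knots
for interval in (15, 1):  # 15-minute baseline vs. GADSS 1-minute distress rate
    r = search_radius_nm(speed, interval)
    print(f"{interval:>2}-min reports: radius {r:.0f} nm, "
          f"area {search_area_sq_nm(r):,.0f} sq nm")
# The area shrinks with the square of the interval: 15-minute reports leave a
# search area 225 times larger than 1-minute reports at the same speed.
```

At a slower assumed ground speed of 360 knots, the 1-minute interval gives the 6-nautical-mile position uncertainty cited in the GADSS aircraft tracking component.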
As discussed above, ICAO’s longer-term GADSS framework calls for automatic deployable flight recorders in order to provide faster and easier recovery of flight recorder data, especially in oceanic regions. Deployable recorders, which have been used for decades on U.S. military aircraft, including military versions of commercial aircraft, and helicopters, combine an FDR/CVR with an emergency locator transmitter in one crash-survivable unit. The deployable recorder is designed to separate automatically from an external section of the tail or leading edge of the aircraft when sensors detect an imminent crash. After separation, the deployable recorder is designed to avoid the crash impact zone and, when over oceanic regions, to float indefinitely. These recorders are designed to deploy and emit alert messages even if the aircraft loses power. The embedded emergency locator transmitter would send Cospas-Sarsat satellites an alert every 50 seconds, including the aircraft tail number, country of origin, location of aircraft at separation, and the recorder’s current location. Several aviation stakeholders—including airframe manufacturers, trade associations, and U.S. domestic airlines—are divided in their support for implementing deployable recorders on the commercial fleet. Boeing and Airbus, for example, have publicly taken different positions. At the NTSB’s forum on flight recorder technology in October 2014, Airbus representatives announced the company’s tentative plans to install deployable recorders on its future A350 and A380 long-haul fleets as a second complementary system to a fixed combination FDR/CVR, though specific timing is unknown. However, at the same forum, Boeing representatives stated that the company had no plans to add deployable recorders to its fleet and that the risk for unintended consequences needed to be studied further. 
Deployable recorders would offer several potential benefits, according to stakeholders, including the following:

Faster and easier flight data location and retrieval: Since a properly deployed and undamaged deployable recorder floats indefinitely on the ocean surface and transmits an alert message directly to Cospas-Sarsat for up to 150 hours, it may be easier for searchers to locate and retrieve than a fixed recorder located on the seabed. In the case of AF447, if equipped with a deployable recorder that operated correctly, the device could have alerted searchers to its location, and once found on the ocean's surface, investigators would have recovered the flight data more quickly than the 2 years it took to locate and retrieve the fixed recorders from the ocean floor. Similarly, if MH370 had a deployable recorder which operated correctly in the event of a crash, investigators would generally know the location at impact and could have recovered the flight data from the ocean's surface.

Updated location information: The alert messages transmitted from the recorder after deployment could help investigators pinpoint the crash site. Additionally, satellites can track the device's drift pattern from these messages for up to 150 hours, giving investigators information on ocean currents that may move survivors and debris away from the initial crash site.

No recurring service fees: Data transmission of the emergency locator transmitter signals from a deployable recorder would be free because Cospas-Sarsat has no service fees.

However, some stakeholders highlighted a number of concerns with introducing deployable recorders into the commercial fleet:

Safety: A range of stakeholders that we spoke to, including an airframe manufacturer, avionics manufacturers, and faculty from an academic institution, identified potential safety risks to the aircraft, passengers, maintenance technicians, and others on the ground from inadvertent deployment. 
According to one manufacturer, even if the industry had met its standard failure rate of less than one incident per 10 million flight hours for civil airborne systems and equipment, there would have been an estimated five incidents involving deployable recorders with the 54.9 million total commercial fleet hours in 2013. These types of incidents could potentially cause damage to both people and property. An airframe manufacturer and an avionics manufacturer told us that the system would need to be designed to ensure the safety of those on the ground and in the air, especially given the infrequency of aviation accidents in which a deployable recorder would be useful.

Infrequency of accidents and success of fixed recorder recovery: One avionics manufacturer and two U.S. domestic airlines that we spoke with questioned the need for deployable recorders given the safety concerns discussed above and the infrequency of aviation accidents. Additionally, when accidents do occur, investigators typically locate and recover the flight data even for accidents occurring over water. Investigators located and recovered the flight recorders in 15 of the 16 commercial accidents that occurred over water since 2000, approximately 94 percent of cases.

Mixed recovery record in military aircraft: According to one stakeholder, flight data recovery rates are actually better on fixed FDRs compared to deployable recorders based on experience with one military aircraft model. For instance, according to industry data for a certain military aircraft that we reviewed from 2004 through 2014, there was 100 percent flight data recovery for fixed recorders compared to 75 percent for deployable recorders. The causes of those failures included instances in which the recorder did not deploy, was not located, or did not have data on the memory card. 
Does not mitigate the need to recover the aircraft wreckage and fuselage, or human remains: While the flight recorders are an important part of the accident investigation, investigators still want to recover the aircraft wreckage and fuselage to help determine the cause of the accident. Therefore, there would still be costs for an underwater search and recovery even if investigators had the deployable recorder.

Hardware costs: Deployable recorders would require adding more equipment on the commercial fleet and, according to the FAA and two avionics manufacturers, would result in additional costs to airlines. The estimated cost is $50,000 to $60,000 per unit, according to an avionics manufacturer. We found that, if this cost per unit were incurred for the current total U.S. long-haul transoceanic fleet, the total cost of equipage could be as much as $29 million, plus any additional costs associated with certifying each aircraft type model. However, for several reasons it is difficult to extrapolate the per-unit cost estimate to obtain a cost estimate for the total fleet. First, costs could vary based on the type of aircraft, the regulatory environment, and other engineering factors, so these factors may affect costs across the varied contexts of wide deployment. Additionally, if a technology becomes mandated or, even if voluntary, becomes widely adopted, unit costs might decline due to the efficiencies of mass production and also possibly due to a greater number of providers entering the market.

Given the safety concerns, costs of equipage, and the uncertain benefits associated with deployable recorders, certain stakeholders, including one airframe manufacturer, one trade association, and one avionics manufacturer, suggested additional study is needed on the use of deployable recorders in commercial aircraft. 
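The stakeholder figures cited in the concerns above can be checked with back-of-the-envelope arithmetic. A sketch: the inputs are numbers from the text, but the implied fleet size is an inference, not a figure stated in this report.

```python
# 1. Expected inadvertent-deployment incidents if deployable recorders met
#    the civil-avionics standard failure rate of one incident per 10 million
#    flight hours, applied to 54.9 million commercial fleet hours in 2013.
expected_incidents = 54.9e6 / 10e6
print(round(expected_incidents, 1))  # 5.5, consistent with the "estimated
                                     # five incidents" cited by one manufacturer

# 2. Fixed-recorder recovery rate for commercial over-water accidents since 2000.
recovered, total = 15, 16
print(f"{recovered / total:.0%}")  # 94%, as stated above

# 3. Implied U.S. long-haul transoceanic fleet size, inverting the $29 million
#    fleet total against the $50,000 to $60,000 unit cost.
fleet_total = 29_000_000
implied_fleet = (fleet_total // 60_000, fleet_total // 50_000)
print(implied_fleet)  # (483, 580) aircraft
```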
Additionally, several stakeholders raised concerns about prescribing a specific technology—in this case automatic deployable flight recorders—as opposed to a performance-based approach when implementing or enhancing flight data recovery capabilities, which is the preferred approach of both FAA and the airline industry. Several stakeholders told us that industry needs the flexibility of performance standards over prescriptive solutions due to the diversity of the fleet and to allow for technological advances. FAA officials described their position on deployable recorders as fluid and told us that they did not have plans to mandate a deployable recorder, but also would not prevent an operator from installing one provided the operator adequately demonstrated an acceptable level of safety and performance. In the GADSS framework, ICAO identifies the need to develop performance-based standards for deployable recorders. At the February 2015 Second High-Level Safety Conference, ICAO presented a prescriptive baseline recommendation for deployable recorders and a performance-based alternative, though it is unclear what that alternative would be at this time. At the meeting, member states agreed with a performance-based approach to data retrieval. In January 2015, NTSB issued a safety recommendation that all new commercial aircraft used in extended overwater operations be equipped with a means of recovering mandatory flight data that does not require underwater retrieval, which builds on earlier efforts from the BEA in response to AF447. While automatic deployable flight recorders as discussed above could meet the recommendation, this performance-based approach also allows for a solution that involves triggering a transmission of mandatory flight data from the aircraft to the ground during emergencies. NTSB's approach, if adopted by FAA, allows airlines to choose a solution that fits best with their operations. 
The concept of triggering the transmission of flight data to the ground consists of using flight parameters to detect whether an emergency situation is forthcoming and, if an emergency arises, transmitting data automatically from the aircraft until either the emergency situation ends or the aircraft crashes. Industry stakeholders cited various potential benefits of triggered transmissions of flight data.

Provides data when physical FDR cannot be recovered: Streaming FDR data allows for post-flight analysis in instances where the physical FDR or its data cannot be easily recovered, including cases where a deployable FDR may not be recovered.

Faster flight data retrieval: Triggered transmission provides required flight data without the need to recover the physical FDR, and therefore, investigators would have data available to them more quickly than if they had to initiate a potentially lengthy search for the physical FDR or deployed FDR. Quicker access to flight recorder data after an incident, especially in oceanic regions, could potentially allow investigators to determine the accident's cause more quickly. In the case of AF447, if the FDR data were streamed from the aircraft before impact, investigators could have had access to the FDR data more quickly, which would have potentially avoided some of the lengthy search and recovery effort.

Provides location information for aircraft in the event of an accident: In addition to flight data, the transmission of position information could be triggered at a rate that should allow investigators to identify a narrower search area than would otherwise be known through position reports sent at 15-minute intervals. This could minimize the need for emergency locator transmitters, which, as discussed previously, several stakeholders told us have been unreliable in providing crash location information.

However, stakeholders we interviewed raised several concerns about implementing data streaming into the commercial fleet. 
Bandwidth limitations: Stakeholders—including the FAA, an airframe manufacturer, faculty from an academic institution, a U.S. domestic airline, and two trade associations—told us that while it would likely not be possible to stream FDR data continuously from every aircraft in flight with current satellite capabilities, it could be possible to stream FDR data in a limited number of emergencies. However, a representative from an avionics manufacturer that sells a flight data streaming product told us that streaming voice data was not technically possible at the current time due to limited satellite bandwidth. Therefore, investigators would still need to retrieve the physical CVR to obtain audio information; such retrieval would limit some of the potential benefits of triggered transmission.

Technical challenges: Two avionics manufacturers and an air transport data communications provider also stated that it could be technologically difficult for an aircraft in distress with its satellite antenna not pointing in a fixed position prior to impact to transmit data to satellites. However, there is at least one commercial product—FLYHT's Automated Flight Information Reporting System—capable of streaming up to all FDR data and position information in near real-time via satellites when triggered by an onboard emergency, and according to this manufacturer, a test flight showed FDR data transmission even when the plane was in an unusual position during flight.

Data privacy and security: Flight recorder data are typically only used for accident investigation purposes. NTSB told us that it was unclear who would control and have access to streamed data and emphasized the importance of validating the data before it becomes public. As such, a few stakeholders raised privacy and security concerns with streaming flight data. 
The NTSB’s primary concern is having access to flight data to conduct an accident investigation, and NTSB officials stated that in the event of an accident in the United States, they should be the first party to download flight recorder information to ensure data integrity.

Does not mitigate the need to recover the aircraft wreckage and fuselage, or human remains: Identical to the concern raised above with deployable recorders, investigators still want to recover the aircraft wreckage and fuselage in order to determine the cause of the accident. Therefore, there would still be some cost for an underwater search and recovery even if investigators had the streamed FDR data.

Equipage and data costs: Stakeholders, including FAA, airframe manufacturers, a trade association, and a U.S. domestic airline, stated that streaming flight recorder data could result in data transmission costs to airlines, especially using satellite communications over oceanic regions, and might require more equipment on aircraft that comes at a cost. The cost of FLYHT's Automated Flight Information Reporting System described above is estimated at $70,000 per unit for parts and installation, according to the manufacturer. Using that as an average cost, we found that the total cost for the current U.S. long-haul transoceanic fleet could approach $35 million. As noted above, however, extrapolating the current per-unit cost estimate to estimate the cost for the entire fleet has certain limitations. Furthermore, it would cost between $5 and $10 per minute for data streaming during emergencies, according to the manufacturer. The FAA and other stakeholders told us that most aircraft could gather similar data using existing systems, so adding additional equipment to gather and transmit such information may not have an operational benefit for airlines. Representatives from one U.S. 
domestic airline that we spoke with told us that they did not see the need to equip their fleet with a new device since their system for monitoring aircraft performance could provide engine information through ACARS. Similarly, one airframe manufacturer representative told us that ACARS has helped provide information for accident investigations before FDR recovery. As noted above, ACARS, which is equipped on all new aircraft, can be programmed to transmit some aircraft operations data, including position information, when certain triggers are met. ICAO announced support for extending the duration of the audio captured by the CVR, and NTSB has reiterated its recommendation for installing a cockpit image recorder. According to the NTSB, enhanced cockpit recorders would provide investigators with more information during accident investigations. Despite the potential value of this information to accident investigation, concerns over privacy remain unresolved. The ICAO Second High-Level Safety Conference recognized the need to increase CVR recording time to ensure that accident investigators had all relevant flight data. The concern is that CVRs should record and retain audio data for more than 2 hours: MH370's CVR, which was designed to record on a continuous 2-hour loop, may have recorded over critical events in the plane's presumed 7-hour flight that could have helped accident investigators. According to a representative from one avionics manufacturer, 2 hours is insufficient, and voice recordings should cover the full duration of the flight, like the FDR. Several manufacturers that we spoke with stated that they could make CVRs with additional recording time. 
One trade association representative told us that the CVR is meant to supplement the information recorded by the FDR and cannot definitively tell investigators what happened by itself and, therefore, cautioned that enhancing subjective audio data may not be necessary if it provides only a marginal improvement. (See NTSB, Safety Recommendation, A-00-30 and -31 (Washington, D.C.: Apr. 11, 2000); NTSB, Safety Recommendations, A-15-1 through -8 (Washington, D.C.: Jan. 22, 2015).) Stakeholders also raised concerns that video, similar to audio data, is a subjective and less precise means of information gathering than FDR data. Therefore, they cautioned that video from an image recorder could lead to misinterpretation of the situation. They also cited privacy concerns with video information if improperly disclosed. The NTSB acknowledged the privacy issues with recording images of pilots in its initial recommendation, but also stated that given the history of complex accident investigations and the lack of crucial cockpit environment information, the safety of the flying public must take precedence.

In response to the AF447 and MH370 disasters, the international aviation community is considering short- and long-term steps to improve aircraft tracking and flight data recovery with the goal of enhancing accident investigation and aviation safety. Numerous technologies—including communications systems onboard most commercial aircraft today, flight recorders that deploy from aircraft, devices capable of streaming flight recorder data in real time, and global satellite surveillance systems under development—have the potential to enhance aircraft tracking and expedite flight data recovery in the event of an oceanic accident, and industry continues to develop solutions for these tasks. However, some industry stakeholders we spoke with cited concerns about these technologies, such as the cost to equip the fleet and safety implications. 
Additionally, as AF447 and MH370 make clear, global air traffic control’s preparedness and capability to effectively monitor oceanic flights and provide timely alerts in exceptional situations are important elements in the debate. At the international level, ICAO is finalizing its Global Aeronautical Distress and Safety System based on input from the 2015 High-Level Safety Conference, with formal adoption targeted for 2016. These developments may represent the foundation for a comprehensive, global approach to ensure that the location of an aircraft during all phases of its flight is known to authorities. However, stakeholders expressed concerns that international standards could prescribe adoption of certain solutions, such as deployable recorders. They preferred a performance-based approach that encourages voluntary adoption because of the flexibility such an approach affords industry in an era of rapidly evolving technology. Additionally, the safety record in the National Airspace System may make it difficult to demonstrate that the benefits of new equipage on U.S. airlines outweigh the costs as part of a regulatory analysis. Given the scope of the international processes that are underway, we are not making any recommendations to FAA or NTSB related to aircraft tracking or flight data recovery at this time. Ultimately, we believe it is important for FAA to remain active through the ICAO process to ensure that any new international standard for aircraft tracking and flight data recovery is consistent with a performance-based approach and is implemented in a globally harmonized manner. We provided a draft of this report to DOT and NTSB for review and comment. Both DOT and NTSB provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Secretary of Transportation, the Chairman of the National Transportation Safety Board, and the appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or members of your staff have questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV.

Appendix I: Organizational Affiliations of Aviation Industry Stakeholders GAO Interviewed

Federal agencies: Federal Aviation Administration
International organizations: Flight Safety Foundation; International Civil Aviation Organization
Academic institution: Embry-Riddle Aeronautical University
FLYHT Aerospace Solutions Ltd.

This appendix contains information describing how federal agencies and the aviation industry use operational flight data. As we reported in 2010, federal agencies and aviation industry stakeholders gather and analyze aviation data for a variety of purposes. Federal agencies, including the Federal Aviation Administration (FAA) and the National Transportation Safety Board (NTSB), gather and analyze aviation data primarily to improve safety. In addition, as we reported in 2010, the aviation industry gathers quantitative and narrative data on the performance of flights and analyzes these data to increase safety, efficiency, and profitability. Aviation industry stakeholders are required to report some data to FAA—such as data on accidents, engine failures, and near midair collisions—and they have agreements with FAA and other agencies to share other data voluntarily. For decades, the aviation industry and federal regulators, including FAA, have used data reactively to identify the causes of aviation accidents and incidents. 
In recent years, FAA has shifted to a more proactive approach to using data to manage aviation safety risk. The FAA continues to use data to analyze past accidents and incidents, and is also using data proactively to search for risks and address potential concerns in the National Airspace System (NAS) before accidents occur or to improve NAS efficiency. According to FAA officials, there is more safety data than ever before, and these data provide the agency with the opportunity to be more proactive about safety. FAA also recognizes that today’s aviation safety information-sharing environment is not adequate to meet the next generation needs of the NAS. According to FAA, capabilities need to be developed that can continuously extract operationally significant, safety-related information from large and diverse data sources; identify anomalous events or trends; and fuse relevant information from all available sources. The FAA and the aviation industry have sought additional means for addressing safety problems and identifying potential safety hazards. The FAA has developed a number of programs to encourage the voluntary sharing and industry-wide analysis of operational flight data. In addition to enabling a more proactive approach to addressing safety concerns in the NAS, according to FAA, these programs could potentially enable it to predict the situations under which accidents could occur and take actions to help prevent them before they occur. Flight Operational Quality Assurance (FOQA): FOQA is a voluntary safety program designed to improve aviation safety through the proactive use of recorded flight data. Operators use these data to identify and correct deficiencies in all areas of flight operations, according to FAA. FAA officials told us that if properly used, FOQA data can reduce or eliminate safety risks, as well as minimize deviations from regulations.
Through access to de-identified aggregate FOQA data, FAA can identify and analyze national trends and target resources to reduce operational risks in the NAS, air traffic control, flight operations, and airport operations, according to FAA. The value of FOQA programs, according to FAA, is the early identification of adverse trends, which, if uncorrected, could lead to accidents. FOQA is a program for the routine collection and analysis of flight data generated during aircraft operations. FOQA programs provide more information about, and greater insight into, the total flight operations environment. FOQA data are unique because they can provide objective information that is not available through other methods, according to FAA. FAA officials told us that a FOQA program can identify operational situations in which there is increased risk, allowing the airline to take early corrective action before that risk results in an incident or accident. For example, according to representatives from one airline, FOQA analysis and findings are incorporated into flight crew training as well as airline policies and procedures. The FOQA program is another tool in the airlines’ overall operational risk assessment and prevention programs. As such, according to the FAA, it must be coordinated with the airlines’ other safety programs, such as the Aviation Safety Action Program and pilot reporting systems, among others. Aviation Safety Information Analysis and Sharing (ASIAS): FAA and the aviation community have initiated a safety analysis and data sharing collaboration to proactively analyze broad and extensive data to advance aviation safety, according to FAA. The initiative, known as ASIAS, leverages internal FAA data, airline proprietary safety data, publicly available data, manufacturers’ data, and other data. 
FAA officials told us that the airline safety data are safeguarded by the MITRE Corporation, a federally funded research and development center, in a de-identified manner to foster broad participation and engagement. According to FAA, ASIAS fuses various aviation data sources in order to proactively identify safety trends and assess the impact of changes in the aviation operating environment. ASIAS resources include both public and non-public aviation data. Public data sources include, but are not limited to, air traffic management data related to procedures, traffic, and weather. Non-public sources include de-identified data from air traffic controllers and aircraft operators, including recorded flight data and safety reports submitted by flight crews and maintenance personnel. According to FAA, governance agreements with participating airlines and owners of specific databases provide ASIAS analysts with access to safety data. Governed by a broad set of agreements, ASIAS has the ability to query millions of flight records and de-identified reports via a secure communications network, according to FAA. Under the direction of the ASIAS Executive Board, which includes representatives from government and industry, ASIAS conducts studies, safety assessments, risk monitoring, and vulnerability discovery. In the interest of enhancing aviation safety, the results of these analyses are shared with the ASIAS participants, according to FAA. ASIAS has also established key safety benchmarks so that individual airlines may assess their own safety performance against the industry as a whole. According to aviation industry stakeholders we spoke with, the key benefit of participating in this program for the airlines is the opportunity to benchmark their individual performance against the aggregate performance of the industry. Furthermore, according to FAA, ASIAS serves as a central conduit for the exchange of data and analytical capabilities among program participants.
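The benchmarking idea described above can be illustrated with a short sketch: each carrier's event rate is compared against the de-identified aggregate rate of its peers. All carrier data, event counts, and the rate normalization below are invented for illustration and do not reflect actual ASIAS data or methodology.

```python
# Hypothetical ASIAS-style benchmarking sketch. Numbers are illustrative only.

def event_rate(events: int, flights: int) -> float:
    """Safety events per 1,000 flights."""
    return 1000.0 * events / flights

def benchmark(airline_events, airline_flights, fleet):
    """Compare one carrier's rate against the de-identified aggregate.

    fleet is a list of (events, flights) tuples for the peer carriers.
    Returns (airline_rate, aggregate_rate, above_aggregate).
    """
    total_events = sum(e for e, _ in fleet)
    total_flights = sum(f for _, f in fleet)
    own = event_rate(airline_events, airline_flights)
    agg = event_rate(total_events, total_flights)
    return own, agg, own > agg

# Illustrative numbers only: one carrier vs. a three-carrier aggregate.
own, agg, flag = benchmark(12, 8000, [(12, 8000), (30, 24000), (9, 10000)])
print(round(own, 2), round(agg, 2), flag)  # 1.5 vs ~1.21 -> True
```

The point of the sketch is only the comparison structure: participants see where their own rate sits relative to the industry aggregate without seeing any individual peer's data.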
The ASIAS vision is a network of at least 50 domestic and international airlines over the next few years, making it the only such center of its kind in the world. System Safety Management Transformation (SSMT): This FAA effort, which uses ASIAS and other data, seeks to move FAA from a post-hoc, reactive assessment of aviation safety to a more predictive, risk-assessment process. According to FAA, the SSMT project is developing data analysis and modeling capabilities that will enable FAA analysts to both determine how NextGen-related operational improvements will affect safety and evaluate potential risk mitigation strategies. One of the safety analysis methodologies the SSMT team is developing is called the Integrated Safety Assessment Model. The goal of this model is to (1) provide a risk baseline against which to measure future system changes and (2) forecast the risk and safety impacts of implementing changes to the NAS, including FAA’s NextGen initiative. According to FAA, the model has been published, but continues to evolve, and efforts are currently under way to refine and update its various components. The model is available as a web-based platform accessible through a user login account. According to FAA, the goal of this work is to describe in a standard format the causes and consequences of aviation accidents since 1996 as well as to describe the precursor events that contributed to these accidents. Using event sequence diagrams, FAA can describe the sequence of events that led to an accident. According to FAA, current results indicate that the model is a promising means of analyzing and assessing baseline risks as well as emerging risks. Airlines and airframe manufacturers also use flight data, but primarily to improve the efficiency of operations and increase profitability. As described above, domestic airlines use data collected through FOQA programs to enhance operations and proactively address maintenance issues.
Airlines also use data provided through the Aircraft Communications Addressing and Reporting System (ACARS), a communications system that transmits short text messages via radio or satellite, to monitor aircraft and engine performance. According to representatives from one airline we spoke with, these data can show when certain systems and equipment need repair, and help the airline to schedule repairs in order to keep the plane in service. Another airline attributed its very low in-flight engine failure rate, in part, to proactive analysis of airplane health data. Aviation stakeholders said that this type of analysis can help identify the precursors to engine failure and help the airline address problems before the entire aircraft has to be taken out of service. Finally, airframe manufacturers, such as Airbus and Boeing, have also developed airplane health management programs, which are offered as a service to the airlines. According to Boeing, Airplane Health Management gives airlines the ability to monitor airplane systems and parts and to interactively troubleshoot issues while the plane is in flight. Data collected through these types of programs are captured in flight and transmitted in real time to the airline’s ground operations. Airbus representatives told us that their program is also designed to collect information from various aircraft systems and determine probabilities and likelihood of equipment failure. According to Boeing, airlines can use this service to make maintenance decisions before the plane has landed and be ready for any needed repairs as soon as the airplane arrives at the gate. This information is used by airlines to support operational decisions to “fix-or-fly,” which result in reduced schedule interruptions and increased maintenance and operational efficiency, according to Boeing.
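The "fix-or-fly" style of health monitoring described above can be sketched in minimal form: a monitored engine parameter is trended across flights, and maintenance is flagged when the recent trend crosses a limit. The parameter (exhaust gas temperature margin), the limit, and the window size are assumptions chosen for illustration, not Boeing's or Airbus's actual logic.

```python
# Minimal sketch of trend-based health monitoring. All values are illustrative.
from statistics import mean

def needs_maintenance(egt_margin_history, limit=20.0, window=3):
    """Flag maintenance when the recent average EGT margin (deg C,
    a hypothetical monitored parameter) drops below the limit."""
    recent = egt_margin_history[-window:]
    return mean(recent) < limit

# A degrading trend across successive flights (illustrative data).
history = [42.0, 38.5, 30.1, 24.0, 19.5, 17.2]
print(needs_maintenance(history))           # recent avg ~20.2 -> False
print(needs_maintenance(history + [15.0]))  # recent avg drops below 20 -> True
```

The value of this kind of rule, as the airlines described it, is that the flag fires before an in-flight failure, so the repair can be scheduled while the aircraft is still in service.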
Boeing’s Airplane Health Management was first introduced with the 747 and 777 aircraft models, but it has been thoroughly embedded in the design of the 787 model. To support the Airplane Health Management service for the 787, Boeing has a control center where each plane is tracked and its systems are monitored. Similarly, Airbus representatives said their airplane health management program supports the most recent aircraft models.

Appendix: Aircraft accidents over water referenced in this report (airline/flight where available, aircraft type, and location):
Airbus A320: Karimata Strait, Java Sea (off the coast of Pangkalan Bun, Indonesia)
Malaysia Airlines Flight 370, Boeing B777: Southern Indian Ocean west of Australia (presumed)
Ethiopian Airlines Flight 409, Boeing B737: Mediterranean Sea (Beirut, Lebanon)
Airbus A310: Moroni, Comoros Islands (off the coast of West Africa)
Airbus A320: New York, USA (Hudson River)
Off the coast of Moorea, French Polynesia
Boeing B737: Makassar Strait (off the coast of Sulawesi, Indonesia)
Airbus A320: Black Sea (off the coast of Sochi, Russia)
Off the coast of Palermo, Italy
Boeing B747: Taiwan Strait (northeast of Makung, Penghu Islands)
Pacific Ocean (north of Anacapa Island, California)
Airbus A310: Abidjan (off the Ivory Coast, West Africa)

In addition to the contact named above, the following individuals made important contributions to this report: Andrew Von Ah, Assistant Director; Amy Abramowitz; Nabajyoti Barkakati; Jonathan Carver; Leia Dickerson; Chris Falcone; Geoff Hamilton; Bert Japikse; Delwen Jones; Josh Ormond; Madhav Panwar; Justin Reed; and Elizabeth Wood.
The AF447 and MH370 disasters have raised questions about why authorities have been unable to locate passenger aircraft. In response to these aviation accidents, government accident investigators, international organizations, and industry have offered proposals that aim to enhance oceanic flight tracking and flight data recovery on a global scale. Given the implications for the U.S. commercial fleet, it is essential that the Congress understand the strengths and weaknesses of these proposals. GAO was asked to review efforts to enhance aircraft tracking and flight data recovery. This report describes (1) the challenges in tracking aircraft and recovering flight data highlighted by recent commercial aviation accidents over oceanic regions; (2) government and industry proposals to enhance aircraft tracking, and how aviation stakeholders view their strengths and weaknesses; and (3) government and industry proposals to enhance the recovery of flight data, and how aviation stakeholders view the proposals' strengths and weaknesses. GAO reviewed reports by government accident investigators and others, and technology presentations by avionics manufacturers, including current cost data, which were not available in all cases. GAO also interviewed 21 aviation stakeholders, including FAA, the National Transportation Safety Board, and industry, selected based on their expertise in aviation technology and flight operations. FAA and NTSB provided technical comments on a draft of this report, which were incorporated as appropriate. The crash of Air France Flight 447 (AF447) off the coast of Brazil in June 2009 and the disappearance of Malaysia Airlines Flight 370 (MH370) in the southern Indian Ocean in March 2014 highlight several challenges authorities may face in locating aircraft in distress and recovering flight recorders. First, oceanic surveillance is limited, and an aircraft's position may not be precisely known.
For example, MH370 continued to fly for several hours outside of radar coverage after onboard communications equipment was no longer working, according to investigators. Additionally, communication and coordination between air traffic control centers in oceanic regions pose challenges. Finally, these accidents show that investigators may have difficulty locating and recovering flight recorders, which are used to determine accident causes, because of the ocean's depth and terrain. For instance, locating AF447's flight recorders took 2 years at a cost of approximately $40 million. Proposals to enhance aircraft tracking: Following the disappearance of MH370, the international aviation community developed voluntary performance standards to establish a near-term aircraft tracking capability using existing technologies and a long-term comprehensive aircraft tracking concept of operations. Near-term voluntary aircraft tracking performance standards: An industry task force called for automatic position reporting to airlines every 15 minutes and faster reporting when triggers, such as an unusual change in altitude, are met. According to stakeholders, existing technologies can meet this standard, and many domestic long-haul aircraft are equipped to do so, although some additional ground infrastructure may be needed. However, some airlines may face costs to equip aircraft with these technologies. In the longer term, technologies like satellite-based surveillance may provide global aircraft tracking. Long-term global aeronautical distress system: The International Civil Aviation Organization has proposed a long-term framework, which is designed to ensure an up-to-date record of aircraft progress and, in the case of emergency, the location of survivors, the aircraft, and its flight recorders. Stakeholders noted that the new framework begins to address the need for improved coordination and information sharing.
One component is a tamper-proof distress tracking system, which is not yet available. Proposals to enhance flight data recovery: Low-cost actions are planned to increase the battery life of the underwater locator beacon—which emits a “ping” to help locate the flight recorders—from 30 to 90 days. In the longer term, two proposals seek to enable flight data recovery without underwater retrieval; however, neither would eliminate investigators' need to recover the wreckage itself or eliminate all search and recovery costs. Automatic deployable recorders: Designed to separate automatically before a crash and float, deployable recorders may be easier to recover. However, stakeholders are divided on equipping the commercial fleet. Some raised concerns that safety testing is needed and that equipage costs are high and potentially unnecessary given the rarity of oceanic accidents. Triggered transmission of flight data: Transmitting data automatically from the aircraft during emergencies would allow for some post-flight analysis when the flight recorders cannot be easily recovered. However, some stakeholders raised feasibility and data protection concerns.
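The near-term tracking standard described above, routine position reports at a 15-minute interval with faster reporting when a trigger such as an unusual altitude change is met, can be sketched as follows. The interval values and the trigger rule are illustrative assumptions, not the task force's exact specification.

```python
# Sketch of trigger-based position reporting cadence. Values are assumptions.
ROUTINE_INTERVAL_MIN = 15   # routine report interval (task force called for 15 min)
DISTRESS_INTERVAL_MIN = 1   # faster rate once a trigger fires (illustrative)

def report_interval(altitude_change_fpm: float, threshold_fpm: float = 5000.0) -> int:
    """Return the position-reporting interval in minutes for the current
    flight state, based on a hypothetical altitude-rate trigger."""
    if abs(altitude_change_fpm) >= threshold_fpm:
        return DISTRESS_INTERVAL_MIN  # trigger met: report more frequently
    return ROUTINE_INTERVAL_MIN       # normal flight: routine reporting

print(report_interval(500.0))    # normal climb rate -> 15
print(report_interval(-8000.0))  # abnormal descent rate -> 1
```

The design choice the standard reflects is cost-driven: routine reports stay sparse to limit satellite data charges, and the expensive high-rate reporting is reserved for abnormal flight states.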
The MHS is a complex organization that provides health services to its beneficiaries across a range of care venues, from the battlefield to traditional hospitals and clinics at stationary locations. The current management of this large health system is spread over several organizations in order to meet its two-fold mission of ensuring servicemember readiness and delivering beneficiary care. Over the years many studies have been conducted to assess potential changes to the governance structure of the MHS. DOD operates its own large, complex health system that employs almost 140,000 military, civilian, and contract personnel who work in medical facilities throughout the world to provide health care to approximately 9.7 million beneficiaries. Operationally, the MHS has two missions: supporting wartime and other deployments, known as the readiness mission, and providing peacetime care, known as the benefits mission. The readiness mission provides medical services and support to the armed forces during military operations, including deploying medical personnel and equipment throughout the world, and ensures the medical readiness of troops prior to deployment. The benefits mission provides medical services and support to members of the armed forces, retirees, and their dependents. Beneficiaries fall into several different categories: (1) active duty servicemembers and their dependents, (2) eligible National Guard and Reserve servicemembers and their dependents, and (3) retirees and their dependents or survivors. As of May 2012, active duty servicemembers and their dependents represented 36.7 percent of the beneficiary population, eligible National Guard and Reserve servicemembers and their dependents represented 9.5 percent, and retirees and their dependents or survivors made up the remaining 53.8 percent. See figure 1. 
Reporting to the Under Secretary of Defense (Personnel and Readiness), the Assistant Secretary of Defense (Health Affairs) is the principal advisor for all DOD health policies, programs, and force health protection activities. The Assistant Secretary of Defense (Health Affairs) issues policies, procedures, and standards that govern management of DOD medical programs and has the authority to issue DOD instructions, publications, and directive-type memorandums that implement policy approved by the Secretary of Defense or the Under Secretary of Defense (Personnel and Readiness). As the Director of the TRICARE Management Activity, the Assistant Secretary of Defense (Health Affairs) is also responsible for awarding, administering, and overseeing approximately $24.4 billion in fiscal year 2012 funding for DOD’s purchased care network of private sector civilian primary and specialty care providers. Additionally, the Assistant Secretary of Defense (Health Affairs) integrates the military departments’ budget submissions into a unified medical budget that provides resources for MHS operations; however, the military services have direct command and control of the military hospitals and their medical personnel. See figure 2 for the current organizational structure of the MHS. The care of the eligible beneficiary population is also spread across the Army, the Navy, and the Air Force, which deliver care at 56 inpatient facilities and hundreds of clinics. Both the Army and the Navy have medical commands headed by surgeons general. The Army’s portion of the fiscal year 2012 Unified Medical Budget’s funding is approximately $11.8 billion, and it manages 24 of the 56 inpatient facilities. Additionally, the Navy’s portion of the fiscal year 2012 Unified Medical Budget funding was approximately $6.4 billion. It manages 19 of the 56 inpatient facilities and provides medical support to the Marine Corps. 
Additionally, the Air Force’s portion of the fiscal year 2012 Unified Medical Budget’s funding is approximately $6.6 billion, and it manages 13 of the 56 inpatient facilities. The Air Force Surgeon General serves as medical advisor to the Air Force Chief of Staff and as functional manager of the Air Force Medical Service. Air Force hospitals and their personnel do not report to the Air Force Surgeon General, but instead report directly to local line commanders. Each military department also recruits, trains, and funds its own medical personnel to administer the medical programs and provide medical services to beneficiaries. Specifically for the management of Military Treatment Facilities within the National Capital Region and the execution of related Base Realignment and Closure (BRAC) actions in that area, an additional medical organizational structure and reporting chain was established in 2007. This structure is known as the Joint Task Force National Capital Region Medical, and its Commander reports to the Secretary of Defense through the Deputy Secretary of Defense. The two inpatient medical facilities in the area, Walter Reed National Military Medical Center and Fort Belvoir Community Hospital, were directed by the Deputy Secretary of Defense in January 2009 to become joint commands. “The Joint Chiefs of Staff recommend unanimously that the Secretary of Defense immediately institute studies and measures intended to produce, for the support of the three fighting services, a completely unified and amalgamated (single) Medical Service.” As noted in DOD’s 2011 Task Force report, a long series of studies have addressed the issue of DOD’s health care organization. Performed by both internal and external boards, commissions, task forces, and other entities, a number of these studies have recommended dramatic changes in the organizational structure of military medicine. See figure 3 for a timeline of MHS governance studies.
Although many of these studies favored a unified system or a stronger central authority to improve coordination among the services, major organizational change has historically been resisted by the military services in favor of the retention of their respective independent health care systems. In 1995, we reported that interservice rivalries and conflicting responsibilities hindered improvement efforts, and noted that the services’ resistance to changing the way military medicine is organized is based primarily on the grounds that each service has unique medical activities and requirements. In June 2011, with the pending completion of the consolidation of medical facilities and functions in the National Capital Region undertaken by DOD in response to 2005 BRAC Commission recommendations, the Deputy Secretary of Defense recognized that a final decision concerning the governance of military health care in the capital region needed to be made. This need for a decision provided an opportunity to address the desired end-state governance structure of the entire MHS. Furthermore, in light of the considerable, long-term fiscal challenges the nation faces, and the 2010 comprehensive review established by the then-Secretary of Defense to inform future decisions about spending on national security, the Deputy Secretary of Defense wrote that it was important to ensure that the MHS was organized in a way that curtails expenses and achieves savings to the greatest extent possible in meeting its mission. As a result, in June of 2011, the Deputy Secretary of Defense established an internal task force to conduct a review of the current governance structure of the MHS.
The Task Force was directed to evaluate options for the long-term governance of the MHS as a whole and for the governance of multi-service medical markets, to include the National Capital Region, and to provide a report within 90 days detailing the relative strengths and weaknesses of each option evaluated as well as recommendations. The Deputy Secretary of Defense designated the co-chairs of the Task Force as the Deputy Assistant Secretary of Defense (Force Health Protection and Readiness) and the Joint Staff Surgeon. The Task Force also contained representatives from the military services, the Joint Chiefs of Staff, the DOD Comptroller, the Cost Assessment and Program Evaluation Office, and the Under Secretary of Defense (Personnel and Readiness). In addition to this membership, the co-chairs included representatives from the Office of the Deputy Secretary of Defense, the DOD Office of General Counsel, Legislative Affairs, and Administration and Management as advisors to the Task Force. Potential Governance Structures—DOD’s report, submitted to the Deputy Secretary of Defense in September of 2011 and later to Congress in response to section 716 of the National Defense Authorization Act for Fiscal Year 2012, presented 13 potential governance structures for the overall MHS, with other options for governance of multi-service medical markets and the National Capital Region. The potential options were variations of the following three governance structures: The defense health agency governance structures would create a combat support agency led by a 3-star flag officer (Lieutenant General or Vice Admiral) who would report to the Assistant Secretary of Defense (Health Affairs). This agency would be focused on consolidating and delivering a set of shared health care support services.
DOD presented variations of a defense health agency which would (1) leave management of the Military Treatment Facilities with the military services, (2) place the management of the Military Treatment Facilities under the control of the defense health agency, or (3) create hybrid structures by pairing the agency with other options such as a unified medical command. The unified medical command governance structures would create a unified functional combatant command led by a 4-star flag officer (General or Admiral) who would report to the Secretary of Defense. The command would exercise direction and control over the entire MHS but would do so either through (1) service components, (2) geographic regions, (3) a subordinate health care command, or (4) various hybrid governance structures pairing the unified medical command with other options such as a designated single service structure. Finally, the single service governance structures would place overall control of the MHS under one designated military department Secretary, who would report to the Secretary of Defense. However, each of the services would continue to organize, train, and equip their respective forces. The Military Treatment Facilities would report to the designated military department Secretary through a variety of local and regional command combinations. See appendix II for a more detailed description of DOD’s potential governance options. Task Force Voting Process—Through the course of 20 formal meetings, the Task Force members evaluated 13 potential overall governance structures by first establishing criteria for evaluation and developing a system to weight the criteria to reflect their relative importance. After formulating criteria, the Task Force discussed each potential governance structure in detail. Following discussion, the individual Task Force members voted on the governance structures by scoring them according to the criteria.
The member score was then adjusted by the weights established by the Task Force, and the governance structures were ranked according to the final, weighted score. The Task Force held five voting rounds on the governance options throughout the 90 days allotted to the review process (rather than holding a single vote at the end of the review). Using this method, the Task Force evaluated the potential structures in “head-to-head” voting rounds, until the governance option that the Task Force believed was the highest ranking was determined. Task Force Results—The Task Force provided the Deputy Secretary of Defense with a report and a recommendation as to which course of action to follow for changing the governance structure of the overall MHS, the multi-service medical markets, and specifically, the National Capital Region. For the overall MHS, the Task Force’s recommendation was to pursue the formation of a defense health agency which would consolidate common shared services in support of the three military departments and to leave the medical components of the military departments as they are currently. The rationale for this action, according to the Task Force report, was that it would allow DOD to create shared services and common business and clinical practices under one leader without large-scale changes to the MHS at this time. According to the Task Force report, pursuing this preferred option would not preclude subsequent decisions by the Department to implement more sweeping changes in the future and was considered an appropriate incremental next step to improving MHS governance and providing a structure to rein in health care costs. DOD’s preferred option would create a defense health agency that would assume the responsibility for shared services in the MHS with military hospitals remaining under the control of the services, while other potential options represent larger scale changes.
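The Task Force's weighted scoring and ranking process described above, in which member scores against each criterion are multiplied by the criteria weights and the options are ranked by total, can be sketched as follows. The criteria names, weights, and scores below are invented for illustration; they are not the Task Force's actual criteria or values.

```python
# Hypothetical sketch of weighted multi-criteria scoring. All values invented.

def rank_options(options, weights):
    """options: {option name: {criterion: score}}; weights: {criterion: weight}.
    Returns option names sorted from highest to lowest weighted total."""
    totals = {
        name: sum(weights[c] * s for c, s in scores.items())
        for name, scores in options.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Illustrative criteria, weights, and member scores (0-10 scale assumed).
weights = {"readiness": 0.4, "cost": 0.35, "disruption": 0.25}
options = {
    "defense health agency":   {"readiness": 8, "cost": 7, "disruption": 9},
    "unified medical command": {"readiness": 9, "cost": 6, "disruption": 4},
    "single service":          {"readiness": 7, "cost": 7, "disruption": 5},
}
print(rank_options(options, weights))
# -> ['defense health agency', 'unified medical command', 'single service']
```

In the actual process, each member's scores were aggregated and the ranking was revisited across five head-to-head voting rounds rather than computed in a single pass.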
In addition, DOD’s preferred option would include implementing a shared services concept, which was common to all of its governance options; however, DOD did not develop a business case analysis that would provide a data-driven rationale for implementing the concept. DOD’s preferred option would create a defense health agency that would report to the Assistant Secretary of Defense (Health Affairs) and would consolidate and deliver shared services in the MHS, while the services would maintain control of their military hospitals. DOD presented a wide range of governance structures in its report, such as creating another unified functional combatant command or establishing a single service in charge of all medical operations. However, DOD’s preferred option does not require complex changes in long-established military chains of command as some other structures would. As discussed earlier, currently the Army, the Navy, and the Air Force manage their own personnel, hospitals, and medical operations. The Assistant Secretary of Defense (Health Affairs) exercises authority, direction, and control over the policies and resources of the MHS, but does not have command and control over the military hospitals or over the respective military departments’ medical personnel. Over the years, DOD has shifted certain responsibilities and authorities among various MHS officials, as seen in the establishment of the Assistant Secretary of Defense (Health Affairs) authority over the Defense Health Program in 1991 and the TRICARE Management Activity in 1998, both of which remain part of the current MHS. In 1991, the Defense Health Program was established as the result of a study of governance options for the MHS to address concerns about recurring funding crises and concerns over the inconsistent distribution of health care services and benefits among the different military departments.
In 1992, Department of Defense Directive 5136.1 assigned responsibility for the program to the Assistant Secretary of Defense (Health Affairs). Later, the TRICARE Management Activity was created to reduce duplication within management of the MHS and transfer the direct management of several functions away from the Assistant Secretary of Defense (Health Affairs) to allow that position to concentrate on major policy and Defense Health Program related issues and initiatives. Together, all of these entities and their responsibilities have evolved into the current MHS governance structure. The Task Force reviewed multiple versions of three basic governance structures—defense health agency, unified medical command, and single service. The options’ primary differences from the current structure of the MHS occur mainly in three particular areas of roles and responsibilities—overall control, budgetary authority, and control of personnel. DOD’s preferred governance option is a defense health agency with military hospitals remaining under the control of the military services. The unified medical command options would assign the services’ medical assets to a functional combatant command. Lastly, the single service options would assign these assets to a single military service. Figure 4 summarizes variations of these three structures and the current structure as it has evolved over the years, while figure 5 presents a number of hybrid models also considered by the Task Force, such as an option which includes a unified medical command sharing responsibilities with a defense health agency. Overall control – determines policy making authority, dispute resolution, and lines of accountability.
Under the current MHS structure, the Assistant Secretary of Defense (Health Affairs) exercises authority, direction, and control of policy and resources, but DOD noted that in practice, this structure fails to take advantage of consensus opportunities to more rapidly implement common business processes. DOD’s preferred option of a defense health agency with military hospitals remaining under the control of the services would establish a military-led combat support agency organized under the Assistant Secretary of Defense (Health Affairs) that would have authority, direction, and control of shared services, health plan management, and other strategic areas, while another version of this option would assign the proposed defense health agency control of military hospitals. The unified medical command options would place authority, direction, and control of the MHS with a functional combatant commander, along with direct responsibility for execution of health care services. This option would mark a departure from the current separation of these responsibilities among the three military services. The single-service options would assign responsibility for the MHS as a whole to a designated military service Secretary who would also command all military hospitals—a departure from the current arrangement of split responsibility between the Assistant Secretary of Defense (Health Affairs) and the military departments. Under all options, the Assistant Secretary of Defense (Health Affairs) would retain a policy-making role. Budgetary authority – determines the organizational entity or entities with responsibility over the Defense Health Program appropriation. In 2007, the Defense Health Board stated in its report, Task Force on the Future of Military Health Care, that the MHS does not function as a fully integrated health care system and that this lack of integration diffuses accountability for fiscal management and results in misalignment of incentives.
The current governance structure vests overall budgetary authority with the Assistant Secretary of Defense (Health Affairs), who allocates funds to the services to execute their respective budgets. The Assistant Secretary of Defense (Health Affairs) does not have command and control over the military hospitals or over the respective military departments’ medical personnel. DOD’s preferred option of a defense health agency with military hospitals remaining under the control of the services would not alter this aspect of the current governance structure. Under an alternative potential option of a defense health agency with the military hospitals under the agency’s control, the agency would assume direct control of the Defense Health Program appropriation. Like the latter option, the unified medical command options would consolidate budgetary authority with a single official, a functional combatant commander, along with direct responsibility for the execution of health care, and would mark a departure from the current separation of these responsibilities. Similarly, the single-service options would streamline budgetary authority by vesting all such authority with a designated military service Secretary. Personnel control – determines which entity has management and supervisory responsibility over the personnel working within the MHS. Historically, military services have exercised command and control over their own medical personnel, and Task Force members told us that control over the medical personnel was a sensitive issue in their discussions. DOD’s preferred option of a defense health agency with military hospitals remaining under the control of the services would allow the services to maintain control over their own personnel. An alternative potential option of a defense health agency with the military hospitals under the agency’s control would allow the agency to take control of personnel not assigned to deployable units.
The unified medical command options vary in the level of authority retained over personnel: some options assign control of all forces to the unified medical command, while others assign control of only some personnel to it. The single service options provide some level of control of all personnel to the designated single military service in charge of the MHS, with variations of this option concerning the assignment of deployable medical personnel to their respective military services. DOD’s potential governance options would have different effects on multi-service medical market governance, which is the management of medical care in a geographic area where more than one service operates Military Treatment Facilities through a common business plan and coordination of resources. For example, a single service option would make such coordination unnecessary, while it still might be required under a unified medical command. In its report, DOD cites the main weakness of the current governance of multi-service markets as the failure to fully leverage the medical capabilities across service boundaries in a given market to achieve efficiencies. According to DOD officials, DOD’s preferred option of a defense health agency would allow the department to implement an enhanced management structure for the multi-service medical markets that would drive such efficiencies while avoiding complex changes to long-established military chains of command. According to DOD’s report, the authorities of the multi-service market managers would be expanded to include responsibility for developing a 5-year business plan, budgetary authority for the entire medical market, and the authority to direct personnel to work in other locations within the market on a short-term basis, among other authorities. However, DOD’s current effort is not its first attempt at improving multi-service medical market governance.
As its report notes, DOD has experimented with different approaches to multi-service medical markets over the past 25 years, including the 2003 establishment of Senior Market Managers responsible for coordinating the development of a single business plan for all Military Treatment Facilities in each such market. In 2006, the Deputy Secretary of Defense approved the implementation of an alternative to wholesale changes in the structure of the MHS, which included seven targeted governance initiatives. Among other things, the initiatives included the establishment of governance structures for two multi-service medical markets, San Antonio and the National Capital Region, as well as the creation of governance structures that consolidate command and control of military treatment facilities in other multi-service medical markets. As we reported in 2012, DOD established these structures in San Antonio and the National Capital Region, but had made no changes to the governance structures of other multi-service markets. Several senior DOD officials told us that while they recognize there are efficiencies to be gained in multi-service markets, they have reservations concerning the details of DOD’s plans for reforming such markets. One senior DOD official highlighted challenges that may arise in their operation, such as the control of medical personnel to support deployments and other missions and the coordination of market business plans with the services’ priorities. DOD did not present a business case analysis for proceeding with its shared services concept common to all of the proposed governance structures, including an estimate of costs to merge shared services functions, operational savings to be accrued, or the likely timeframe in which this service consolidation would achieve savings.
As we have previously reported, a business-case analysis can provide a data-driven rationale for why an agency is undertaking a consolidation initiative such as a shared services concept; because consolidation is beneficial in some situations and not in others, a case-by-case analysis is necessary. DOD has twice proposed this shared services concept in the past, which would consolidate areas such as information technology, contracting, and public health under one entity. DOD first proposed implementation of a shared services concept in 2006 as part of a series of seven different incremental governance initiatives adopted as an alternative to wholesale changes in the structure of the MHS. Specifically, implementation of the initiative would have created a Joint Military Health Service Directorate under a joint senior flag officer reporting to the Assistant Secretary of Defense (Health Affairs), a structure not unlike DOD’s preferred option of a defense health agency in its current review of MHS governance. At the time, we recommended that DOD demonstrate a sound business case, including an analysis of benefits, costs, and risks, for proceeding with its seven initiatives, and DOD concurred with our recommendation. Further, we later reported that DOD had not developed such estimates, and the Assistant Secretary of Defense (Health Affairs) had not provided guidance on how and when to complete these initiatives. In 2012, we also reported that in the prior calendar year, DOD approved a plan to reorganize the TRICARE Management Activity and establish a shared services division with a new Military Health System Support Activity as part of the former Secretary of Defense’s effort to increase efficiencies and reduce costs within the department.
As a result of this effort, DOD reduced the fiscal year 2012 Defense Health Program budget request anticipating the establishment of the Military Health System Support Activity, but the initiative was put on hold pending the results of the Task Force report. In addition to a data-driven analysis, our body of work on organizational mergers, acquisitions, and other transformations has shown that agencies should apply essential change management practices, such as active, engaged leadership by top leaders and a dedicated implementation team, to ensure the continued attention needed for a transformation to be sustained and successful. We previously reported that during the implementation of its 2006 governance initiatives, including efforts to establish a shared services directorate, DOD did not establish an effective and ongoing communication strategy, did not establish a dedicated implementation team, and top leadership did not provide the sustained direction needed to maintain progress. Moreover, we also have previously reported on the challenges that other federal agencies have faced in attempting to implement shared services. For example, in December 2005, the Department of Homeland Security (DHS) halted its eMerge program, which was expected to integrate financial management systems across the entire department and address financial management weaknesses, after DHS had spent about $52 million, according to officials. We noted our concern that, moving forward, DHS did not have a fully developed financial management strategy and plan for the integration of its financial management systems and shared services, such as information technology hosting, business process services, and application management services. GAO, Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies, GAO-03-293SP (Washington, D.C.: Nov. 14, 2002).
In proposing this concept, DOD did not estimate potential savings from consolidating common services. According to DOD officials, DOD’s preferred option of a defense health agency is a significant change for the MHS because it would allow the department to implement shared services in order to drive the adoption of common business and clinical practices and achieve efficiencies while not requiring complex changes to long-established military chains of command. However, under the current governance structure, the Assistant Secretary of Defense (Health Affairs) has the broad authority that could allow for the implementation of shared support services across the MHS. As noted above, DOD has developed proposals to exercise such authority in the past, but such proposals have never been implemented. Further, DOD has not developed a business case analysis for its shared services concept since it was first proposed in 2006. Until DOD develops a more detailed business case analysis, it lacks a data-driven rationale and a strategy for proceeding with the implementation of this concept. CNA Center for Naval Analyses, Cost Implications of a Unified Medical Command, 2006. In addition to its analysis of the National Capital Region, the study provides several categories of potential savings beyond headquarters personnel. For example, health care operations savings could be accrued through administrative consolidation between large medical facilities that perform responsibilities on behalf of smaller clinics in the same geographic area. CNA reported that by bridging current service administrative boundaries, which require the smaller clinics to report to larger facilities of the same service, potential savings could be accrued by designating a single facility in a geographic area to perform technical, legal, and administrative functions on behalf of all nearby clinics, regardless of service affiliation.
In addition, governance reorganization may provide an opportunity to reduce infrastructure costs, and as the CNA report notes, the timeline for realizing cost savings could influence the amount of possible short- and long-term savings. The difference in the areas of cost savings considered and methodological approaches could explain the varying results of CNA’s and DOD’s analyses. For example, DOD estimated a net cost increase for a unified medical command option, while CNA estimated a net cost savings for the same option in its 2006 study. As noted earlier, although DOD’s preferred option assumes that DOD would achieve some personnel efficiencies due to greater use of shared services, DOD did not estimate the operational savings it expects, such as savings from consolidated contracts. DOD used several potentially flawed assumptions in estimating its headquarters personnel savings, and therefore it cannot be assured that its methods to estimate such savings produced reliable results. First, DOD estimated the size of a unified medical command by using Joint Task Force National Capital Region Medical (JTF CapMed) headquarters as an example of a unified medical command on a small scale. DOD estimated that given this command performs 10 percent of MHS operations with 150 personnel, a unified medical command would require a minimum of 1,500 personnel. However, DOD did not present evidence that 150 personnel is the most efficient number of staff for JTF CapMed, and the assumption concerning the relationship between the number of staff at JTF CapMed and a unified medical command is questionable because economies of scale could create efficiencies, thereby requiring fewer personnel. Moreover, the report assumes such economies of scale in its estimate of personnel savings from shared services functions.
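The scaling concern described above can be illustrated with a brief calculation. The 150-person and 10-percent figures come from DOD’s report; the economies-of-scale exponent below is purely a hypothetical illustration, not a DOD or GAO figure:

```python
# Illustrative sketch of DOD's headquarters staffing extrapolation and
# the economies-of-scale concern. From DOD's report: JTF CapMed performs
# about 10 percent of MHS operations with 150 personnel. The sub-linear
# exponent below is a hypothetical assumption for illustration only.

def linear_estimate(baseline_staff: int, baseline_share: float) -> float:
    """DOD's approach: scale staff linearly with the share of operations."""
    return baseline_staff / baseline_share

def scaled_estimate(baseline_staff: int, baseline_share: float,
                    exponent: float = 0.8) -> float:
    """Sub-linear scaling: an exponent below 1 models economies of scale,
    so staffing grows more slowly than the span of operations."""
    return baseline_staff * (1 / baseline_share) ** exponent

linear = linear_estimate(150, 0.10)   # DOD's minimum of 1,500 personnel
scaled = scaled_estimate(150, 0.10)   # lower figure under economies of scale

print(f"Linear extrapolation: {linear:,.0f} personnel")
print(f"With illustrative economies of scale: {scaled:,.0f} personnel")
```

The gap between the two estimates shows why the linear assumption matters: any economies of scale would lower the personnel requirement, and therefore the projected savings, relative to DOD’s figure.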
Second, DOD used the services’ execution of the Defense Health Program’s Operations and Maintenance budget to determine the most efficient staffing requirements for service support and regional commands. While these commands currently execute only their respective services’ portion of the budget, DOD estimated the number of full-time equivalents each of these commands would require if charged with executing the entire budget. However, in its report, DOD undermined the credibility of this method by citing numerous weaknesses and characterizing the approach as “not a credible predictor of staffing requirements.” Third, DOD determined the cost of the potential options’ personnel requirements by multiplying the average TRICARE Management Activity civilian compensation by the number of required staff. However, this figure excluded military personnel, whose compensation is markedly different from that of civilian personnel. Task Force officials stated that the internal 90-day deadline required by the Deputy Secretary of Defense for the Task Force to complete its report did not allow for a detailed analysis of implementation costs or a more thorough review of possible cost savings, and that this time period also limited the practicality of more detailed analysis. However, the National Defense Authorization Act for Fiscal Year 2012, which required a report on MHS governance options to be submitted to the congressional defense committees, was passed approximately 3 months after the Task Force completed its review and contained no specific deadline for DOD to submit its report. DOD chose to submit the report developed by the Task Force in response to the Deputy Secretary of Defense’s direction, along with an additional cost analysis in response to the statutory requirement. However, DOD could have conducted additional analysis before submitting its report to the congressional defense committees.
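The compensation-averaging concern noted above—multiplying an average civilian compensation figure by total staffing—can be sketched as follows. All dollar figures and headcounts in this sketch are hypothetical illustrations, not DOD data:

```python
# Hypothetical sketch of DOD's personnel cost method and the concern
# about excluding military compensation. All figures are illustrative.

def uniform_rate_cost(staff: int, avg_civilian_comp: float) -> float:
    """DOD's approach: average civilian compensation times total staff."""
    return staff * avg_civilian_comp

def blended_rate_cost(civilians: int, military: int,
                      avg_civilian_comp: float,
                      avg_military_comp: float) -> float:
    """A mix-aware alternative: cost civilian and military billets
    separately, since their compensation differs markedly."""
    return civilians * avg_civilian_comp + military * avg_military_comp

avg_civ = 110_000.0   # illustrative average civilian compensation
avg_mil = 140_000.0   # illustrative average military compensation

uniform = uniform_rate_cost(1_500, avg_civ)
blended = blended_rate_cost(1_000, 500, avg_civ, avg_mil)

# The two methods diverge whenever military compensation differs from
# the civilian average -- the gap identified in DOD's estimate.
print(f"Uniform civilian rate: ${uniform:,.0f}")
print(f"Blended rate:          ${blended:,.0f}")
```

Because the divergence scales with both the military share of the workforce and the compensation gap, an estimate built solely on a civilian average can materially misstate personnel costs.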
Given the concerns outlined above, DOD has not comprehensively assessed the net costs of the various governance options. As we reported in 2007, such information is critical to making data-informed decisions about the structure of the MHS, especially in light of the nation’s current fiscal challenges. Past transformation experiences, such as the BRAC process, and prior reports on MHS governance, such as the 2006 CNA study, could provide a starting point for DOD in the consideration of possible implementation costs, cost savings areas, and methods of estimating such cost data. DOD has selected its preferred structure, a Defense Health Agency with the Military Treatment Facilities remaining under the services, without the benefit of a comprehensive cost analysis that explores these areas. Table 1 provides possible implementation cost and cost savings categories from BRAC and the 2006 CNA study and available estimates for DOD’s preferred defense health agency governance option as provided in its report. Prior attempts to proceed with MHS reorganization without the benefit of reliable estimates of implementation costs and cost savings demonstrate the effects of such an approach. In 2007, DOD did not conduct a comprehensive cost-benefit analysis, including an analysis of benefits, costs, and risks, for proceeding with its preferred medical governance concept at that time, which consisted of seven different incremental governance initiatives. At the time, DOD concurred with our recommendation that it develop such an analysis, but we reported in 2011 that it had not done so. Additionally, in 2012, we reported that DOD had documented estimated financial savings for only one of those seven governance initiatives, while at least one other had an estimated cost increase. Without such an analysis, DOD’s current effort may produce similar results.
DOD used a qualitative process to support its assessment of the strengths and weaknesses of the 13 potential governance options presented in its report, but did not balance this support with quantitative data. We recognize that the use of quantitative data is a key component of study quality, and DOD’s criteria call for assessing the options based on quantitative data. Also, DOD did not mention in its report two of the criteria it identified as most important for assessing the governance options, asserting that no option that adversely affected these two criteria would be recommended. DOD used a deliberative and qualitative approach to assess the strengths and weaknesses of the 13 potential governance structures presented in its report that included developing and applying criteria to each of the 13 governance options it developed. In establishing the MHS review, the Deputy Secretary of Defense prescribed that the review should assess potential governance options based on their fulfillment of the following criteria: provision of high-quality, integrated medical care for servicemembers; maintenance of a trained and ready deployable medical force to support combatant commanders; and achievement of significant cost savings through, for example, elimination of redundancies, increased interoperability, and other means of promoting cost-efficient delivery of care. The Deputy Secretary of Defense noted that the Task Force members could consider additional criteria in their review. As such, the Task Force members collectively decided to split the criteria provided by the Deputy Secretary of Defense into separate criteria and add two new criteria, for a total of seven criteria used to assess the MHS governance options (see figure 6). The Task Force members also collectively defined each of the criteria and added a weight to each based on their expert opinion of the relative importance of the criteria.
The definitions of the seven criteria used by DOD—while mostly qualitative—included elements for certain criteria that called for quantitative data. For example, DOD defined the Ease of Implementation criterion as “The alternative should be implementable taking into account Title 10 equities, short-term costs and long-term savings, and decisions required inside and outside of the DOD.” DOD defined the Achieve Significant Cost Savings through Reduction in Duplication and Variation criterion as “The alternative should result in a reduction of the system operating costs.” As we have previously reported, the quality of assessments can be strengthened by using a mixed approach that includes both qualitative and quantitative information to remove concerns about bias in one data source. DOD’s review obtained a great deal of qualitative information from stakeholders and internal experts and used this information to support its assessment of the strengths and weaknesses of the governance options, but it did not balance this assessment with quantitative data. Specifically, DOD’s assessments of the strengths and weaknesses did not provide data to support the quantitative elements specified in its own criteria, as indicated in the following two examples. DOD did not attempt to estimate the range of either short- or long-term costs or savings data associated with the governance structures in the 14 instances where Ease of Implementation was listed either as a strength or a weakness. Instead, the DOD report cited qualitative information such as “this action would represent a significant departure in governance for all existing organizations” or “this will entail a large scale reorganization to include re-mapping of service medical personnel to operational platforms and there is no known precedent or example where this approach has been tested in other military medical organizations worldwide” as support for the assessment of this criterion.
As one leading industry official told us, statements about how hard it would be to change and that the change would be too disruptive to actually implement are not sufficient evidence for avoiding necessary change. DOD did not provide cost data as support for the 11 instances where Achieve Significant Cost Savings through Reduction in Duplication and Variation was listed as a strength or a weakness. During the Task Force meetings, members expressed concern that no business case was presented for the governance options, and that the support presented for the assessment of strengths and weaknesses consisted of “descriptive statements.” According to the meeting minutes, it was understood by the Task Force that “deeper analytical work will be required following the submission of the report.” Furthermore, DOD’s report listed the Medical Readiness criterion as a weakness for five options but did not provide supporting examples or quantitative data for this assessment. DOD defined Medical Readiness as “The alternative should maintain or enhance the ability to provide medically ready warfighters.” As support for the assessments where this criterion was listed as a weakness, DOD stated that coordination between Service Chiefs and Military Department Secretaries would be required under governance options where medical personnel were still “owned” by their service components. For governance options that included a split between a unified medical command and a military-led defense health agency, DOD stated that these structures would effectively split the readiness sustainment between the higher command and the services, thereby making the development and sustainment of the medical readiness forces more complex. However, DOD did not specifically identify the types of complexities or provide supporting examples in which such organizational issues have resulted in a negative impact on medical readiness.
In July 2010, we reported that the services’ collaborative planning efforts regarding requirements determination for medical personnel working in fixed military treatment facilities have been limited, and recommended that DOD develop and implement cross-service medical manpower standards for common medical capabilities. DOD did not address how the potential governance structures they presented would affect such issues. Similarly, DOD did not discuss or provide support for how the governance structures would impact the other MHS priorities—population health, experience of care, and per capita cost— even though quantitative data that measure the performance of these priorities for the current governance structure are available. In addition, DOD’s assessment of the strengths and weaknesses was often unclear. Specifically, 10 of the 13 assessments of governance options listed at least one criterion as support for both a strength and a weakness, without coming to a conclusion as to whether the criterion was a strength or a weakness on balance. For example, in its assessment of its preferred governance option (Defense Health Agency with Service Medical Treatment Facilities), DOD listed the Enhance Interoperability criterion as both a strength and a weakness for the option without coming to a final conclusion as to the net effect of this assessment. Task Force members we met with told us that for options where the same criterion was listed as both a strength and a weakness, each Task Force member would make their own judgment as to which was a more important characteristic and vote accordingly – taking into account the perspective of their organization or service and the weighting of the criteria. 
This approach to assessing the strengths and weaknesses of the options illustrates the subjective nature of DOD’s analysis, and highlights an area where additional support, specifically quantitative data, would have improved the clarity and robustness of DOD’s conclusions about the strengths and weaknesses of the governance options. DOD weighted its criteria according to relative importance but DOD’s assessment of strengths and weaknesses did not mention two of the three criteria weighted as most important. As noted earlier, DOD assigned various weights to the seven criteria used to assess the governance options as shown in figure 6. The DOD report stated that the weighting system was developed to reflect the relative importance of the criteria. However, two of the top three criteria with the greatest assigned weight— Trained and Ready Medical Force and Quality Beneficiary Care—were not mentioned in DOD’s assessment of the strengths and weaknesses. See figure 7. DOD officials told us that these two criteria were not discussed in the report because the Task Force members agreed that each governance option presented in the report would meet both of these criteria. However, DOD did not provide an explanation or justification as to how each governance option would satisfy the two criteria in question. The officials added that there was a general understanding among the Task Force members that no option that adversely affected these two criteria would be recommended. However, five of DOD’s options presented medical readiness of the active duty force, a related and similarly important concept, as a weakness. 
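The consequence of leaving heavily weighted criteria out of the assessment can be seen in a simple weighted-scoring sketch. The criterion names below are drawn from DOD’s report, but the weights are hypothetical illustrations; DOD’s actual weights appear in its figure 6:

```python
# Hypothetical weighted-scoring sketch. Criterion names come from DOD's
# report; the weights below are illustrative, not DOD's actual figures.

weights = {
    "Trained and Ready Medical Force": 0.25,  # heavily weighted, not assessed
    "Quality Beneficiary Care": 0.20,         # heavily weighted, not assessed
    "Achieve Significant Cost Savings": 0.20,
    "Ease of Implementation": 0.15,
    "Enhance Interoperability": 0.10,
    "Medical Readiness": 0.10,
}

NOT_ASSESSED = {"Trained and Ready Medical Force", "Quality Beneficiary Care"}

# Share of the total weight that never enters the comparison when the two
# most heavily weighted criteria are dropped from the strengths-and-
# weaknesses assessment.
omitted = sum(w for c, w in weights.items() if c in NOT_ASSESSED)
print(f"Share of total weight omitted: {omitted:.0%}")
```

Under these illustrative weights, nearly half of the total weight would never differentiate the options, which is why omitting the top-weighted criteria undermines the comparison even if every option is asserted to satisfy them.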
Because DOD’s report does not discuss the criteria they identified as the most important for their assessment of the strengths and weaknesses—including providing support for why each option equally satisfied the Trained and Ready Medical Force and Quality Beneficiary Care criteria—DOD and Congress lack assurance that these criteria were sufficiently considered in DOD’s assessment. As a result, decision makers may not have well-supported, data-driven information about the strengths and weaknesses of the potential MHS governance options. Transforming the governance structure of the MHS represents a potential opportunity to implement more efficient ways of doing business while maintaining a ready and trained medical force as well as continuing to meet the needs of military personnel, retirees, and their dependents. Reliable and comprehensive information, including implementation and other associated costs, is needed to provide a data-driven rationale for why DOD may be undertaking consolidation initiatives, and a clearly presented business-case or cost-benefit analysis can justify the benefits of such action. DOD has repeatedly studied options to transform its governance structure, but has relied on implementing “interim steps” or incremental changes toward an unknown final governance structure, often without the benefit of a clear understanding of the costs and benefits of its actions. DOD risks repeating this pattern without full knowledge of the costs, strengths, and weaknesses of each of the options. As DOD moves forward with its plans to transform its governance structure, it is imperative that officials benefit from full and complete information to be assured that they choose the best alternative and that their efforts yield necessary improvements and achieve maximum efficiencies. 
To provide decision makers with more complete information on the total cost impact of the various governance structures to help determine the best way forward, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to: (1) develop a comprehensive cost analysis for the MHS governance structures, including estimates of implementation costs and cost savings in additional areas such as health care operations and infrastructure changes, as well as an improved estimate of personnel savings; (2) develop a business case analysis and strategy for the implementation of its shared services concept; and (3) improve its evaluation of the potential governance structures’ strengths and weaknesses by including quantitative data when available and a specific assessment of the degree to which the options meet the criteria Trained and Ready Medical Force and Quality Beneficiary Care. In written comments provided in response to our draft report, DOD concurred with one of our recommendations and did not concur with the remaining two recommendations. DOD’s written comments are reprinted in appendix III of this report. In concurring with our recommendation that DOD develop a business case analysis and a strategy for the implementation of its shared services concept, DOD agreed, but on the premise that this effort can and should occur in the context of its ongoing implementation planning effort for the creation of a defense health agency. DOD reiterated that all of the potential governance options under consideration include a shared services concept and noted that as part of its implementation planning, it will ascertain which shared services, functions, and activities will be consolidated. Additionally, DOD stated it will produce a detailed implementation timeline for the transfer of each such service to the defense health agency.
We agree this effort to identify which services will be consolidated and to develop a timeline for the migration of these services is important. DOD states that shared services and common business practices will realize savings, but we are concerned that DOD is moving forward in implementing its shared services concept without knowledge of implementation costs or an estimated return on its investment. Further, with respect to DOD’s governance decisions on the multi-service markets and the National Capital Region, DOD states that our recommendations for additional analysis did not apply to these reforms. We did not include these specific areas in our recommendation because the potential governance options for the multi-service markets and the National Capital Region were outside the scope of our mandated review. As a result, we did not address the extent to which DOD’s reform plans for the multi-service markets and the National Capital Region may or may not require additional analysis. However, several senior DOD officials noted during the course of our review that while they recognize efficiencies could be gained in multi-service markets, they have reservations concerning the details of DOD’s plans for reforming such markets. In its non-concurrence with our recommendation that DOD develop a comprehensive cost analysis for the MHS governance structures, including (1) estimates of implementation costs, (2) cost savings in additional areas such as health care operations and infrastructure changes, and (3) an improved estimate of personnel savings, DOD noted that it recognizes that a more detailed and comprehensive cost analysis of governance options could be undertaken. However, DOD states that further cost analysis will not help to materially distinguish among the options. We disagree with DOD and believe that a more comprehensive cost analysis will help to distinguish the differences among the costs and benefits of the options.
First, DOD did not estimate implementation costs for any of its 13 governance options. As we reported, significant implementation costs are a key element of a comprehensive cost analysis, as illustrated by the Base Realignment and Closure (BRAC) process, in which DOD's one-time implementation costs increased 53 percent over the BRAC Commission's original estimate. Second, we continue to believe that further analysis of cost savings areas beyond personnel cost savings, such as health care operations or reduced infrastructure, could help DOD to materially distinguish among the governance options. Third, DOD's estimate of personnel cost savings used several potentially flawed assumptions, and as a result, we determined DOD's estimate to be unreliable. In its decision to move forward with implementation of the defense health agency, DOD not only lacks estimates of implementation costs and of cost savings in areas such as health care operations and reduced infrastructure, but also lacks reliable estimates of personnel cost savings. DOD also stated in its non-concurrence that its decision to effect incremental change through the implementation of a defense health agency enjoys the consensus of the most senior military leaders. However, the decision for this option is based on incomplete and potentially flawed data. Absent such information, we continue to believe that DOD lacks a sound basis upon which to make its decision about the future of MHS governance. 
In its non-concurrence with our recommendation that DOD include quantitative data, as available, in its assessment of the strengths and weaknesses of the potential overall governance structures and conduct a specific assessment of the degree to which the options meet the criteria of Trained and Ready Medical Force and Quality Beneficiary Care, DOD stated that the work of the MHS governance task force provided DOD senior leaders with sufficient information to make decisions among near-term medical governance reform options. DOD noted that these decisions were based on a variety of criteria, many of which are inherently qualitative in nature and, in DOD's view, would not significantly benefit from the sort of quantitative data we recommended. However, DOD's own criteria for the assessment of the potential governance structures called for the inclusion of quantitative information, as we reported. Further, DOD's response to our draft report did not specifically address the portion of our recommendation calling for a specific assessment of the degree to which the potential governance options meet the criteria of Trained and Ready Medical Force and Quality Beneficiary Care, two of the three most important criteria according to DOD task force members. Without an assessment against these criteria, coupled with the lack of quantitative data, it remains unclear how DOD senior leaders have sufficient information to make decisions regarding near- and long-term medical governance reform options. Therefore, we believe our recommendation that DOD improve its evaluation of the potential governance structures' strengths and weaknesses by including quantitative data in its assessment and by determining the impact on a trained and ready medical force and the quality of beneficiary care remains valid. 
In its comments, DOD noted that it is committed to the MHS governance changes agreed to by the Department's leadership in 2012 and presented in its report in response to Section 716 of the National Defense Authorization Act for Fiscal Year 2012. As we noted, Section 716 required DOD to submit a report to the congressional defense committees that would include, among other things, a description of the alternative MHS governance options developed and considered by the Task Force; an analysis of the strengths and weaknesses of each option; and an estimate of the cost savings, if any, to be achieved by each option. DOD stated that undertaking the additional evaluation we recommended would be not only time-consuming but also inherently speculative and imprecise, and that additional analysis would not alter its conclusion about which governance reforms to pursue in the near term. We are not suggesting that DOD forgo the benefits of certain desirable, near-term reforms, such as the development of a business case analysis for its shared services followed by its implementation, and we recognize that implementing these near-term reforms can provide some insight into the potential benefits of further transformation efforts. As we noted in our report, under the current governance structure, the Assistant Secretary of Defense (Health Affairs) has the broad authority that could allow for the implementation of shared support services across the MHS, and DOD has had an opportunity to develop a supporting business case analysis since this concept was first proposed in 2006. However, given the complex and costly nature of the MHS, we continue to believe that changes to its overall governance should be well thought out and analyzed to ensure that there are significant, measurable benefits before being implemented. 
In addition, improving the evaluation of potential governance options by considering critical information, such as the cost of DOD's reforms, their possible cost savings, and a thorough discussion of the options' strengths and weaknesses, would benefit DOD's decision-making process. DOD has repeatedly studied options to transform its governance structure, but has relied on implementing "interim steps" or incremental changes toward an unknown final governance structure, often without the benefit of a clear understanding of the costs and benefits of its actions. Prior attempts to proceed with MHS reorganization without such information demonstrate the effects of this approach. In 2007, DOD did not conduct a comprehensive cost-benefit analysis, including an analysis of benefits, costs, and risks, before proceeding with its preferred medical governance concept at that time, which consisted of seven different incremental governance initiatives. At the time, DOD concurred with our recommendation that it develop such an analysis, but we reported in 2011 that it had not done so. We reiterate that DOD risks repeating this pattern if it does not develop full knowledge of the costs, strengths, and weaknesses of each of the options under consideration. DOD noted that it is currently planning for the implementation of its governance reforms, and that it expects the defense health agency to reach an initial operating capability by 2013, with full operating capability within 2 years. We will continue to monitor DOD's efforts to reform MHS governance. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Deputy Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Assistant Secretary of Defense (Health Affairs), the Surgeon General of the Air Force, the Surgeon General of the Army, and the Surgeon General of the Navy. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine how the Department of Defense's (DOD) preferred governance option, and the other options presented in its report, change the current structure of the Military Health System (MHS), we first obtained documentation describing key changes in the MHS governance structure since 1991 by reviewing relevant DOD directives and legislation and by interviewing knowledgeable DOD officials. Using this as the basis for what constitutes the current MHS governance system, we then reviewed DOD's description of each of the proposed changes to the current governance structure in the Task Force report. Upon review of all of the historical as well as proposed changes to the MHS governance structure, we identified three common governance elements among them: overall control of policy-making authority, budgetary authority, and control of medical personnel. As a guide for developing our comparison of the changes, we used these elements to describe the differences among the various governance options and the key changes that have shaped the current structure. We defined these three governance elements to encompass the following activities: Overall Control of Policy-Making Authority: Who controls the overall MHS? Who heads its various entities? Who reports to the Secretary of Defense? What is the command and control structure? Who establishes MHS policy for the Office of the Secretary of Defense, the Services, and joint entities? What are the roles of senior leaders? Budgetary Authority: Who controls the Defense Health Program appropriation? 
Control of Personnel: Who manages and supervises the Military Treatment Facilities and multi-service medical markets? Who controls the MHS mission and administrative support personnel among the Office of the Secretary of Defense, the Military Departments, and/or joint entities? To review DOD's shared services concept, we reviewed the information presented in DOD's report and interviewed DOD officials concerning their analysis of this concept. We compared this information to our prior work on business case analyses in the context of management consolidations, and leveraged our prior work on efforts by DOD and other federal agencies to establish shared services to provide context for DOD's current efforts (GAO-07-536 and GAO-12-224). To assess the reliability of DOD's cost savings estimate, we interviewed officials concerning their estimating methods and reviewed supporting documentation, noting where we identified shortcomings in the Task Force's approach. We were unable to rely on DOD's cost savings estimates because the estimates and their supporting data lacked the key data elements needed to develop them completely and accurately, as discussed in the findings section of this report. To determine the extent to which DOD's assessment of the strengths and weaknesses of its potential governance options is well-supported and data-driven, we obtained and analyzed Task Force documents, including meeting minutes, briefing slides, and voting templates. We then used this analysis to determine the criteria and process used to formulate the strengths and weaknesses of the options. We then assembled a list of the 78 strengths and weaknesses cited in the Task Force report and used a semi-structured interview process to collect information from Task Force officials regarding the process and inputs used to formulate each assessment. 
We then conducted a content analysis of the information provided by the officials to identify and categorize the inputs that the officials cited as contributing to the assessments of strengths and weaknesses. The categorization of the information was conducted by one analyst and confirmed by a second analyst to ensure the analysis was adequately supported by the evidence. In addition, we interviewed officials from one multi-service market and a health administration expert to obtain their opinions on the process used by DOD to formulate the strengths and weaknesses. For each of our objectives, we limited our review to the potential overall governance structures that the Task Force presented in its report. We did not specifically review the proposed changes to DOD's multi-service medical markets or to the governance structure in place within the National Capital Region as presented in the Task Force report because we determined that these proposed changes were outside the scope of our mandate. We conducted this performance audit from March 2012 through September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. DOD's Task Force report provided the following detailed descriptions, cost savings estimates, and strengths and weaknesses of the governance structures it identified as potential options for its Military Health System (MHS). As we noted earlier in our report, we found DOD's cost savings estimates to be unreliable because the estimates and their supporting data lacked the key data elements needed to develop them completely and accurately. 
As a result, using DOD’s data as presented below may lead to an incorrect or unintentional result. DOD’s preferred option would create a new defense health agency that would assume the responsibilities of the TRICARE Management Activity and additional possible areas of savings known as shared services. The new agency would be a combat support agency headed by a 3-star general or flag officer who would report to the Assistant Secretary of Defense (Health Affairs), but with oversight from the Chairman of the Joint Chiefs of Staff. The services would maintain their surgeons general, service support commands, and intermediate headquarters. These structures are in addition to the current MHS structure, which the Task Force included in the 13 options presented in the report. DOD’s Estimate of Projected Net Savings: $46.5 million per year. Lines of Authority: The services control the hospital and deployed health care; eliminates the Assistant Secretary of Defense role as the Director of the TRICARE Management Activity. Enhance Interoperability: The defense health agency would be focused on the shared and consolidated services. Ease of Implementation: This would require minimal change to the current service organizational structure. Enhance Interoperability: Eliminates the Joint Hospitals in the National Capital Region as well as San Antonio. Ease of Implementation: This option would require the Joint Task Force National Capital Region Medical to transition to a different structure. The services’ cultures could limit the implementation of common services and processes. Similar to the previous option, this structure would create a defense health agency combat support agency led by a 3-star general or flag officer, but would place Military Treatment Facilities under the authority, direction, and control of the agency. 
Military personnel not assigned to a deployable unit would be under the direction of the defense health agency, but the services would continue to own their personnel, and all civilian personnel would be under the direction of the agency. DOD's Estimate of Projected Net Savings: $87.4 million per year. DOD's Assessment of Strengths Dispute Resolution/Lines of Authority/Accountability: Management of all medical treatment facilities would be under one authority (the Director, Defense Health Agency); the Defense Health Agency Director would report directly to the Assistant Secretary of Defense (Health Affairs). Achieve Significant Cost Savings Through Reduction in Duplication and Variation: The defense health agency would be focused on the most common theme emphasized by the Task Force: an organizational model that would accelerate implementation of shared services models that identify and proliferate best practices and consider entirely new approaches to delivering shared activities. Further, placement of medical treatment facilities under the defense health agency would allow for even more rapid implementation of unified clinical and business systems, which could create significant savings. Other: Would align management of purchased care (TRICARE) and direct care (Medical Treatment Facilities) under one entity, creating potential for greater coordination and cost-effective distribution of resources between the two sources of care. DOD's Assessment of Weaknesses Dispute Resolution/Lines of Authority/Accountability: This model may elevate management disputes to the highest levels of DOD, as local line command disputes with the defense health agency command structure may need to be adjudicated at the Secretary of the Military Department/Assistant Secretary of Defense (Health Affairs) level. Medical Readiness: Concerns were expressed that an organization this large with this many authorities could jeopardize service priorities. A comprehensive defense health agency could reduce command and leadership development opportunities. Ease of Implementation: Moving all medical treatment facilities to the defense health agency would be a major reorganization. Other: Could mix the defense health agency mission between support of MHS-wide functions and direct operation of hospitals and clinics. The Military Departments' representatives on the Task Force believed that operation of the direct care system is a Military Department responsibility. This option would create a Defense Health Agency to exercise authority, direction, and control over the Military Treatment Facilities. However, service intermediate headquarters would be replaced by a single defense health agency-run organization. DOD's Estimate of Projected Net Savings: $21.4 million per year. DOD's Assessment of Strengths Lines of Authority: This organizational construct would have clear lines of authority, and there would be central control of the Military Treatment Facilities. Enhance Interoperability: This option would allow for single processes for key functions. DOD's Assessment of Weaknesses Dispute Resolution: Management disputes could be elevated to the highest levels. Ease of Implementation: This option would be more of a "civilianized" model, which may be difficult to implement in the current military structure. It may also reduce command leadership opportunities and professional growth. A unified medical command with service components would create a tenth combatant command led by a 4-star general or flag officer, with forces supplied by service components. Service intermediate headquarters would manage the Military Treatment Facilities, but personnel not assigned to deployable units would be assigned to the unified medical command. A Joint Health Support Command would manage the TRICARE health plan and shared services. DOD's Estimate of Projected Net Cost: $203.6 million per year. DOD's Assessment of Strengths Dispute Resolution/Lines of Authority/Accountability: Clear lines of authority would be established. 
Achieve Significant Cost Savings Through Reduction in Duplication and Variation: There would be central control of common business and clinical processes, and implementation would be achieved more readily with command and control throughout the medical structure to ensure compliance. Ease of Implementation: Joint Task Force National Capital Region Medical, if retained in its current form, could be addressed as a region directly reporting to the Commander, U.S. Medical Command. DOD's Assessment of Weaknesses Dispute Resolution/Lines of Authority/Accountability: The current structure of civilian authority over components of the MHS (the Assistant Secretary of Defense (Health Affairs) and the Military Department Secretaries) would not be maintained; the first civilian official in the authority chain would be the Secretary of Defense. Achieve Significant Cost Savings Through Reduction in Duplication and Variation: In any unified medical command structure that maintains service components (the common model for all unified commands), the overall management headquarters overhead would increase above the "As Is" and all other organizational models. Ease of Implementation: This action would represent a significant departure in governance for all existing organizations (Assistant Secretary of Defense (Health Affairs), TRICARE Management Activity, Military Department Secretaries, Military Service Chiefs, Service Medical Departments). For the Air Force, this includes creating a medical component command for operation of Air Force medical treatment facilities; the Navy would need to redesign how garrison billets are mapped to operational requirements. This structure would create a tenth combatant command for medical care. However, the unified medical command commander would exercise control over personnel and the Military Treatment Facilities. Service intermediate headquarters would be replaced by a single organization. DOD's Estimate of Projected Net Cost: $152.3 million per year. 
DOD’s Assessment of Strengths Dispute Resolutions and Lines of Authority: This organizational structure would have clear lines of authority and there would be central control of the Military Treatment Facilities. The shared services (i.e. education and training, research and development, health information technology, logistics) would be centrally managed. The TRICARE Regional Offices would be aligned with the Military Treatment Facilities in the same chain of command. Enhance Interoperability: This option would focus the development of common business processes. Ease of Implementation: The Joint Table of Distributions would eliminate any multi-service market issues because the unified medical command would control the multi-service markets. Achieve Significant Cost Savings Through Reduction in Duplication and Variation: Reduction in overhead personnel would be relative to the current MHS structure. Services would focus on deployable forces with the unified medical command as the platform for medical professional force development and benefit delivery. DOD’s Assessment of Weaknesses Lines of Authority: This would be a major change for the Service Surgeons General. Enhance Interoperability: Some required service assets would not be under service control — sourcing would be from the unified medical command. Ease of Implementation: This would be a massive change for the way the DOD does business. Hospital based and wartime medical forces would be split. An alternative is to embed deployable wartime forces in a Joint Table of Distribution in the unified medical command. Achieve Significant Cost Savings through Reduction in Duplication and Variation: The Command may be focused on effectiveness over costs. Unified Medical Command, HR 1540This option would create a tenth combatant command for medical care with forces supplied by service components. 
Subordinate service commands would manage the Military Treatment Facilities, but within the framework of a Healthcare Command led by a 3-star general or flag officer to manage the service components. DOD's Estimate of Projected Net Cost: $238.8 million per year. DOD's Assessment of Strengths Dispute Resolution/Lines of Authority/Accountability: Clear lines of authority would be established, as well as central management of shared services (i.e., education and training, research and development, health information technology, and logistics). Military Treatment Facilities would be centrally controlled. Enhance Interoperability: Allows for Joint Task Force National Capital Region Medical to be easily inserted into this structure as a regional or sub-regional command. Common business processes would be implemented across the Military Treatment Facilities. Ease of Implementation: The service component execution would minimize organizational change. DOD's Assessment of Weaknesses Achieve Significant Cost Savings Through Reduction in Duplication and Variation: The command would likely be focused more on effectiveness than on costs. Dispute Resolution/Lines of Authority/Accountability: Some required service assets would not be under service control. There would be civilian oversight for budget located at the Secretary of Defense level, which would bypass the Office of the Secretary of Defense Principal Staff Assistant. Enhance Interoperability: Hospital-based and unit-based medical forces would be split. Ease of Implementation: This would require all three services to significantly change, with the biggest impact on the Air Force. Dual-hatted surgeons general could face perception issues from the home service and the unified medical command. A single military service Secretary would be assigned all headquarters management functions, such as management of the TRICARE health plan and shared services. 
The designated service also would control a Defense Healthcare System agency that would include the service component commands, which in turn would command the Military Treatment Facilities. DOD's Estimate of Projected Net Savings: $94.4 million per year. DOD's Assessment of Strengths Dispute Resolution/Lines of Authority/Accountability: Clear lines of authority would be established, as well as central control of the Military Treatment Facilities and multi-service markets. Service readiness assets would be under service control. Achieve Significant Cost Savings Through Reduction in Duplication and Variation: There would be single processes for key functions. DOD's Assessment of Weaknesses Dispute Resolution/Lines of Authority/Accountability: This option would create a need for coordination of issues between the service Secretaries. Enhance Interoperability: This would split the warrior and beneficiary care systems. Under this structure, a single military service Secretary would be assigned all headquarters management functions, such as management of the TRICARE health plan and shared services. In addition, the designated service would command all of the Military Treatment Facilities, while all services would remain responsible for providing personnel. DOD's Assessment of Strengths Dispute Resolution/Lines of Authority/Accountability: Clear lines of authority and a chain of command from the Secretary through the Military Treatment Facilities commander would be established. Achieve Significant Cost Savings Through Reduction in Duplication and Variation: With shared services, there would be one set of business and clinical processes, and implementation would be achieved more readily with command and control in a single service. It also could eliminate the issues that arise with multi-service markets. This option would create the most significant savings in headquarters overhead of any organizational option. 
DOD's Assessment of Weaknesses Medical Readiness: With medical personnel still "owned" by their service components, a requirement for coordination between Service Chiefs and Military Department Secretaries on readiness and personnel issues would remain. Ease of Implementation: There is no known precedent or example where this approach has been tested in other military medical organizations worldwide. The Navy/U.S. Marine Corps medical support model does not have the mission for all of DOD; however, it is representative of how a single service model could work. Additionally, this option would entail a large-scale reorganization, to include re-mapping of service medical personnel to operational platforms. Dispute Resolution/Lines of Authority/Accountability: Issues would be adjudicated at a higher level (Military Department Secretary). This option would create a tenth combatant command led by a 4-star general or flag officer, with forces supplied by service components, and service commands charged with management of the Military Treatment Facilities. However, shared services would be split, with the unified medical command in charge of readiness-focused areas and a defense health agency charged with beneficiary health care and clinical quality. DOD's Estimate of Projected Net Cost: $225.3 million per year. DOD's Assessment of Strengths Dispute Resolution/Lines of Authority/Accountability: This option would align command and control of forces under a military chain of command. It would also align the Assistant Secretary of Defense (Health Affairs) role to policy and oversight, with execution delegated to the unified medical command commander and the defense health agency director. Ease of Implementation: This option would maintain service structures as component commands in the unified medical command. It would also support the Joint Task Force National Capital Region Medical structure. Medical Readiness: Service readiness functions would be located in the unified medical command. 
DOD's Assessment of Weaknesses Dispute Resolution/Lines of Authority/Accountability: The unified medical command commander would report directly to the Secretary of Defense. It could be difficult to adjudicate disagreements between the unified medical command and the defense health agency at the Deputy Secretary of Defense level. Achieve Significant Cost Savings: The execution of shared services and common processes would require unified medical command and defense health agency agreement. Similar to the above, this option would pair a tenth combatant command with a defense health agency, with shared services divided between the two organizations. However, the defense health agency, through Regional Directors rather than service components, would manage the Military Treatment Facilities. DOD's Estimate of Projected Net Cost: $238.8 million per year. DOD's Assessment of Strengths Dispute Resolution/Lines of Authority/Accountability: This option would align command and control of forces under a military chain of command. It would also align the role of the Assistant Secretary of Defense (Health Affairs) to policy and oversight, with execution delegated to the unified medical command commander and defense health agency director. Medical Readiness: Service readiness functions would be located in the unified medical command. DOD's Assessment of Weaknesses Achieve Significant Cost Savings: The execution of shared services and common processes would require unified medical command and combatant command agreement. Dispute Resolution/Lines of Authority/Accountability: The unified medical command commander would report directly to the Secretary of Defense. It could be difficult to adjudicate disagreements between the unified medical command and the defense health agency at the Deputy Secretary of Defense level. Similar to the above two options, this option would pair a tenth combatant command with another organization, a Defense Healthcare System, in charge of all of the Military Treatment Facilities managed by one military service. 
Shared services also would be divided between the two organizations. DOD's Estimate of Projected Net Cost: $238.8 million per year. DOD's Assessment of Strengths Dispute Resolution/Lines of Authority/Accountability: This option would establish clear lines of authority for administrative, operational, and tactical control of forces, with each being vested in a different structure. It would also create central control of the Military Treatment Facilities. Ease of Implementation: In this option, the multi-service markets would be addressed and joint facilities would be maintained. Enhance Interoperability: This option would allow for single processes for key functions. DOD's Assessment of Weaknesses Medical Readiness: This would split the warrior care and beneficiary care systems. Dispute Resolution/Lines of Authority/Accountability: This option would create different responsible agents for administrative, operational, and tactical control of forces. The defense health agency would be led by a 3-star general or flag officer who would report directly to either a Service Secretary, the Assistant Secretary of Defense (Health Affairs), or a combatant commander. The agency would control the TRICARE health plan. Additionally, a Medical Operations Support Command would be created to control education and training, research and development, and public health. Finally, the individual military departments would continue to manage the Military Treatment Facilities, albeit through Service-designated regional enhanced multi-service market offices instead of their current medical commands. DOD's Estimate of Projected Cost/Savings: None presented. DOD's Assessment of Strengths: None provided. DOD's Assessment of Weaknesses: None provided. 
In addition to the contact named above, Lori Atkinson, Assistant Director; Edward Anderson, Jr., Rebecca Beale, Grace Coleman, Foster Kerrison, Charles Perdue, Carol Petersen, Terry Richardson, Adam Smith, and Karen Nicole Willems made key contributions to this report. GAO, Defense Health Care: Applying Key Management Practices Should Help Achieve Efficiencies within the Military Health System, GAO-12-224 (Washington, D.C.: Apr. 12, 2012). GAO, 2012 Annual Report: Opportunities to Reduce Duplication, Overlap, and Fragmentation, Achieve Savings, and Enhance Revenue, GAO-12-342SP (Washington, D.C.: Feb. 28, 2012). GAO, Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue, GAO-12-453SP (Washington, D.C.: Feb. 28, 2012). GAO, Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue, GAO-11-318SP (Washington, D.C.: Mar. 1, 2011). GAO, Defense Health Care: DOD Needs to Address the Expected Benefits, Costs, and Risks for Its Newly Approved Medical Command Structure, GAO-08-122 (Washington, D.C.: Oct. 12, 2007).
Over the past decade, the cost of the MHS has grown substantially and, according to the Congressional Budget Office, is projected to reach nearly $95 billion by 2030. As health care costs consume an increasingly large portion of the defense budget, current DOD leadership and Congress have recognized the need to better control these costs. Section 716 of the National Defense Authorization Act for Fiscal Year 2012 required DOD to submit a report analyzing potential MHS governance options under consideration, and also required GAO to submit an analysis of these options. In response to this mandate, GAO determined the extent to which DOD's assessment provides complete information on cost implications and the strengths and weaknesses of potential MHS governance options. To conduct this review, GAO analyzed DOD's governance report along with supporting documents, and interviewed Task Force members. The Department of Defense's (DOD) assessment of potential governance options for its Military Health System (MHS) did not provide complete information on the options' total cost impact and their strengths and weaknesses. As part of its assessment, DOD identified 13 potential governance options for the MHS and included a limited analysis of the options' estimated cost savings and their strengths and weaknesses. All of the options would create a shared services concept to consolidate common services, such as medical logistics, acquisition, and facility planning, under the control of a single entity. DOD selected an option that would create a defense health agency to, among other things, assume responsibility for creating and managing shared services, and leave the longstanding military chain of command intact, with the services in control of the military hospitals.
The National Defense Authorization Act (Act) for Fiscal Year 2012 required DOD to submit a report to congressional committees that would, among other things, estimate the cost savings and analyze the strengths and weaknesses of each option. Using key principles derived from federal guidance, including cost estimating and economic analysis documents, GAO determined that DOD could have provided more information on cost implications and strengths and weaknesses in its report to Congress. Specifically, DOD did not (1) estimate implementation costs and comprehensive cost savings; (2) include a business case to support consolidating common services; or (3) include supporting quantitative data in its analysis of the options' strengths and weaknesses. DOD's cost analysis for its potential MHS governance options was limited in that it did not include implementation costs and estimated only personnel cost savings, based on some potentially flawed assumptions, such as not using representative salaries to estimate personnel savings. DOD did not develop a business case analysis and an implementation strategy for its proposed shared services concept. A business case analysis would, among other things, define the services to be consolidated, the costs to implement them, and the efficiencies to be achieved, and could support DOD's assertion that implementing shared services could achieve efficiencies. DOD has approved a shared services concept two other times since 2006, but it has yet to develop a business case analysis that would provide a data-driven rationale for implementing the concept. DOD used a qualitative process with input from internal experts to assess the strengths and weaknesses of the potential governance structures. However, it did not balance this support with quantitative data, as its criteria for assessing the strengths and weaknesses specified.
DOD officials stated that they did not provide comprehensive cost estimates or quantitative analysis of the options because an internal 90-day deadline to report back to the Deputy Secretary of Defense did not allow enough time. However, the Act requiring DOD to report to Congress was enacted subsequent to DOD's own internal assessment and did not establish a specific deadline. As a result, DOD could have taken time to conduct a more comprehensive analysis before submitting its report. GAO recommends that DOD develop (1) a comprehensive cost analysis for its potential MHS governance options, (2) a business case analysis and strategy for implementing its shared services concept, and (3) more complete analyses of the options' strengths and weaknesses. DOD concurred with developing a business case analysis for its shared services concept. DOD did not concur with the other two recommendations, stating that further analysis would not alter its conclusions. GAO disagrees and believes that more comprehensive analysis will help to distinguish the differences among the costs and benefits of the options.
SCSEP evolved from Operation Mainstream, which trained and employed chronically unemployed adults under the Economic Opportunity Act of 1964. In 1965, Operation Mainstream provided funding to the Green Thumb organization, at the time a nonprofit affiliate of the National Farmers Union, to conduct a pilot training and employment program for economically disadvantaged older workers in several rural areas. Green Thumb was thus the first of the 10 nonprofit national sponsors that today administer most of the SCSEP funds. During the next 13 years (1965-1978), legislative and administrative actions instituted most of the basic aspects of today’s SCSEP: responsibility for the program was moved to the Department of Labor; the program was made part of the OAA and given the goal of providing subsidized employment in community service organizations to economically disadvantaged older Americans; all grantees were asked to attempt to place at least 10 percent of their program enrollees in unsubsidized jobs (the goal has been 20 percent since 1985); and 8 of the eventual 10 national sponsors, as well as most state governments, were made grantees for the program. Of the current 10 national sponsors, 5 were added because of OAA amendments and other congressional guidance to Labor, which directed that Labor add sponsors oriented toward certain ethnic groups with high concentrations of the elderly poor. Such direction explains Labor’s funding, as national sponsors, of two African American organizations (1978) and three other organizations: one representing Hispanic Americans (1978); one, American Indians (1989); and one, Asian Americans (1989). The legislation, however, requires all sponsors to provide all SCSEP applicants an equal opportunity to participate in the program regardless of race or nationality. The OAA contains several provisions for Labor’s allocation of SCSEP funds.
The hold harmless provision requires the Secretary of Labor to reserve for the national sponsors a funding amount sufficient to maintain the 1978 activity level. Any balance of the appropriation over the hold harmless amount is to be distributed to the sponsors and state governments mainly on an “equitable distribution” basis—that is, in accordance with the state-by-state distribution of persons 55 years old or older, adjusted for per capita income. A minor limitation on such a distribution is the requirement for a minimum allocation for each state, a provision designed to protect the smaller states. Another provision requires that the portion of any appropriation that exceeds the 1978 funding level in subsequent years will be split—55 percent for states and 45 percent for the national sponsors. However, the “55/45” provision—designed to provide state governments more parity with the national sponsors—has never been implemented. Every year since 1978, appropriations acts have overridden the 55/45 provision. These statutes have required that no more than 22 percent of the SCSEP appropriation be allocated to the state governments. At least 78 percent must be allocated to the national sponsors. A third provision that also still applies is the requirement for an equitable distribution of funds among areas within each state. The SCSEP appropriation for the 1994 program year ($410.5 million) accounted for about 28 percent of all OAA funds. All but two of the OAA programs are administered by the Department of Health and Human Services. Labor administers SCSEP through its Employment and Training Administration (ETA). Like other OAA programs, SCSEP’s authorization expired at the end of fiscal year 1995. The Congress is reviewing proposals for reauthorization. To receive a SCSEP grant, a national sponsor or state government must agree to provide a match, in cash or in kind, equal to at least 10 percent of the grant award.
Many state governments make their match in the form of cash contributions. The national sponsors, on the other hand, normally provide in-kind matches in the form of donated office space, staff time, equipment, and the like. The in-kind matches for most national sponsors come not from the sponsors’ own resources but from those of the community service host agencies, where the SCSEP enrollees actually work. These host agencies typically are local libraries, nutrition centers, parks, and similar public service entities. National sponsors and state governments use the SCSEP grants to finance SCSEP part-time jobs in host agencies. The cost of such a job, or enrollee position—which generally must include at least 20 hours of work a week—is the amount determined sufficient to fund (1) an enrollee’s minimum wages, benefits, training, and incidental expenses for up to 1,300 hours a year in the program and (2) the associated administrative expenses. This cost amount, termed the “unit cost” by Labor, is adjusted periodically by Labor in consultation with the Office of Management and Budget (OMB). The unit cost is currently $6,061. Labor divides each year’s SCSEP appropriation by the unit cost amount to determine how many positions are available. Program enrollees, who must be 55 or older and earn no more than 125 percent of the federal poverty level, are paid the federal or local minimum wage—whichever is higher. For the 1994 program year, funding permitted the establishment of about 65,000 positions nationwide. An enrollee may leave a program position for such reasons as illness or acceptance of an unsubsidized job. Thus, during the 1994 program year, about 100,000 enrollees occupied the 65,000 positions; about three-quarters of these enrollees were women. Often, in the administration of SCSEP grants, entities other than the national sponsors and state governments participate as intermediaries between the sponsors and the host agencies. 
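The unit-cost arithmetic described earlier, by which Labor converts an appropriation into a number of enrollee positions, can be sketched in a few lines. This is a minimal illustration using the figures cited in the text; the administrative set-asides and rounding conventions of Labor's actual computation are not modeled.

```python
# Sketch of Labor's position count: the annual SCSEP appropriation is
# divided by the "unit cost" of one part-time enrollee position.
# Figures are those cited in the text; set-asides and adjustments in
# Labor's actual method are not modeled.
appropriation = 410_500_000  # 1994 program year appropriation
unit_cost = 6_061            # funds one enrollee's wages, benefits, training,
                             # incidentals, and associated administration

positions = appropriation // unit_cost
print(positions)  # 67728 before adjustments; the report cites about
                  # 65,000 funded positions for program year 1994
```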
Some of these entities are municipalities; many are Area Agencies on Aging, organizations the state designates to plan and provide services to the elderly. These intermediaries sometimes enter into agreements with states and national sponsors as subgrantees to find specific host agencies for program enrollees. Of the 1994 program year appropriation, Labor allocated the national sponsors $320.2 million (78 percent) and the states and territories $90.3 million (22 percent). The 10 national sponsors that received grant awards were, as in previous years, the following: American Association of Retired Persons (AARP), Asociacion Nacional Pro Personas Mayores (ANPPM), Green Thumb, National Asian Pacific Center on Aging (NAPCA), National Caucus and Center on Black Aged (NCCBA), National Council on Aging (NCOA), National Council of Senior Citizens (NCSC), National Indian Council on Aging (NICOA), National Urban League (NUL), and U.S. Forest Service (USFS). National sponsors operate locally through (1) subgrant agreements with local organizations, such as agencies on aging or community groups, and (2) local affiliates. Appendix II provides a short profile of the SCSEP activities of each national sponsor. Whenever the SCSEP program has a new appropriation level, Labor conducts with the national sponsors a meeting known as the “melon cutting.” At these meetings, Labor makes known its allocations to each of the national sponsors and presides over discussions in which national sponsors often trade enrollee positions in various areas. Sometimes, a representative from the National Association of State Units on Aging (NASUA) is invited to express states’ concerns, but the states have no formal control over the distribution of positions. As seen in figure 1, program year 1994 grant amounts to the national sponsors varied widely: the $102.5 million Green Thumb grant was the largest, and the $5.1 million grants each to the NICOA and NAPCA were the smallest.
This variation partially reflects the differences in time that these organizations have participated in the program. With the exception of Alaska, Delaware, and Hawaii—which operate their own SCSEP programs and have no national sponsors—each state has at least two national sponsors. Fourteen states have six or more national sponsors. The District of Columbia and Puerto Rico also have SCSEP programs and national sponsors, but none of the U.S. Territories has. (See fig. 2.) As seen in figure 3, four of the sponsors operate in over half of the states; five of the sponsors operate in 16 or fewer states. Labor’s regulations allow SCSEP funds to be provided to eligible organizations through grants, contracts, or other agreements pursuant to the purposes of title V of the OAA. Department officials have chosen to fund the program through noncompetitive grants. The regulations specify that grants are the “appropriate instrument when the Department does not need to exercise considerable direction and control over the project.” Labor provides annual grant applications only to national organizations that currently sponsor SCSEP. Labor’s action is consistent with the statute and with expressions of intent by the Senate Appropriations Committee. Labor officials rely on annual Appropriations Committee report language such as the following from a recent Senate Appropriations report that seems to indicate support for the current sponsors: “It is the intent of the Committee that the current sponsors continue to build upon their past accomplishments.” In addition, the OAA, although it permits awards to other entities, creates a specific preference for awards to “national organizations and agencies of proven ability in providing employment services . . .” Labor’s procedures require that noncompetitive grants over $25,000 be included in an annual procurement plan that is forwarded for approval by the responsible Assistant Secretary to the Procurement Review Board (PRB). 
The PRB, whose members include designees of the Chief Financial Officer and the Solicitor, as well as the Director of the Division of Procurement and Grant Policy, is “to serve as a senior level clearinghouse to review proposed noncompetitive and major acquisitions.” The PRB advises whether competition is appropriate for each acquisition and whether long-term relationships with the same organizations are consistent with Labor policies. However, Labor exempts title V awards and does not involve the PRB in reviewing the program’s annual grant renewal decisions. Labor officials did not adequately explain the reason for this exemption. The hold harmless provision of the OAA’s title V, in effect, severely limits Labor’s ability to allocate funds among states in a way that ensures equitable distribution, that is, in accordance with the state-by-state distribution of persons 55 years old and older, adjusted to give greater weight to economically disadvantaged areas and persons. The result is a pattern of too many SCSEP positions in some states and too few in other states relative to their eligible populations. In addition, within states, Labor’s administrative inaction has permitted a continuing pattern of overserved and underserved counties. In applying the OAA’s hold harmless provision, Labor officials establish a reserve amount from each year’s SCSEP appropriation, delineated by state subtotals, to finance the 1978 level of national sponsor positions in each state. So, if the national sponsors together administered 100 positions in a certain state in 1978, they would receive thereafter, from a Labor set-aside of appropriated funds, enough funds to finance at least 100 positions in that state, assuming that the appropriation level is high enough to finance the 1978 total number of positions. 
Because the 1978 distribution of SCSEP positions did not, and still does not, correspond to the size of each state’s economically disadvantaged elderly population, the hold harmless provision in effect prevents a fully equitable distribution. For the 1994 program year, for example, $234.5 million of the total appropriation of $410 million was subject to the hold harmless provision and distributed accordingly. Had the $234.5 million been distributed in accordance with current age and per capita income data, every state would have received a different allocation and, in many cases, the increase or decrease would have been substantial. A total of 25 states would have gained or lost at least $500,000 each; in 13 of those states, the amount would have been over $1 million. Florida would have gained the most, $4.2 million, and New York would have lost the most, $3.9 million. (See app. III.) The hold harmless provision could be modified in two ways. The relevant provision states that the Secretary of Labor will reserve for the sponsors’ grants or contracts sums necessary to maintain at least their 1978 level of activities “under such grants or contracts.” Labor interprets this provision to require a state-by-state distribution of positions based on the sponsors’ 1978 activities. One option is to amend the hold harmless provision to specifically authorize Labor to base the distribution on the national sponsors’ 1978 total positions nationwide, rather than on the levels in each state. If the hold harmless provision were so amended, Labor would still be required to provide sufficient grants to the national sponsors to finance their 1978 number of total positions. But it would not necessarily be bound to the 1978 number of sponsor positions in any state. With the amendment, Labor could distribute all of the SCSEP dollars in accordance with the pattern of need, as measured by each state’s 55 and older population size and per capita income.
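An allocation "in accordance with current age and per capita income data" can be sketched as follows. This is a hedged illustration only: the state names and figures are hypothetical, and the inverse per-capita-income weighting is an assumed form of the OAA's adjustment, whose exact statutory formula is not given here.

```python
# Illustrative "equitable distribution" allocation: shares are proportional
# to each state's population aged 55 and older, weighted inversely by per
# capita income so that poorer states receive greater weight. The weighting
# form and all figures are assumptions for illustration only.

def equitable_shares(total_funds, states):
    """states maps name -> (population 55 and older, per capita income)."""
    weights = {name: pop / income for name, (pop, income) in states.items()}
    weight_sum = sum(weights.values())
    return {name: total_funds * w / weight_sum for name, w in weights.items()}

shares = equitable_shares(
    1_000_000,
    {
        "State A": (500_000, 20_000),  # larger and poorer: weight 25.0
        "State B": (250_000, 25_000),  # smaller and wealthier: weight 10.0
    },
)
# State A receives 25/35 of the funds; State B receives the remainder.
```

Under the hold harmless provision, any such computed share would still be floored at the amount needed to support a state's 1978 level of national sponsor positions.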
Another approach would be to repeal the entire hold harmless provision. This would remove the authorizing legislation’s protection of the national sponsors’ historic base of positions and permit Labor officials to allocate funds according to need. Such a change could significantly shift funding from the national sponsors to the states. In some states, SCSEP positions may not be distributed among areas according to the equitable distribution provision of the OAA’s title V. Though the national sponsors administer about 80 percent of the enrollee positions, both states and national sponsors are responsible for equitably distributing enrollee positions. Deficiencies in equitable distribution, however, are evident in many cases when comparing a county-by-county pattern of SCSEP positions in a state with the county-by-county pattern of state residents who are eligible for participation as SCSEP enrollees. For such a comparison, we reviewed the states’ equitable distribution reports for 1989 and 1994. For example, in California, Illinois, and New York, we found that most counties had either too many or too few positions compared with the number that the distribution of eligible people would indicate. In California, for example, for program year 1994, 51 of the 59 counties had too many or too few positions. In some cases, the excess or shortfall was five positions or fewer, but, in several cases, the amount was greater. Fourteen of the counties had excesses or shortfalls of at least 15 positions. Orange County had a shortfall of 70 positions. Humboldt and San Francisco Counties each had an excess of 32 positions. State government and national sponsor officials offer several explanations for the sponsors’ not always distributing their SCSEP positions within a state strictly according to the equitable distribution guidance. First, the national sponsors are sometimes restricted geographically.
In New York state, USFS, for example, does not enter such underserved areas as Brooklyn and the Bronx because they are urban communities and the Forest Service restricts its activities to national forests. Second, national sponsors with an ethnic focus are reluctant to serve areas that do not have significant numbers of their constituent ethnic group. Third, certain national sponsors, to save on administrative costs, may prefer to concentrate SCSEP positions in fewer locations, increasing the ratio of program enrollees to administrators. Fourth, certain national sponsors may be reluctant to shift positions from an overserved area where they have had long working relationships with subgrantees. In the case of the states, some have distributed their positions through existing administrative structures, without sufficiently considering the distribution of eligible people. Also, some states may have tried to achieve an equitable distribution among political jurisdictions rather than among eligible populations. Finally, some states have not adequately staffed their SCSEP program efforts or were not sufficiently active in coordinating distribution activities with national sponsors. In most states, the state government as well as several national sponsors operate SCSEP programs. Thirty-six states have four or more sponsors, and 14 states have six or more. In our talks with officials in 28 state governments, several expressed concern about duplicative national sponsor programs in certain areas, some of which also overlapped state government SCSEP programs. For example, in a northeastern state where eight national sponsors had been operating, a ninth sponsor was allowed to begin a SCSEP project in an area that, according to state officials, was already overserved. In addition, the state officials said, some national sponsors in the area were already using television spots to attract people to the program.
In a southern state, state officials could not dissuade two national sponsors from operating in a city’s downtown area already served by the state’s SCSEP office. National—and some state—sponsors defend their remaining in overserved locations, citing many reasons for being in the communities where they are. However, Labor officials acknowledge that one consequence of several grantees operating in the same area is that program enrollees in proximity may receive different wages and benefits depending on the policies of the grantee organization. In a mid-Atlantic state, for example, the state unit on aging administers its own SCSEP positions as well as those of a national sponsor. The program in which an enrollee is placed—whether state or nationally sponsored—and, consequently, the benefits package the enrollee receives can depend on the time of day the enrollee applied for the program. For example, a morning applicant might be placed in the state program with a benefits package including federal holidays, sick leave, and annual leave benefits; the afternoon applicant might be placed in a national sponsor’s program with a different benefits package. Labor endorses an unwritten agreement among national sponsors that is intended to prevent enrollees from different sponsors from working at the same local host agency. The agreement is to help avoid situations in which host agencies or sponsors must explain why enrollees performing the same job tasks are compensated with different benefits and, perhaps, even wages. The drawback of this agreement, however, is that an applicant may be denied access to a particular host agency that could provide the best job and training experience for that person. Labor requires states and the national sponsors to ensure efficient and effective coordination of programs under this title. One goal of this coordination is to promote an equitable distribution of in-state funds.
National sponsors are required to notify relevant state government officials of their plans to establish projects; state officials are to review and comment on such plans; and Labor is to review proposed project relocations and the distribution of projects within states. As part of its overview authority, Labor also has required states to compile annual distribution reports showing which of their counties are overserved or underserved, according to the size of their eligible populations. Most importantly, Labor is to make—limited by the OAA’s hold harmless and minimum funding provisions of title V—an equitable distribution of funds among and within states. It appears that Labor has taken few actions to more equitably distribute national sponsor activities within the states. The 1994 problems of underserved and overserved counties in California, Illinois, and New York were essentially the same ones that those states experienced 5 years earlier, in 1989. Labor officials acknowledge that they stop short of forcing the national sponsors to reallocate their positions, preferring instead to encourage sponsors to shift positions to underserved areas when enrollees vacate positions in overserved areas. State officials repeatedly pointed out that they lack the authority, under law or Labor regulation, to require the national sponsors to reallocate their positions to underserved counties. Labor could do more to encourage a more equitable distribution of national sponsor activities within a state. In extreme cases, Labor could increase national sponsors’ funding levels, rewarding sponsors willing to establish positions in underserved areas. Such encouragement would not contradict the hold harmless provision, which applies only among the states rather than within a state. Indeed, such encouragement could increase the effectiveness of the national sponsor role in the program.
Another option for more equitably distributing SCSEP positions within the states is to increase the percentage of funds dedicated to state governments from each year’s appropriation from the current 22 percent to a higher percentage. If the Congress were to stop enacting the 22-percent limit on state funding, the OAA provision requiring that state governments receive 55 percent of all funding above the 1978 hold harmless amount would take effect. At our request, Labor ran a simulated allocation of the program year 1994 funding formula without the “78/22” cap in place. Under that simulation, the funds available to the states for program year 1994 would have increased from $90 million to about $155 million. National sponsor funding would have decreased from $320 million to $255 million (see app. IV). With their statewide administrative structures and additional funds, state governments might have more flexibility in serving their eligible populations or a greater incentive than the national sponsors to administer positions in underserved areas. In the three states where the state government administers 100 percent of the SCSEP grant money, comparatively few counties are underserved or overserved. For program year 1994, each of Delaware’s three counties had an equitable distribution of positions; each of Hawaii’s five counties had an equitable number of positions; and Alaska’s six geographic areas used for the program had close to equitable numbers. For example, one area of Alaska had 46 positions instead of the equitable number of 43; another had 34 instead of 36. These three states, however, are not typical in their geographic and population features. Increasing the states’ share of the SCSEP funds would most likely not result in a dramatically different profile of enrollees by ethnicity or sex. 
In the state programs, on average, the percentages of enrollees by ethnicity and sex were about the same as those in the national sponsor programs for the reporting period ending in June 1994. For example, in the state programs, 22 percent of the enrollees were black and 23 percent were male; the comparable percentages in the national sponsor programs were 24 percent black and 29 percent male. Congressional hearings earlier in the program’s history questioned national sponsors’ spending on their administration. In our review, we found that, in program year 1994, eight of the national sponsors shifted some administrative costs to another cost category, and therefore the true administrative costs exceeded the 15-percent statutory limit. This problem appears to be less widespread in the state-administered SCSEP programs. Each of the national sponsors has its own approach to administration. Some of the sponsors perform all of the administrative functions of the program directly. Others subcontract or delegate aspects of administration to other organizations or state agencies. In addition, all of the sponsors fund at least a portion of national headquarters operations from SCSEP grant funds. In 1994, to support about 850 full-time administrative positions, national sponsors budgeted about $6 million for travel and more than $9 million for rental and other office expenses. The 1976 SCSEP regulations permit sponsors to spend their SCSEP grant funds in three categories: administration, enrollee wages and benefits, and other enrollee costs. The OAA has established a 13.5-percent limit for administrative expenses. This limit may increase to 15 percent with a waiver from the Secretary of Labor. “. . . 
salaries, wages and fringe benefits for project administrators; costs of consumable office supplies used by project staff; costs incurred in the development, preparation, presentation, management and evaluation of the project; the costs of establishing and maintaining accounting and management information systems; costs incurred in the establishment and maintenance of advisory councils; travel of project administrators; rent, utilities, custodial services and indirect costs allowable to the project; training of staff and technical assistance to subproject sponsor staff; costs of equipment and material for use by staff; and audit services.” The regulations describe other enrollee costs as “enrollee physical examinations; transportation; enrollee training; special job or personal counseling for enrollees; and incidental expenses necessary for enrollee participation, such as work shoes, safety eyeglasses, uniforms, tools, and similar items.” Using application documents that grantees submitted for Labor’s approval—updated with some actual expense data not initially available for the period under review—we examined national sponsors’ budget documents for program year 1994 to see (1) how costs were apportioned among the categories and (2) whether administrative cost limits were being adhered to. We also discussed administrative cost matters with Labor staff and national sponsor officials. The results showed that eight of the sponsors had budgeted administrative expenses exceeding the limit by over $20 million in total, classifying some administrative expenses as other enrollee costs rather than including them under administrative expenses. The following case illustrates this practice: One national sponsor’s budget documents showed about $14 million for administrative expenses, placing the organization under the 13.5-percent limit. However, our examination identified other amounts, classified in the documents as other enrollee costs, that should have been treated as administrative costs.
The sponsor classified as other enrollee costs, rather than as administrative costs, all salaries and benefits paid to its own field staff, including area supervisors, managers of field operations, and program development specialists ($5.9 million), and field staff’s travel ($1.8 million). If combined with the $14 million in acknowledged administrative costs, these expenses would raise total administrative costs for this grantee to more than 20 percent of the grant amount. We similarly recomputed the administrative costs for the other sponsors who understated these expenses (by classifying some as other enrollee costs). We found that the administrative percentages of the eight national sponsors that exceeded the 15-percent administrative expense limit ranged from 16.8 to 23 percent. Appendix V details the administrative expenses of each national sponsor for the 1994 program year. We also reviewed the other enrollee costs average percentages for the state governments in the SCSEP program and compared them with the national sponsors. For the state governments, the average, as a percentage of total grant amount, was about 6 percent in the 1994 program year; for the national sponsors, the comparable figure was about 8 percent. However, 23 state governments recorded other enrollee costs ranging from 7.0 to 13.2 percent. Labor’s SCSEP officials could better identify such administrative expense problems if Labor required that grantees provide better documentation of their administrative expenses, particularly those in the category of other enrollee costs. Because of grantees’ limited or vague reporting, Labor officials cannot adequately explain the other enrollee cost entries in the grantees’ application materials. For example, one grantee provided grant documentation that included an item shown as “other” in the category of other enrollee costs. 
This item, totaling $1,084,049, was delineated as $55,799 for the sponsor and $1,028,250 for a subgrantee, with no further information provided. At our request, Labor asked the sponsor for further documentation of this item. This documentation indicated that the sponsor and subgrantee expenses included costs that Labor could question for not being classified as administration, including $51,170 for postage, $132,874 for telephone service, and $522,494 for rent. From 1985 through the first half of 1995, the sponsors relied on grant provisions that incorporated proposed regulations instead of the 1976 regulations. These proposed regulations, published in July 1985 but never finalized, expanded the definition of other enrollee costs to permit several categories of costs that the 1976 regulations did not permit. These included expenses for orientation of host agencies, development of appropriate community service employment assignments, and “the costs associated with providing those functions, services, and benefits not categorized as administration or enrollee wages and fringe benefits.” Labor officials acknowledged that Labor operated the SCSEP program without formally amending the 1976 regulations. After the 1987 amendments to the OAA included the 1976 regulations’ 15-percent administrative expense limit as part of the law, Labor’s decision to use the 1985 draft regulations as criteria permitted sponsors to improperly characterize administrative expenses as other enrollee costs. Labor’s regulations permit sponsors to include in their administrative costs “. . . indirect costs allowable to the project.” A sponsor may use SCSEP grant money to pay for some of its general operating expenses provided that the sponsor can demonstrate that a part of those expenses indirectly supports SCSEP activities.
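The indirect-cost mechanism described above amounts to applying a negotiated rate to a defined base, subject to the overall limit on administrative spending. The sketch below illustrates that arithmetic; the rate, base, grant, and direct-administration figures are hypothetical and are not drawn from any particular sponsor's grant agreement.

```python
def scsep_indirect_charge(rate, base, grant, admin_limit, direct_admin):
    """Indirect cost chargeable to a SCSEP grant: the negotiated rate
    applied to the sponsor's base, capped so that total administrative
    spending (direct plus indirect) stays within the overall limit.
    All inputs here are hypothetical illustrations."""
    indirect = rate * base
    remaining_admin_room = max(admin_limit * grant - direct_admin, 0.0)
    return min(indirect, remaining_admin_room)

# Hypothetical sponsor: a 35.21-percent rate on a $2 million base,
# a $50 million grant, a 15-percent administrative limit, and
# $6.5 million already budgeted as direct administration.
charge = scsep_indirect_charge(0.3521, 2_000_000, 50_000_000, 0.15, 6_500_000)
print(round(charge))  # 704200
```

In this illustration the rate, not the cap, binds; if direct administrative spending had already consumed most of the limit, the remaining room under the cap would bind instead.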
Although our review concentrated on administrative issues other than indirect costs, Labor’s Office of Inspector General (OIG) has identified a continuing problem of improper indirect cost charges in the program. Under OMB Circular A-122, Labor’s Office of Cost Determination periodically negotiates indirect cost rates with the national sponsors. Each sponsor’s rate is the percentage of defined general operating costs, termed the “base,” that may be charged against its SCSEP grant as a SCSEP-related administrative expense. The categories of general operating expenses that may be included in the base are defined in each sponsor’s grant agreement with Labor. These categories vary somewhat among sponsors, but they typically include such expenses as executive salaries, payroll, accounting, personnel, depreciation, telephone, travel, and supply expenses. For example, one sponsor’s grant agreement with Labor specified that a rate of 35.21 percent may be applied against the sponsor’s base, defined as “Total direct costs excluding capital expenditures . . . membership fund costs, flow-through funds and program participant costs.” This means that 35.21 percent of the sponsor’s base expenses may be funded with SCSEP money, as long as that amount does not exceed the overall limit on the use of SCSEP grant money for administrative expenses. As shown in table 1, for the 1994 program year, the eight national sponsors that charge indirect costs had approved rates ranging from 4.95 to 108.1 percent. However, exact comparisons of the rates may not be meaningful because these rates are applied to the sponsors’ different bases. SCSEP grantees have sometimes used the grant funds to pay for questionable indirect costs. One national sponsor charged to the grant more than $21,000 in indirect costs “. . .
to promote employee morale and productivity including birthday, holiday and other cards, flowers, and expenses related to the company picnic and other employee morale events.” This was in addition to approximately $32,000 budgeted from direct costs for “. . . the purchase of refrigerators, microwaves, toaster ovens, and other appliances reasonably necessary to promote a positive work environment, and the purchase of bottled water for employees to promote health . . .” OMB guidance allows reasonable expenditures for such items, and we found no record of Labor’s objection to these expenditures. Sometimes, the use of SCSEP dollars for indirect costs involves considerably larger sums. On more than one occasion, Labor’s OIG questioned the propriety of a national sponsor’s use of SCSEP funds to pay for some of its operating expenses. One OIG report stated that the sponsor “. . . improperly charged to its indirect cost pool salaries and fringe benefits of employees of those divisions and offices responsible for [the national sponsor’s] own activities, such as fundraising and membership, and other non-Federal projects.” The questioned costs for program years 1988 to 1990 totaled over $700,000. The OIG stated, and program officials acknowledged, that if the amounts were upheld as improper, the national sponsor had no way of paying the money back. Yet for 3 years, while the dispute advanced through an administrative appeals process, Labor continued to award the sponsor SCSEP grants, with only a small modification to the sponsor’s indirect cost rate. A Labor official explained that the Department wanted to continue the funding while the matter was being adjudicated. However, the national sponsor and Labor decided to settle the matter before final adjudication: they agreed, early in 1995, that the sponsor would pay $400,000 (in full settlement of the $700,000 of disallowed costs) to Labor, without interest, over a 4-year period. 
The $400,000 is to be repaid from the sponsor’s nonfederal income in fixed quarterly installments: four payments of $12,500 each in year 1, four of $18,750 in year 2, four of $31,250 in year 3, and four of $37,500 in year 4. At no time during the dispute did Labor’s program officials impose a cutback in total administrative spending, even a small one. Audits for additional program years are in process. Along with SCSEP’s goals of providing training and subsidized jobs, Labor has set for each sponsor a goal of placing at least 20 percent of the enrollees in unsubsidized jobs each program year. During our review, we noted that Labor had not clearly stated in any of its regulations the meaning of an unsubsidized placement. This made it virtually impossible for Labor to know how successful the sponsors were in achieving that objective. Without such a definition, the sponsors may interpret unsubsidized placement in many ways. One sponsor defined an unsubsidized placement as one in which a program enrollee spends a specified minimum time in the program and then moves into a paying, non-SCSEP job and holds it for a specified minimum time. Other sponsors have had no time requirements for post-SCSEP job retention or for program participation for claiming an unsubsidized placement. Labor officials agreed that determining SCSEP job placement success was a problem and initiated efforts to produce a useful definition. As we were concluding our review, Labor issued a directive defining unsubsidized placement for SCSEP purposes. States’ populations of those 55 years of age and older have changed since 1978. The statutory hold harmless provision locks in 1978 funding levels that do not correspond to each state’s eligible 55 and older population, adjusted by income; this limits Labor’s ability to equitably distribute SCSEP positions among the states. Consequently, some states in the SCSEP program are overserved and some are underserved.
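As a quick arithmetic check on the settlement schedule described earlier, the four fixed quarterly installment levels, each paid four times, do sum to the agreed $400,000:

```python
# Quarterly installment amounts from the 1995 settlement, by year;
# each amount is paid once per quarter, four times per year.
quarterly_by_year = {1: 12_500, 2: 18_750, 3: 31_250, 4: 37_500}

total_repaid = sum(4 * amount for amount in quarterly_by_year.values())
print(total_repaid)  # 400000
```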
Labor could more equitably distribute SCSEP funds among states if the OAA’s title V hold harmless provision were amended or eliminated. Amending it to permit Labor to hold harmless only the sponsors’ 1978 nationwide total number of positions, rather than the 1978 funding level in each state, would enable Labor to (1) depart from the 1978 state-by-state pattern and (2) allot the funds so as to correct the problem of overserved and underserved states. Repealing the hold harmless provision, although an option, could significantly change the program’s character if it resulted in major shifts of funding allocations from national sponsors to state governments. Similarly, within states, the distribution of SCSEP funds leaves some counties overserved and some underserved. National sponsors are required by law to notify state governments and Labor of their plans for SCSEP positions in each state, but only Labor has the authority to effect a different pattern of positions among a state’s counties. Labor could adjust national sponsors’ funding levels to reward those willing to establish positions in underserved counties. Another step that might improve the distribution of funds within states would be legislative action to increase the percentage of positions funded by grants to state governments from the current 22 percent imposed by appropriations restrictions. The distribution patterns in the three states solely responsible for SCSEP activities were comparatively equitable. If these appropriations limitations did not exist, the share above the hold harmless amount going to the state governments would increase to 55 percent under the 55/45 provision of the authorizing legislation. The SCSEP program also has administrative expense problems. In the 1994 program year, we estimate that the national sponsors’ budgeted administrative expenses collectively exceeded the limit set by the OAA by over $20 million.
This occurred because Labor’s 1985 draft regulations rather than the 1976 regulations guided the national sponsors’ cost allocations. Under the 1985 draft regulations, expenditures that we believe to be administrative expenses may be charged to other enrollee costs. Labor failed to require specific and useful reporting by grantees of their other enrollee costs. Therefore, Labor sometimes could not readily identify what kinds of expenses were included in that category. The 1995 SCSEP regulations, which took effect in July 1995, allow a broad interpretation of other enrollee costs. Unless modified, these new regulations will permit the continuing allocation of administrative expenses to the category of other enrollee costs. These funds could otherwise be spent to finance additional program positions. Labor’s use of a modified noncompetitive process for making SCSEP grants essentially results in continuing to offer grant applications only to organizations already in the program. However, in SCSEP’s case, Labor does not follow its normal procedure for noncompetitive grants, in which the PRB reviews grant decisions. If followed, PRB reviews can advise whether competition is appropriate for each acquisition and whether long-term relationships with the same grantees are consistent with Labor’s policies. Labor officials did not adequately explain the program’s exemption from this review, and we see no justification for it. If the Congress wishes to ensure equitable distribution of SCSEP funds among states, it should consider amending or eliminating the title’s hold harmless provision. Such an amendment would authorize Labor to hold harmless only the 1978 nationwide level of national sponsor positions. The Department would not be required to hold harmless the 1978 state-by-state levels. If the hold harmless provision were eliminated, (1) the national sponsors could experience reduced funding levels and (2) Labor could distribute the funds on the basis of the most current demographic data available.
If the Congress wishes to better meet the OAA’s title V goal of equitably distributing SCSEP funds within states, it should consider increasing the portion of SCSEP grant funds allocated to state governments from the current 22 percent. One way to do that would be to forgo appropriations act language limiting the state governments to 22 percent of the annual appropriation. We recommend that the Secretary better meet the OAA’s title V goal of equitably distributing SCSEP funds within states. To do this, the Secretary should (1) require greater cooperation among national sponsors and states in equitable distribution matters and (2) adjust, as necessary, sponsors’ funding levels to reward sponsors that are willing to establish positions in underserved counties. In addition, we recommend that the Secretary revise the 1995 regulations to adopt the definition of administrative costs set out in the 1976 regulations. We also recommend that the Secretary enforce the statutory limit on administrative expenses and be prepared to reduce the funds available for administration of any grantee exceeding the legal limit by improperly categorizing costs or incurring improper indirect costs. Finally, we recommend that the Secretary no longer permit title V grants to be exempt from Labor’s normal review process and subject these grants to the same review as other noncompetitive grants. We provided copies of our draft report, for comment, to the Department of Labor and, through Labor, to the national sponsors. We met with Labor officials several times to discuss their concerns as well as those of the national sponsors. Where appropriate, we revised the report to include information provided by, and through, Labor. Labor’s comments and our detailed responses appear in appendix VI. 
Labor generally agreed with our recommendations that it (1) apply its normal noncompetitive review process to SCSEP grants and (2) require national sponsor grantees to cooperate more with states in the equitable distribution process. Specifically, Labor agreed to (1) have the PRB review SCSEP grant awards and (2) prepare procedures to enhance the role of states in the annual equitable distribution meetings. Labor also agreed to implement a process to ensure that it is apprised of disagreements on equitable distribution. Although Labor officials agreed to examine the matter more closely, they disagreed with our estimate that, for the 1994 program year, over $20 million in budgeted administrative expenses was improperly allocated to the category of other enrollee costs. Citing recent audits of national sponsor organizations that did not disclose noncompliance, Labor and several of the national sponsors questioned our (1) use of budget data from grant applications and (2) criticism of the criteria used for determining what costs should be allowed in the category of other enrollee costs. First, budget data submitted by the national sponsors were the only data available for the period we examined. More importantly, however, decisions by Labor officials on the appropriateness of expenses to be charged to the SCSEP program are made on the basis of budget data rather than actual expenses. Thus, our use of the same budget numbers that Labor uses seems appropriate. Second, with regard to Labor’s questioning of our criticism of the cost criteria, during the period covered by our review only the 1976 regulations had been formally promulgated. Because of Labor’s written comments about other enrollee costs, we discussed the issue with officials of Labor’s OIG and its contract auditors. Labor’s OIG staff told us that they measure grantee performance against the grant agreement.
Since ETA’s program staff had incorporated the 1985 draft regulations into the grant agreements, the OIG staff had reviewed the grantees’ performance against those criteria and had not focused on this issue. However, OIG contract auditor staff agreed that administrative costs appear to have been shifted to the category of other enrollee costs after the 1985 draft regulations became part of the grant agreements. Those discussions and Labor’s position led us to recommend that the Secretary of Labor review the SCSEP regulations implemented in July 1995. Copies of this report are being sent to the Secretary of Labor and interested congressional committees. We will make copies available to others on request. Please call me on (202) 512-7014 if you have any questions concerning the report. Other major contributors are listed in appendix VII. To identify Senior Community Service Employment Program (SCSEP) grants for program years 1993-94, we reviewed grant applications, the Older Americans Act (OAA), and Labor’s regulations that relate to grant awards and to title V. We also reviewed prior studies, audits, and reports on SCSEP, including those by Labor’s Office of Inspector General (OIG). We interviewed officials in the Employment and Training Administration’s (ETA) divisions of Older Workers Programs and Acquisition and Assistance (the “Grant Office”) and in Labor’s Office of Cost Determination and Office of Procurement. We also interviewed the OIG staff currently involved in program audits and several contract auditors engaged in audits of the SCSEP national sponsors. 
To learn about Labor’s oversight, coordination among sponsors, subsidized placements, and the effects of administrative practices on program goals, we interviewed officials from the 10 national sponsor organizations; 28 of the state units that administer, or have the opportunity to administer, the program; other organizations with an interest in SCSEP, including the National Association of State Units on Aging (NASUA), the National Association of Area Agencies on Aging, and the U.S. Administration on Aging; and several organizations operating as subgrantees for national sponsors and state agencies. To learn about equitable distribution requirements and Labor’s implementation of the OAA’s hold harmless provision, we interviewed staff from ETA’s Office of the Comptroller and reviewed the data used in the funding allocation process. We also reviewed states’ equitable distribution reports for 1989 and 1994 to check compliance with, and progress over time in meeting, the OAA’s equitable distribution provision. To trace the evolution of SCSEP, we reviewed several legislative histories, from the program’s beginning as a pilot project to its present status. We also interviewed former congressional staff who had interests in SCSEP authorization, appropriations, and oversight. To select states for review, we tried to obtain a balanced perspective in geography, size, and degree of direct involvement with SCSEP. Our selection was not random. In discussing administrative and other enrollee costs for states or the national sponsors, unless otherwise noted, we used amounts budgeted in the grants rather than costs actually incurred. Labor acts on the budget information in the sponsors’ grant application packages during its approval process. Although we reviewed audits by Labor’s OIG and others, we did not personally audit the grantees or examine specific sponsor expenditures. We did not try to assess (1) the outcomes of training offered by national sponsors, states, or U.S.
Territories; (2) section 502(e)(1) of the OAA, which allows Labor to use small amounts of SCSEP funds to conduct experimental projects that involve placing enrollees in private business concerns; or (3) the relative performance in administering SCSEP of individual states and territories or individual national sponsors. We did not attempt to independently verify the accuracy of the data provided to us. We conducted our review between April 1994 and April 1995 in accordance with generally accepted government auditing standards. Senior Community Service Employment Program (SCSEP) national sponsor projects operate locally under two general approaches: (1) by subgrant agreements with local organizations, such as agencies on aging or community groups, and (2) through local affiliates of the national sponsor. National sponsor decisions on where they will administer their enrollee positions, based on how they choose to operate and the constraints they operate under, alter the distribution of program resources within states. A profile of each national sponsor along with grant information for program year 1993 (the most recent complete year for which performance data were available) follows. (The number of staff shown as funded by the grant is based on grant application materials. The number of staff funded through the indirect cost portion of the grant may not be readily identifiable.)
Year first provided funds: 1969
Administration: 10 area supervisors responsible for state projects run by AARP staff and enrollees in administrative positions
Number of grant-funded employees: 144
Number of enrollees used in SCSEP administration: 502 (7 percent)
States operating in: 34 (33 and Puerto Rico)
State slots administered: Florida (342), North Dakota (15)

Year first provided funds: 1978
Administration: 13 regional offices; one subgrantee operates SCSEP as Project Ayuda
Number of grant-funded employees: 38 (estimate)
States operating in: 10 (9 states and District of Columbia) (Puerto Rico added in program year 1994)
Number of enrollees used in SCSEP administration: 45 (2.6 percent)
State slots administered: Florida (23)
Slots granted to states: none
Benefits to enrollees: FICA, workers’ compensation, sick leave, vacation, paid holidays, and Liberty Mutual Insurance
Definition of unsubsidized placement: Placement must have occurred in the same fiscal year that a person was a SCSEP enrollee. Person must stay on the job long enough to receive “a couple of paychecks.” Follow-up is at 60 days.

Year first provided funds: 1965
Administration: 30 SCSEP state offices serving one or more states coordinate Green Thumb employees and enrollees used in administration
Number of grant-funded employees: 417
States operating in: 45 (44 and Puerto Rico)
Number of enrollees used in SCSEP administration: 439 (2.6 percent)
State slots administered: Montana, South Dakota, Ohio, Florida
Slots granted to states: none
Benefits to enrollees: FICA, workers’ compensation, personal leave (up to 50 hours maximum), bereavement leave (up to 3 days), sick leave, jury duty benefits, plus other fringe benefits in accordance with Green Thumb policy
Definition of unsubsidized placement: Enrollee must have received job orientation, assessment, and counseling. Placement must be expected to last at least 90 days and must last at least 30 days.
Job must have been procured within 90 days of leaving enrollee status and pay a wage equal to or greater than what the person received as an enrollee.

Year first provided funds: 1989
Administration: Los Angeles and Seattle projects supervised by headquarters staff, two subprojects
Number of grant-funded employees: 14
States operating in: three (increases to eight in program year 1994)
Number of enrollees used in SCSEP administration: 26 (7.6 percent)
State slots administered: none
Slots granted to states: none
Benefits to enrollees: FICA, workers’ compensation, up to 13 holidays, 4 hours per month sick leave, 1 personal day, 3 days bereavement leave, 10 days jury duty
Definition of unsubsidized placement: Must go directly to the job from enrollee status. No minimum time on the job is required.

Year first provided funds: 1978
Administration: NCCBA staff operate state projects; no subcontracts
Number of grant-funded employees: 43
States operating in: 11 (10 states and District of Columbia)
Number of enrollees used in SCSEP administration: 68 (3.7 percent)
State slots administered: Florida
Slots granted to states: none
Benefits to enrollees: FICA, workers’ compensation, sick leave, annual leave, 11 paid holidays
Definition of unsubsidized placement: Enrollee must have come directly from the program, with jobs preferred to last at least 30 continuous days. Job must pay a minimum hourly rate of at least $4.25. Follow-up at 30, 60, and 90 days. No minimum time as an enrollee required.
Year first provided funds: 1968
Administration: 3 regional offices, 63 subsponsor agencies, direct management of Los Angeles project
Number of grant-funded employees: 77
States operating in: 21
Number of enrollees used in SCSEP administration: 188 (2.9 percent)
State slots administered: Arizona, New Jersey, Florida
Slots granted to states: Arizona, New Jersey, Virginia
Benefits to enrollees: FICA, workers’ compensation, unemployment insurance (where required), as well as benefits consistent with host agency environment
Definition of unsubsidized placement: Any job not federally funded or volunteer. No time limits in effect.

Year first provided funds: 1968
Administration: All projects subcontracted to municipal, charitable, local, or state organizations. NCSC staff involved in training and subproject supervision.
Number of grant-funded employees: 65
States operating in: 28 (27 and the District of Columbia)
Number of enrollees used in SCSEP administration: 275 (2.7 percent)
State slots administered: Alabama, Florida
Slots granted to states: Maryland, District of Columbia
Benefits to enrollees: FICA, workers’ compensation, 8 paid holidays, optional small hospital policy, 2 hours per pay period of leave
Definition of unsubsidized placement: A job with pay equal to or better than that of the enrollee position. No time requirements exist on how long the placement must last or on how long the enrollee must have been out of the program.
Year first provided funds: 1989
Administration: State coordinators in three states, one subproject
Number of grant-funded employees: 10
States operating in: six (increased to 16 in program year 1994)
Number of enrollees used in SCSEP administration: one (0.3 percent)

Year first provided funds: 1978
Administration: subcontracts with 23 NUL affiliates in urban areas
Number of grant-funded employees: 76
States operating in: 16
Number of enrollees used in SCSEP administration: 100 (4.5 percent)
State slots administered: Florida
Slots granted to states: none
Benefits to enrollees: FICA, workers’ compensation, and unemployment compensation where applicable (New York and Michigan)
Definition of unsubsidized placement: Placement in a position not funded by another government grant found within 30 days after leaving enrollee status; must have been an enrollee at least a week and must remain on the job at least 30 days.

Year first provided funds: 1972
Administration: 225 projects at various USFS locations within the eight Forest Service regions, nine regional experimental stations, and headquarters; two subcontracts
Number of grant-funded employees: 287 (4 full time, 283 part time)
States operating in: 40 (38 states, the District of Columbia, and Puerto Rico)
Number of enrollees used in SCSEP administration: 1 (0 percent)
State slots administered: Florida
Slots granted to states: New Hampshire, Vermont
Benefits to enrollees: FICA, workers’ compensation, one hour of paid leave for every 20 hours worked, up to $35 allowance for annual physical exam
Definition of unsubsidized placement: USFS has no required minimum for placement duration or separation from the program.

The amounts of the SCSEP grants have been affected by appropriations language that distributes the grant funds between the national sponsors and states in a way that differs from the language in the OAA.
For program year 1994, the national sponsors received about $320 million (78 percent of the funds), and the states received about $90 million (22 percent of the funds). At our request, Labor ran a simulated allocation of the program year 1994 funding formula without the 78/22 appropriations language limit in place. Under that simulation, the funds in excess of the 1978 appropriation would have been split 55 percent for the states and 45 percent for the national sponsors. Of the $410.3 million appropriated for program year 1994, funds for the state sponsors would have increased by $65 million to about $155 million; national sponsor funding would have decreased by that amount to $255 million. The first three columns of the simulation (see table IV.1) represent simulated program year 1994 funding for the state sponsors. Column 1 shows each state’s 1978 funding level; column 2 shows the additional funds, in excess of the 1978 level, that would have been distributed to states on the basis of the “55-45” split; and column 3 is the sum of these first two columns. The national total for the state sponsors, including territorial allocations, is more than $155 million. The next three columns represent simulated funding for the national sponsors. Column 4 shows the amount of national sponsor funding in each state in 1978; column 5 shows the additional funds, in excess of the 1978 level, that would have been distributed to the national sponsors on the basis of the 55-45 split; and column 6 is the sum of columns 4 and 5. The national total for the national sponsors is about $255 million. Columns 7 to 9 combine the state sponsor and national sponsor funding. Column 7 is the sum of columns 1 and 4. Nationally, column 7 totals about $201 million, the amount of the 1978 allocation for the program. Column 8 is the sum of columns 2 and 5. Nationally, column 8 totals over $209 million and represents the funds for program year 1994 that exceed the 1978 appropriation.
Column 9 is the total of columns 7 and 8; nationally, column 9 totals the $410.3 million appropriation for program year 1994. For program year 1994, most of the national sponsors allocated administrative costs to the category of other enrollee costs rather than the administrative category, which has an Older Americans Act (OAA) limit of 15 percent. Officials at Labor and some of the national sponsor organizations justified this practice because the costs included support of enrollee training or assessment activities or the costs of providing these services, expenditures allowed under the 1985 proposed SCSEP regulations. However, because the 1985 proposed regulations were never published in final form, they never superseded the 1976 legally promulgated regulations. Labor, while defending the 1985 draft definition of other enrollee costs, could not specifically explain how many of these allowed costs for program year 1994 related directly to the enrollees; nor did most of the documents provided by the national sponsors in response to Labor’s request to provide explanatory data. Some grantees provided the results of internal surveys of staff activity taken in 1985 or earlier to support their budget allocations. Others provided only their stated reliance upon Labor’s 1985 proposed regulation language as the basis for including such administrative costs as other enrollee costs. When actual costs for program year 1994 were provided, we reviewed them and, where appropriate, included them in the tables. Tables V.2 to V.11 delineate grant costs for administration and other enrollee costs (1) from the individual national sponsor grant agreements and (2) as we identified them. Our delineation identifies costs allocated to other enrollee costs that, in our judgment, were administrative costs.
All costs that could be attributed directly to enrollee training, special job-related or personal counseling, incidentals, or other direct support were excluded from the following tables. A combined total of (1) administrative costs from the grant agreement and (2) additional administrative costs identified by GAO from the other enrollee costs category is also shown for each grantee organization. A combined percentage for administration is computed as well. When actual cost data were provided by the grantees, those costs are shown in an “actual costs” column. In these instances, actual costs were added to the acknowledged administrative costs from the grant to derive totals and percentages. In cases where no actual cost data were provided, the actual cost column is blank. Using the budget data from the grant applications and, where available, actual cost data provided by the grantees, we found that administrative costs for most of the sponsors were higher than the 15-percent limit in the OAA. For program year 1994, the administrative costs labeled as other enrollee costs exceeded $18 million. (Using budget data alone, the total exceeded $20 million.) Table V.1 summarizes the additional administrative costs for all the national sponsors. [Table V.1 lists, for each sponsor, the additional administrative cost (budget), the additional administrative cost (with actual), and the percent of grant for all administration (with actual). The sponsors shown are the American Association of Retired Persons (AARP), Asociación Nacional Pro Personas Mayores (ANPPM), National Asian Pacific Center on Aging (NAPCA), National Caucus and Center on Black Aged (NCCBA), National Council on Aging (NCOA), National Council of Senior Citizens (NCSC), National Urban League (NUL), National Indian Council on Aging (NICOA), and U.S. Forest Service (USFS).] [Tables V.2 to V.11 show, for each sponsor, the subtotal of costs from the grant agreement (A), the sponsor-identified administration (B), and the GAO total of administration (A) + (B), against the following SCSEP federal grants for program year 1994: AARP, $49,894,391; ANPPM, $12,570,219 (15-percent limit with waiver; GAO total of administration, 22.4 percent); Green Thumb, $102,509,745; NAPCA, $5,067,315; NCCBA, $12,298,332 (one entry of $154,143 estimated; no actual costs provided for some categories); NCOA, $37,442,704; NCSC, $62,845,065 (including a contingency for local project administration); NICOA, $5,066,911; NUL, $14,341,274; and USFS, $26,844,903.] The following are GAO’s comments on the Department of Labor’s letter dated July 31, 1995. 1. Concerning the appropriateness of the draft report title, SCSEP: Significant Changes Needed, the purpose of our review was not to question the need for the program or its results.
We were asked to examine SCSEP’s administration; therefore, we have changed the title to Department of Labor: Senior Community Service Employment Program Delivery Could Be Improved Through Legislative and Administrative Actions to reflect that focus. More specifically, we found systemic flaws that may deny eligible people an opportunity to participate in the program and a cost allocation approach that allowed the improper budgeting and expenditure of millions of dollars, permitting national sponsors to exceed the statutory 15-percent limit on administrative costs. 2. SCSEP is a grant program for which applicants apply annually. Labor has the authority to decrease or deny altogether the funding amount sought if it has concerns about an applicant’s future performance. Therefore, Labor had a choice in funding the national sponsor in question. For this national sponsor, Labor had sufficient reason, on the basis of its Office of Inspector General (OIG) reports, to (1) be concerned about future performance and (2) consider a change to that grantee’s funding. 3. The statement, which we have rewritten to avoid the inference mentioned, seeks to explain program funding by identifying contributors and the differences between cash and in-kind contributions. 4. The report has been changed to include Labor’s updated data that reflect the proper proportion of women in the SCSEP program. 5. Although funding amounts are related to the time sponsors have participated in SCSEP, the wide variation in sponsors’ funding has been cited as a problem by several national sponsors as well as several states. Some of the smaller, ethnically targeted national sponsors have tried to serve targeted groups, these sponsors said, but have been thwarted by a reluctance on the part of some large national sponsors to leave areas they served. 
According to some state officials, the significant disparity between the funding they received and that received by some national sponsors left the states in a relatively powerless position in disputes over equitable distribution. 6. Noncompetitive grant awards that total several hundred million dollars a year are sufficiently sensitive to warrant the Procurement Review Board’s review. Further, constraints on the Board members’ time are not justification for weakening internal control measures. An independent review of grant award decisions, although administratively established and not explicitly required by law, is an important internal control. 7. In objecting to our views on the inadequacy of attempts to achieve equitable distribution of enrollee positions, Labor raised several issues. Concerning the issue of responsibility by states, we have revised the report to ensure that it clearly points out the responsibility that states, as well as national sponsors, have in achieving equitable distribution of enrollee positions. Concerning the issue of administrative efficiency related to the goal of equitable distribution, Labor cited our 1979 report, The Distribution of Senior Community Service Employment Positions (GAO/HRD-80-13, Nov. 8, 1979). Labor quoted that report on the approach taken by sponsors—and particularly national sponsors—to achieve equitable distribution. The report noted that, relative to the administrative limits required of the program, the national sponsors’ efforts to become cost effective did have merit. But the relationship between national sponsors’ and Labor’s efforts to achieve equitable distribution is more fully detailed in a later report, Information on the Senior Community Service Employment Program and the Proposed Transfer to the Department of Health and Human Services (GAO/HRD-84-42, Mar. 12, 1984). This 1984 report (p.
22) noted the following: “A Labor official stated that the distribution of enrollee positions within the states may not be equitable since some national sponsors established large clusters of enrollee positions early in the development of SCSEP, and these have been carried forward.” According to the 1984 report, Labor, in February 1979, asked SCSEP sponsors in each state to (1) discuss and agree upon a rationale for distributing SCSEP funds, (2) identify areas that showed inequitable distribution, (3) establish plans for eliminating inequities without displacing current enrollees, and (4) send these plans to Labor. Labor officials said they did not receive many plans. In 1981, following up on that request, Labor asked national sponsors and state agencies, as a group effort, to report on the progress made toward achieving equitable distribution. Labor said that it received reports from approximately 90 percent of the states. As also noted in our 1984 report, Labor officials established a panel of representatives—from Labor, national sponsors, and state agencies—to review the equitable distribution reports and determine which states were making progress. The panel examined the state reports, but, according to our 1984 report, “The results were never formalized by Labor, and no general feedback was provided to the sponsors.” Labor did suggest to the program sponsors that they use the reports during their next planning sessions. In January 1984, Labor again requested an equitable distribution report. According to our 1984 report, “while such cooperative efforts by national and state sponsors are directed toward equitable distribution, Labor does not know that such distribution has occurred.” When we began the review leading to this latest report, we asked Labor officials if they knew the status of equitable distribution in the states compared with its status 5 years earlier.
Labor officials did not know for certain which states had progressed in equitable distribution. Concerning Labor’s complaint about census data, any comparison of distribution of positions between 1989 and 1994 is, necessarily, skewed. This is because the 1989 distribution of enrollee positions was made on the basis of 1980 census data, and the 1994 distribution was made on the basis of 1990 census data. The introduction of 1990 census data in the 1994 equitable distribution reports may have obscured progress made between 1989 and 1994 in some areas and exaggerated progress in others. 8. The number of enrollee positions available depends on the level of SCSEP funding, not on the hold harmless provision. When funding levels decline, past performance indicates that sponsors—state and national—leave some positions unfilled to ensure that enrollees in other positions may continue in the program. In receiving enrollee positions formerly available to a national sponsor under the hold harmless provision, a state sponsor would have the option of (1) administering these positions itself or (2) subcontracting the administration to others, including the original national sponsor. In addition, the forward funding nature of the program (see footnote 7) would give all parties concerned ample time to adjust to a change in sponsors. Therefore, it is not likely that removing the hold harmless provision would “place many enrollees ‘on-the-street’ without alternatives.” 9. Sponsors that emphasize different activities and target different groups may, nevertheless, serve the same people. All sponsors must provide enrollees with positions and training that correspond to their aptitudes and preferences—just as all sponsors, regardless of their ethnic focus, must accept potential enrollees only on the basis of age and income criteria, not on ethnicity or sex. 10.
The unwritten agreement mentioned allows national sponsors to avoid situations that might provoke dissension because of differences in salaries or benefits of enrollees participating through different sponsors. This policy could possibly deny enrollees access to the type of training best suited to their needs. Whether such a denial is permanent or not is irrelevant. The policy serves the interests of SCSEP’s national sponsors rather than those of the elderly poor, for whom the program exists. 11. Several national sponsors—the U.S. Forest Service is an example—have geographic constraints on their decisions on areas to serve. Other national sponsors have a preference for serving specific ethnic or minority groups (whose languages and cultures may require specialized knowledge), which guides some of their decisions on areas to serve. States are not likely to face such constraints or preferences. In addition, some states with small populations have said (1) their level of effort in SCSEP has been curtailed by the minimal funding they receive and (2) more funding would allow them to increase their SCSEP efforts. 12. Labor raised two issues: (1) our use of budgeted rather than actual expense data in assessing administrative and other enrollee costs and (2) our interpretation of acceptable administrative costs in the SCSEP program. Regarding the first issue, during our review, we obtained from Labor’s SCSEP staff the data relevant to SCSEP grant awards. When we discussed actual cost data, staff described the separation of Labor’s program and fiscal oversight activities and the limited use of actual cost data in program planning and new grants approval. Actual cost data are not normally available until well after the grant year is completed. When we received Labor’s enclosure, indicating its revised view on the use of actual cost data, we asked when these data would be available. Actual data would not be available for 3 months or longer, Labor said. 
By that time, program year 1995 allocations had already been made. We also asked for any additional data the national sponsors had used to justify their budgeted costs for the 1994-95 program year. Labor said it did not have these data but would request them from the national sponsors. Nine of the national sponsors provided data or information intended to explain and support their allocation of costs to the category of other enrollee costs. In instances in which these data indicated that the expenses had directly supported other enrollee cost services, we have revised the totals we had originally developed using budgeted amounts and noted the revisions in the actual costs columns of tables V.2 through V.11. However, little of the cost data adequately distinguished other enrollee costs as being in direct support of the enrollees rather than general administrative operations. Ultimately, the relevance of the budgeted versus actual costs issue is questionable because Labor’s SCSEP program officials historically have based their application approval and oversight decisions primarily on budgeted costs, which should be supported by up-to-date and accurate cost data. Most of the data Labor received from the national sponsors did not directly support the budgeted costs they were asked to support. Regarding the second issue, Labor’s other response to our findings of misallocations questions our interpretation of acceptable administrative costs. Labor cites the authority of the 1985 SCSEP draft regulations and their incorporation into the grant agreements. As noted earlier in this report, these draft regulations have no legal authority. In 1976, Labor published the only formal regulations in effect for SCSEP before program year 1995. Labor’s proposed amended SCSEP regulations, published in 1985, remained in draft form. Because these regulations never became final, they never gained the force and effect of law. 
Between 1976 and 1995, the only regulations in effect that pertained to SCSEP were the 1976 regulations. Through its comments, Labor has (1) downplayed the existence of the legally promulgated 1976 regulations and (2) interpreted the draft 1985 regulations as having the force and effect of law, when, in fact, they do not. Labor officials have not provided us with an acceptable legal basis for using the 1985 draft regulations instead of the legally promulgated regulations of 1976. Finally, these officials also suggested that other Labor programs under other legislative authority may permit a different interpretation of cost allocations. This may be true, but with respect to SCSEP, the regulations and related provisions of the Older Americans Act (OAA) speak for themselves. A brief discussion of the context of the other enrollee costs issue may help in understanding it. The national sponsors have repeatedly criticized Labor’s refusal to recalculate the unit cost to administer an enrollee position. Labor officials have informally acknowledged that the administrative costs associated with a placement have increased significantly over time. They also have acknowledged that some expenses that have been allocated to the category of other enrollee costs by national and state sponsors would have been more appropriately included in the administrative cost category. Through the introduction of the 1985 SCSEP draft regulations, Labor, in effect, used the category of other enrollee costs as a way to provide sponsors, most national and some state, with additional funds to cover administrative expenses. The purpose of our review was not to determine whether the present level of funding for administrative expenses is adequate but to identify whether administrative expenses have been properly allocated under existing law and regulations. We continue to conclude that in many instances administrative expenses have not been properly allocated. 
Finally, the July 1995 regulations, which became final as we concluded our review, will allow many of the cost allocations of the type that violated the 1976 regulations to continue. We believe that Labor’s interpretation of these new regulations is inconsistent with the OAA’s 15-percent limit on administrative costs. This belief has prompted our recommendation that the Secretary of Labor clearly delineate the expenses allowable as other enrollee costs and adopt the definition of administrative costs set out in the 1976 regulations. 13. Labor’s OIG officials and contract auditors have told us that significant concern has existed about grantee indirect costs for several years. These costs have been the focus of most of Labor’s OIG audit activity for several of the grantee organizations. 14. While the Office of Management and Budget guidance allows reasonable expenditures for “employee morale activities,” we questioned the use of scarce program funds for such activities. In the report example Labor cited, one of the grantee organizations budgeted about $57,000 for items to promote staff morale and for recognition of staff achievement. We have changed the report to reflect the fact that $25,000 of the budgeted amount was from indirect costs and $31,944 was from direct costs. The grantee organization in question provided actual cost data showing that program year 1994 expenditures from its employee morale account were $21,347.27 rather than the budgeted amount of $21,821. 15. We have changed the report to reflect that reporting of only legitimate unsubsidized placements is the responsibility of states as well as national sponsors. Laurel H. Rabin, Communications Analyst; Stefanie G. Weldon, Senior Attorney.
GAO examined the Department of Labor's (DOL) Senior Community Service Employment Program (SCSEP), focusing on: (1) the DOL process for awarding SCSEP grants; (2) the extent to which DOL equitably distributes SCSEP funds; and (3) SCSEP administrative costs. GAO found that: (1) in order to maintain 1978 activity levels, the Older Americans Act (OAA) requires DOL to award SCSEP grants to national sponsors and those with proven track records; (2) of the $410 million in SCSEP appropriations for program year 1994, $234.5 million was distributed under the 1978 activity level provision; (3) DOL's use of the 1978 allocation pattern severely limited its ability to achieve equitable distribution among states; (4) appropriations statutes have overridden the title V funding provision to require that no more than 22 percent of SCSEP appropriations be allocated to state governments; and (5) in program year 1994, national sponsors' administrative costs exceeded the 15-percent limit due to administrative expenses being charged to another cost category.
In 2011, there were an estimated 150,000 Native American veterans, representing less than 1 percent of the entire veteran population, according to the U.S. Census Bureau. Additionally, according to a VA report that analyzed Census Bureau data, American Indian and Alaska Native veterans were significantly more likely to be unemployed, as compared to all other veterans. The states with the largest numbers of American Indian and Alaska Native veterans were, in descending order, California, Oklahoma, Arizona, and New Mexico; Alaska had the highest proportion of Native Americans among its veteran population. Tribal lands vary dramatically in type, size, and demographics. For purposes of this report, we define tribal lands to include Indian reservations, Alaska Native villages, and Hawaiian Home Lands. They range in size from the Navajo Nation, with a population of over 176,000 American Indian residents spread out over an area of about 24,000 square miles, to some tribal lands with fewer than 50 Indian residents or an area of less than 1 square mile. In addition, most tribal lands are rural or remote, although some are near metropolitan areas. Communities located on tribal land also often lack basic infrastructure, such as water and sewer systems, and sufficient technology infrastructure, such as telecommunications lines, that are commonly found in other American communities. Additionally, in 2008, the Census Bureau reported that American Indians and Alaska Natives had almost twice as high a percentage living in poverty compared with other Americans—27 percent compared with 15 percent. While Native Americans reside in every state, the Census Bureau estimated in 2012 that about 20 percent of American Indians and Alaska Natives live on tribal lands and that 16 percent of Native Hawaiians live on Hawaiian Home Lands. See GAO, Indian Issues: Observations on Some Unique Factors that May Affect Economic Activity on Tribal Land, GAO-11-543T (Washington, D.C.: Apr. 7, 2011).
The federal government maintains a government-to-government relationship with federally recognized tribes. Since 1979, BIA has regularly published a list of federally recognized Indian tribes in the Federal Register. As of May 2013, there were 566 federally recognized tribes, including 225 in Alaska. By contrast, Native Hawaiians do not have federal recognition or a government-to-government relationship with the United States. However, some federal grant programs target Native Hawaiians as a group, in some cases along with other populations. Native American veterans can be served by a range of employment- and training-related federal programs that are administered by DOL and other agencies (see fig. 2 for selected examples). Within DOL, two organizations administer programs that serve Native American veterans: the Veterans’ Employment and Training Service (VETS) and the Division of Indian and Native American Programs (DINAP). VETS has responsibility for a number of programs that serve veterans. Among these is the Jobs for Veterans State Grants program (JVSG). Services funded through the JVSG program are available to all eligible veterans, including those with disabilities and other barriers to employment. JVSG grants provide funding to states to support two types of staff positions, Disabled Veterans’ Outreach Program specialists (DVOPs) and Local Veterans’ Employment Representatives (LVERs), who are generally employees of state workforce agencies. DVOPs and LVERs provide services including skills assessment, job search assistance, and outreach to employers on behalf of veteran jobseekers, among other services. JVSG grants are awarded to states, and although Native American tribes are not eligible grantees under the program, Native American veterans who are otherwise eligible can be served by DVOPs and LVERs. In general, in order to provide services on tribal land, DVOPs and LVERs schedule their visits to tribal land on a case-by-case basis.
JVSG funding is allocated to each state in proportion to the number of veterans seeking employment within the state. Additionally, the JVSG program is a mandatory partner in the Workforce Investment Act system of American Job Centers, or one-stops, where services are provided by a range of employment and training programs in a single location. In 2012, there were 2,600 American Job Centers nationwide, and according to DOL, 12 are located on or affiliated with tribal land. DINAP, within DOL’s Employment and Training Administration, serves Native American veterans under Section 166 of the Workforce Investment Act (WIA) of 1998. This program serves all eligible Native Americans—veterans and non-veterans alike—including American Indians, Alaska Natives, and Native Hawaiians. Under the program, in 2012, DINAP administered grants to Native American entities in all 50 states, including Alaska and Hawaii. These grants support comprehensive workforce investment activities for Native Americans, including employment and training services for jobseekers. The Jobs for Veterans Act established a priority of service requirement for veterans in qualified DOL job training programs, including WIA programs. WIA Section 166 grantees must therefore give priority to veterans over others seeking employment and training services. Other federal agencies also administer employment and training-related programs that serve Native American veterans, among other groups. These programs include the Department of the Interior’s Bureau of Indian Affairs’ Indian Job Placement and Training program; Department of Veterans Affairs programs, including Vocational Rehabilitation for Disabled Veterans; and the Department of Education’s Vocational Rehabilitation Services for American Indians with Disabilities, among other federal programs.
In addition, the Department of Defense (DOD), in partnership with VA, DOL, and other agencies, administers the Transition Assistance Program, which provides mandatory, multiple-day transition briefings at military bases to facilitate servicemembers’ transition to civilian life, including a focus on preparation for employment. Additionally, the Small Business Administration (SBA) administers programs to facilitate self-employment for veterans and Native Americans, including American Indians, Alaska Natives, and Native Hawaiians. DOL’s 2010 report was prepared in response to a provision in the Veterans’ Benefits Improvement Act of 2008. This provision required DOL, in consultation with VA and DOI, to submit a report assessing the employment needs of Native American veterans living on tribal lands to the House and Senate Committees on Veterans’ Affairs. The Act further required that the report include recommendations for improving employment and job training opportunities for such veterans, and a review of current and prior government-to-government relationships between VETS and tribal organizations. DOL contracted with a consulting firm, which issued a report to DOL in March 2010, and relied on interviews and surveys of federal and state officials, tribal organizations, surveys of Native American veterans, and focus groups. The contractor’s report focused on VETS and the JVSG program, but also considered WIA Section 166 and other relevant programs. The contractor found little or no attention by the JVSG program to the needs of Native American veterans, and although it noted some progress in relationships between DOL and tribal governments, it found little awareness of VETS programs among tribal organizations. Based upon the contractor’s report, DOL issued its final report, including six recommendations, to Congress in October 2010. 
DOL’s report included recommendations for short- and long-term actions DOL could take in key areas, but did not include a specific timetable for implementation. DOL is in the early stages of implementing several of the report’s recommendations, but implementation of the remaining recommendations has not occurred. The agency has begun to take initial steps to respond to three of the report’s six recommendations: improve interagency collaboration, create an advisory committee subgroup for Native American veterans, and conduct a needs assessment. DOL has taken little to no action on the remaining three recommendations: increase outreach, pursue program flexibility, and boost economic development. DOL officials told us that leadership transitions and budget challenges have significantly contributed to their current position and limited endorsement of these recommendations. For detailed information on the status of each of these three recommendations from DOL’s 2010 report, see appendix II. Collaboration. To respond to the recommendations to increase collaboration with federal and state agencies and outreach to tribal governments, DOL has begun collaborating with agencies that serve veterans, including VA and BIA, to learn more about how to better serve Native American veterans and has conducted several listening sessions with tribal leadership. Advisory subcommittee. The 2010 report also recommended that DOL create an advisory committee subgroup on Native American veterans’ employment issues to the DOL Advisory Committee on Veterans’ Employment, Training and Employer Outreach (ACVETEO). DOL officials told us that although VETS may offer proposals to the committee, the ACVETEO members decide—because the ACVETEO operates independently—whether to establish a new subgroup. 
DOL is currently developing a proposal, to be presented to ACVETEO, to establish a subgroup for Native American veterans, and is also considering appointing a representative from the Native American veterans’ community to join ACVETEO. Needs assessment. DOL officials told us that they have no plans to conduct a needs assessment to develop better data on Native American veterans living on tribal land, citing potentially prohibitive costs to conducting a comprehensive assessment. However, DOL has identified a potential source for data within DOD that provides race and ethnicity as well as address information for returning veterans. This information could help DVOPs and LVERs target their visits to tribal land, but DOL’s plans to acquire these data are still being negotiated and could take time to finalize. In the interim, DOL has identified VA data that, while not comprehensive, in the short term, might provide an estimate of the number of Native American veterans on tribal land. Outreach. The 2010 report recommended that DOL launch a communications program focused on outreach to Native American veterans on tribal land by establishing a public information campaign to share information about VETS programs through the use of the web, direct mail, newspapers, and social media. However, DOL officials told us that they have not initiated such a campaign due to limited funding. Program flexibility. A DOL official said that the agency has not pursued the recommendation to provide additional program flexibility, explaining that under the JVSG program, states already have some flexibility under current law to permit them to build solutions to fit their needs. The report recommends that DOL identify, on a program-by-program basis, available flexibilities to meet the needs of Native American veterans living on tribal lands. While DOL has indicated that it is open to identifying any such opportunities that exist under current law, to date it has taken little action in this area. 
Economic development. With regard to economic development, the 2010 report noted that programs to support economic development— such as financing and support for new businesses—lie outside VETS’s authority. However, the report also noted that economic development should be a major focus across federal programs addressing employment concerns for Native American veterans. To date, DOL officials told us they had taken no action on this issue and emphasized that, with regard to economic development, DOL’s focus is on the role that skills can play. Since delivering the report in 2010, DOL has not developed a strategy that specifically establishes roles and responsibilities, goals, and time frames for implementation of the report’s recommendations. Our past work identified key practices for implementing organizational change efforts to improve an agency’s programs, including establishing a coherent mission and integrated strategic goals to guide the effort; setting a timeline to build momentum and show progress from day one; and dedicating a team to manage the implementation process. Although there is no established set of requirements for developing a plan to implement a new effort, components of sound planning are important because they define what organizations seek to accomplish and identify specific activities to obtain desired results. According to senior DOL officials, DOL has prepared a draft internal memorandum that describes some of the findings from listening sessions conducted with tribal leaders and contained a few recommended action items that address some of the 2010 report recommendations. However, according to DOL officials, the memorandum does not address each of the six 2010 report recommendations. It also does not identify costs, roles and responsibilities, or project time frames. 
DOL could expand on its efforts to implement the 2010 report’s recommendations to improve employment service delivery to Native American veterans on tribal land, even within its constrained budget environment. For detailed information on opportunities for DOL to better implement the 2010 report recommendations, see appendix II. Collaboration. DOL could expand its collaboration efforts to include other federal agencies that administer programs that serve Native American veterans, such as SBA and the Departments of Education and Health and Human Services (HHS). For example, working with SBA could provide DOL access to information that might aid Native American veterans in remote areas with limited employment opportunities by providing access to training and support for veterans interested in self-employment. An official from SBA agreed that additional collaboration would improve self-employment training opportunities for Native American veterans. In addition, DOL can leverage state-level collaborations with other agencies and tribal governments. For example, during our site visits we learned that some DVOPs and LVERs have leveraged other federal and state agencies’ resources to help serve Native American veterans. For instance, in one state, a DVOP who serves Native American veterans in remote communities joined VA’s mobile units to provide both health and employment services to Native American veterans to reduce travel time and reach additional veterans. However, to date, DOL has not identified promising practices for possible dissemination to other JVSG grantees. Advisory subcommittee. 
We have stated in past work that federal agencies have used a variety of mechanisms to implement interagency collaborative efforts, including supporting the establishment of interagency work groups. In addition to including Native American veterans on the Advisory Committee on Veterans’ Employment, Training and Employer Outreach (ACVETEO), DOL could appoint qualified individuals from other agencies that serve Native American veterans, such as BIA and HHS. Both BIA and HHS administer programs that may be used to provide employment training to Native American veterans on tribal land. Including BIA and HHS on the ACVETEO could provide additional opportunities to leverage resources and share knowledge that could result in improved access to employment and training services for veterans on tribal land. Needs assessment. Options may exist for DOL to develop or obtain better data to assess the needs of Native American veterans returning to tribal land. Currently, no mechanism that we could identify exists to share data with tribes on the number and timing of Native American veterans returning to tribal communities. In the absence of timely data on the population of Native American veterans who need employment and training services, state agencies and JVSG staff must try to gauge and meet needs in other ways. DOL could develop or obtain better data on Native American veterans returning to tribal land by coordinating with DOD and VA or other agencies that maintain contact information on recently separated veterans. Specifically, according to a DOD official, DOD data on separating servicemembers are shared with VA, which modifies the data for its own purposes and then shares the data with state veterans’ agencies. According to this official, the data include fields for ethnicity, including Native American ethnicities. As a result, the data that state veterans’ agencies receive from VA may be helpful to DOL in identifying recently separated Native American veterans. Outreach. 
DOL can leverage the outreach activities conducted by state and local entities. For example, JVSG staff in several states have conducted training sessions in tribal communities focused on veterans’ transition to civilian life, while other JVSG staff have built on a VA initiative that trains volunteers in tribal communities to educate veterans about employment and training services available to them. In addition, some tribal officials and Native American veterans told us that some veterans prefer in-person outreach. To this end, veterans outreach staff in some states have been leveraging libraries and tribal colleges to conduct outreach to tribal veterans. Program flexibility. Although few tribal officials and Native American veterans we spoke with identified opportunities for additional program flexibility in the JVSG program, DOL could consider ways to use existing program flexibility to better serve veterans and enhance service delivery. Specifically, DOL is considering plans to ask states to explicitly describe their efforts to serve Native American veterans in their required JVSG 5-year grant operating plans, which could help ensure that states consider the needs of Native American veterans when preparing their plans. However, to date, DOL has not established a time frame for completion. Encouraging grantees to describe their plans to address the needs of Native American veterans in their state could increase the likelihood that grantees might design a service delivery strategy that takes into account the unique needs of the various tribal communities they serve. Economic development. DOL has the potential to better understand and support economic development by reviewing its existing grants and guidance on this topic and identifying lessons learned to disseminate to JVSG state and WIA Section 166 grantees. 
In the past, DOL, through its Employment and Training Administration (ETA), has awarded grants that have aligned employment and training grants with economic development needs. Recent grantees have included tribal entities. For example, ETA awarded a grant to a tribal college to train workers, foster business development and promote entrepreneurship on tribal land. DOL has an opportunity to review such grants to disseminate lessons learned on the extent to which Native American veterans participated in and benefited from grant-funded activities, as well as opportunities to incorporate economic development principles into existing grants. For example, DINAP recently visited an Arizona tribe to identify lessons learned on its progress in implementing a grant that aims to increase job opportunities for veterans and others in occupations that reflect employers’ need for skills, according to ETA officials. However, DINAP’s grant review activities to date have not involved VETS, nor has there been any technical assistance developed in connection with these activities, according to a DINAP official. The 2010 DOL report provides the foundation upon which to develop a strategy for achieving the recommendations’ goals. However, DOL has not yet developed a strategy to implement all of its recommendations to improve employment and training services for Native American veterans living on tribal land. Until DOL develops an overall strategy that establishes the roles and responsibilities, costs, and time frames for implementation, it will be difficult to determine whether DOL is on track for effectively implementing the recommendations. Under current budget limitations, DOL must approach the needs of Native American veterans in tribal communities with a response that is both proportionate to the needs of this community and cost-effective. It is essential for DOL to properly target its efforts to ensure that those most in need of services receive assistance. 
Without current information on the number of Native American veterans living on tribal land and a fuller understanding of their needs, DOL will be unable to appropriately identify and target those services. DOL has taken some initial steps to conduct outreach to tribal communities and has begun collaboration with other agencies. However, without conducting additional outreach and collaboration with other organizations within DOL and other federal, state, and tribal agencies that serve Native American veterans, DOL may miss the opportunity to leverage the knowledge and resources of those agencies that are already serving veterans in tribal communities. Without adequate job opportunities, employment and training services alone may simply frustrate Native American veterans seeking employment. While economic development is not DOL’s primary mission, the agency has an opportunity to better understand and support economic development on tribal land by reviewing its existing grants and guidance on this topic and identifying lessons learned to disseminate to its grantees that provide these critical employment and training services to Native American veterans on tribal land and forming partnerships with other agencies that can support these efforts. Without action from DOL to provide improved access to employment and training services, many Native American veterans who have honorably served their country will remain at risk of being underserved and unemployed. To strengthen DOL’s efforts to respond to the 2010 report recommendations to improve employment services and training opportunities for Native American veterans on tribal land, we recommend that the Secretary of Labor undertake the following three actions:
1. Ensure it has a written strategy to position the agency to efficiently and effectively respond to the 2010 recommendations, including the identification of roles and responsibilities as well as the goals, costs, and time frames to complete their implementation. 
2. Identify and disseminate lessons learned and promising practices from DOL and other agencies’ efforts. To identify such lessons or practices, DOL could:
a. Review efforts by JVSG grantees to improve DVOP and LVER outreach, such as the use of mobile units to conduct outreach on tribal lands; and
b. Review DOL’s portfolio of employment and training grants and guidance related to economic development for application to JVSG and WIA Section 166 grantees.
3. Expand collaboration with other agencies to leverage agency resources. This effort could include working through the ACVETEO and other efforts, strengthening relationships with agencies that also serve Native American veterans, such as DOD, VA, SBA, and BIA, as well as building relationships with other agencies that serve Native American veterans, such as Education and HHS.
DOL Comments. We provided a draft of this report to DOL for review and comment. DOL provided a written response to this report (see app. IV). DOL agreed with our recommendations and identified actions it has taken to implement the recommendations of the 2010 report. Regarding our first recommendation, DOL noted that it is working to develop a written strategy to implement the 2010 recommendations. However, DOL noted that it must first develop a better understanding of the extent of unemployment and training gaps on reservations nationwide, among other pieces of information, to establish a more comprehensive plan of action, and cited the need for an update to the Bureau of Indian Affairs (BIA) report American Indian Population and Labor Force Report—last published in 2005—as key to developing its plan. To the extent that there are obstacles to implementing the 2010 recommendations, such as incomplete data, it is important that DOL proceed with developing a written strategy that explicitly identifies such obstacles and proposes ways to mitigate them. 
One mitigation strategy, consistent with our findings and recommendations, could be to further leverage collaborative relationships with other federal agencies. Regarding our recommendation to identify and disseminate lessons learned and promising practices from its own and other agencies’ efforts, DOL agreed to review efforts by the JVSG state grantees and their DVOPs and LVERs who serve Native American veterans. DOL noted that its Division of Indian and Native American Programs (DINAP), located within the Employment and Training Administration (ETA), already reviews employment and training grants related to economic development for application to DINAP’s WIA Section 166 grantees, and will continue to do so. DOL added that it held national conferences for the WIA Section 166 grantees, which provided a platform for sharing best practices. While these actions are useful, in order to fully leverage the department’s efforts, dissemination of lessons learned and promising practices should extend beyond these grantees to the JVSG program and the DVOPs and LVERs whose efforts are supported by the JVSG grants. For example, information about ETA grants that include Native American veterans could help DVOPs and LVERs learn about promising practices and inform the Native American veterans they serve about additional training opportunities. Finally, another way to disseminate lessons learned and promising practices is through technical assistance. However, a DINAP official acknowledged, as we indicate in this report, that no technical assistance has been developed to date in connection with its efforts to review ETA grants for lessons learned. Regarding our recommendation to expand collaboration with other agencies, DOL agreed and noted that collaboration with other agencies has been effective in addressing the needs of Native American veterans living on tribal land. 
DOL also noted that it will propose that the ACVETEO establish a subcommittee to advise its members on issues and services related to Native American veterans on tribal lands. VA and BIA Comments. We provided a draft of this report to VA and BIA for review and comment. Both provided technical comments, which we incorporated, as appropriate. VA generally agreed with our findings. VA noted that under the auspices of a Memorandum of Understanding between VA and the Department of Health and Human Services’ Indian Health Service signed on October 1, 2010, there are shared opportunities for coordination, collaboration, and resource-sharing for workforce development, and cited six action steps reported in FY 2013. BIA stated that it was an outstanding report. Tribal Groups. Additionally, we provided selected tribes, with whom we met during site visits, with a draft of pertinent excerpts and incorporated their technical comments, as appropriate. As agreed with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to interested congressional committees; the Secretary of Labor; the Secretary of Veterans Affairs; the Assistant Secretary-Indian Affairs; and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. Please contact me at (202) 512-7215 or sherrilla@gao.gov if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 
Appendix I: Groups Interviewed
Department of Labor (DOL), Veterans’ Employment and Training Service, including senior agency officials, and State Directors for Veterans’ Employment and Training (DVETs) from Arizona, New Mexico, North Carolina, Alaska, Hawaii, Washington, North Dakota, and Montana
Department of Labor (DOL), Division of Indian and Native American Programs (DINAP)
Department of Veterans Affairs (VA), Office of Tribal Government Relations
Department of the Interior, Bureau of Indian Affairs (BIA)
Small Business Administration (SBA)
Defense Manpower Data Center (DMDC)
State veterans coordinators from Arizona, New Mexico, North Carolina, Alaska, Hawaii, North Dakota, and Montana
Disabled Veterans’ Outreach Program specialists (DVOPs) and Local Veterans’ Employment Representatives (LVERs) from Arizona, New Mexico, North Carolina, Alaska, Hawaii, and Montana
Zuni Tribe of the Zuni Reservation, New Mexico
Navajo Nation, Arizona, New Mexico, and Utah
Eastern Band of Cherokee Indians
Confederated Salish and Kootenai Tribes of the Flathead Reservation
Spirit Lake Tribe, North Dakota
White Earth Band of the Minnesota Chippewa Tribe, Minnesota
Papa Alo Lokahi (Uncles and Aunties initiative)
Recommendation Summary. The 2010 report found that there was a need for increased collaboration across federal and state agencies with tribal governments in improving employment for Native American veterans. The report suggested that agencies that address the employment needs of Native American veterans, such as DOI and VA, could work with DOL to help address barriers that Native American veterans often face in accessing and maintaining meaningful employment. Closer collaboration between these agencies could boost focus on the veterans’ needs and help to leverage agency resources to provide more coordinated service delivery. 
DOL Recommendation Status. DOL is in the early stages of implementing the report’s recommendations related to improving collaboration with federal and state agencies and tribal governments. In October and November of 2012, a DOL official conducted several listening sessions with tribal leaders in Arizona and with a tribal workforce administrator in California to learn more about the needs of Native American veterans. In preparation for conducting its listening sessions, DOL Veterans’ Employment and Training Service (VETS) staff met with officials from other organizations within DOL and other agencies that serve Native American veterans. VETS staff received guidance from DOL’s Division of Indian and Native American Programs, VA’s Office of Tribal Government Relations, and the Department of the Interior’s (DOI) Bureau of Indian Affairs (BIA) to identify potential sites and to inform how DOL should structure its listening sessions. One DOL official told us of preliminary plans to meet quarterly with both VA and BIA to discuss issues related to employing Native American veterans. The report also noted that offering services in true partnership with tribal governments brings local knowledge, local relationships, and cultural understanding. To better serve Native American veterans, the report recommended the following specific actions: leverage agencies’ resources; and partner with tribal governments. Possible Expansion of DOL Efforts. There may be additional opportunities for DOL to expand its proposed approach to improving collaboration with other agencies and tribal governments. At the federal level, DOL has reached out to initiate collaboration with VA and BIA, but it currently has no plans to initiate collaboration with other agencies that offer employment and training and other support services to Native American veterans, such as Education, HHS, SBA, or DOD. 
For example, our site visits indicated that some Native American veterans received employment services from Education’s Vocational Rehabilitation Services for American Indians with Disabilities program. DOL can consider partnering with this program to jointly serve Native American veterans. At the tribal level, DOL’s efforts to reach out to selected tribes have focused on tribal leadership to date, but have not included tribal veterans agencies. At the state level, there have also been efforts to collaborate with other agencies and tribal governments. However, to date, DOL has not made efforts to identify promising practices to improve service delivery and disseminate them to other state grantees. For example, during our site visits we learned that some DVOPs and LVERs had made their own efforts to leverage agency resources and partner with leaders in tribal communities: In North Carolina, a LVER established relationships with the Eastern Band of Cherokee Indians’ veterans’ service organization to begin to bring services to veterans in the community. In Montana, a DVOP established a partnership with the mobile VA Vet Center to provide both health and employment services to Native American veterans in remote areas (see fig.: JVSG staff in Montana visit the Crow Agency using a VA Vet Center mobile unit, May 7, 2013). During our site visits, we also brought together tribal workforce and veterans’ agencies, and these meetings often served to initiate discussions for future collaboration between them in specific areas, such as the use of incentives for employers to hire veterans and assistance in preparing resumes that explain veterans’ military backgrounds and skills. 
However, DOL has issued no guidance or technical assistance to its Native American WIA Section 166 grantees to encourage collaboration within tribal governments or to help them develop their own outreach plans. DOL officials told us they expect to provide additional guidance in this area beginning in 2014. Alaska, New Mexico, and North Dakota offer cultural competency training to staff serving veterans, which some JVSG staff said was beneficial in their efforts to serve Native American veterans. In addition, some Native American veterans emphasized the importance to them of culturally sensitive services. DOL Recommendation Status. The 2010 report recommended that DOL create an advisory committee subgroup on Native American veterans’ employment issues. DOL officials told us that although VETS may offer proposals to the committee, the ACVETEO operates independently, so its members decide whether to establish a new subgroup. According to agency officials, DOL is currently in the process of developing such a proposal to present to the ACVETEO to establish an advisory committee subgroup on Native American veterans. In addition, DOL is considering appointing a representative from the Native American veterans’ community to participate on the DOL ACVETEO. However, DOL has not specifically identified whom it might appoint for this position. Currently, the ACVETEO has members from VA, DOL, DOD, and SBA, but does not have members from other agencies that provide employment services to Native American veterans, like BIA and HHS. Possible Expansion of DOL Efforts. Our past work has stated that federal agencies have used a variety of mechanisms to implement interagency collaborative efforts, including establishing interagency work groups. In addition to including Native American veterans on the ACVETEO, DOL has the opportunity to appoint qualified individuals from other agencies that serve Native American veterans, such as BIA, HHS, and Education. 
Members from other agencies that also support employment and training services on tribal land could provide the opportunity for DOL to coordinate efforts, share knowledge, and leverage agency resources to provide services to Native American veterans living on tribal land. The 2010 report recommended that the subgroup meet periodically and include representatives from VA, DOD, DOL, DOI, and other departments, as appropriate. As DOL develops its proposal to create an ACVETEO subgroup, it has the opportunity to encourage the committee to form the group in a way that enhances and sustains its collaborative efforts. Our past work suggests that to establish effective work groups or other mechanisms for collaboration, agencies should: define and articulate a common outcome; establish mutually reinforcing or joint strategies; identify and address needs by leveraging resources; agree on roles and responsibilities; establish compatible policies, procedures, and other means to operate across agency boundaries; develop mechanisms to monitor, evaluate, and report on results; reinforce agency accountability for collaborative efforts through agency plans and reports; and reinforce individual accountability for collaborative efforts through performance management systems. Recommendation Summary. The 2010 report recommended conducting a needs assessment by reviewing existing federal and state programs and services available to Native American veterans, in order to identify unmet needs and eliminate inefficiencies with the aim of improving service delivery. DOL Recommendation Status. DOL has no plans to conduct a formal needs assessment of the employment and training needs of Native American veterans. The cost and complexity of such a project would likely be prohibitive, according to a DOL official. However, regarding the recommendation to estimate the number of Native American veterans on tribal land, a DOL official told us that the agency is in the early stages of identifying potential data sources. 
Specifically, DOL has identified a potential source for data within DOD that provides race and ethnicity as well as address information for returning veterans that could help DVOPs and LVERs target their visits to tribal land, but its plans to acquire these data are still being negotiated and could take time to finalize. In the interim, DOL has identified data from VA that, while not comprehensive, might in the short term provide an estimate of the number of Native American veterans on tribal land. Regarding outcome data, DOL does not collect data on veterans served by the JVSG program by race or ethnicity. However, according to a VETS official, DOL is planning a comprehensive review of its approach to tracking veterans’ data. Although DOL does collect data on Native American veterans served by its WIA programs, it does not track those served on tribal land. The 2010 report also recommended helping veterans transition to civilian life by providing them with a job placement and training plan and with a list of available resources. With regard to the recommendation on job placement and training plans, currently, all servicemembers develop such Individual Transition Plans through mandatory participation in the Transition Assistance Program (TAP), which is currently being revised. However, although DOL provides information through TAP that could help veterans returning to their communities, such as how to access DVOPs and LVERs at American Job Centers, as of July 2013, due to changes in agency priorities, DOL had no plans to include contact information for WIA Section 166 grantees, which could help Native American veterans returning to tribal land. According to a DOL official, providing this level of information during TAP sessions, which are applicable to all veterans, would be impractical. Additionally, as of July 2013, a DOL official acknowledged that there are no specific changes to TAP that focus on Native American veterans. 
However, DOL told us that there will be overarching changes to the TAP program that will aid all transitioning servicemembers. Regarding service delivery, DOL requested and received information in November 2012 about DVOPs’ and LVERs’ services to Native American veterans, but to date, has not fully reviewed this information to identify promising practices that could improve services for these veterans. Possible Expansion of DOL Efforts (cont.). Furthermore, both tribal administrators and officials at the state level identified various data sources that they used to estimate the number of veterans living on tribal land, including 2002 Census data and tribal veterans and health agencies’ data. Such data, however, may not reflect the need for employment and training services. For example, tribal health agency data are likely to reflect Native American veterans’ needs for medical care, not necessarily for employment and training services, and not necessarily the needs of recently separated veterans. In the absence of timely data on the population of Native American veterans who need employment and training services, state agencies and JVSG staff must try to gauge and meet needs in other ways. For example, in North Dakota, the state workforce agency provides pamphlets on JVSG services to the state veterans’ agency for inclusion in welcome packets for returning veterans. In another state, a LVER who has recently begun working with one tribe told us that he plans to gauge demand for his services based on attendance when he visits the tribe. DOL has an opportunity to develop or obtain better data on Native American veterans returning to tribal land by working with other federal agencies. Specifically, according to a DOD official, DOD data on separating servicemembers are shared with VA, which modifies the data for its own purposes and then shares the data with state veterans’ agencies. 
According to this official, the data include fields for ethnicity, including Native American ethnicities. As a result, the data that state veterans agencies receive from VA may be helpful in identifying recently separated Native American veterans. However, until DOL reaches a data-sharing agreement with either DOD or VA, better data will not be available to state workforce agencies or the DVOPs and LVERs they employ. Recognizing the potential benefits, the DVET for Montana said he is seeking access to these data from the state’s veterans’ agency through a proposed Memorandum of Understanding. DOL also has an opportunity to identify and disseminate lessons learned about states’ efforts to provide short transition briefings tailored to the needs of Native American veterans. DVOPs and LVERs have identified opportunities to deliver such customized transition briefings for Native American veterans in at least two states. In New Mexico and North Dakota, DVOPs and LVERs have provided abbreviated transition briefings to tribal communities. In our site visits, when we asked about the potential value of such customized transition briefings, some Native American veterans expressed interest in such an approach. Additionally, in one state, DVOPs and LVERs try to reach Native American veterans through welcoming ceremonies, according to that state’s DVET. DOL Recommendation Status. The 2010 report recommended that DOL launch a communications program focused on outreach to Native American veterans on tribal land by establishing a public information campaign to share information about VETS programs through the use of the web, direct mail, newspapers, and social media. However, DOL officials told us that they have not initiated such a campaign due to budget constraints, but are currently exploring low-cost options for conducting outreach. 
Although DOL officials report participating in tribal conferences and tribal events at the state level, many of the tribal veterans and workforce program officials and veterans we spoke with were unaware or had limited awareness of services offered by the JVSG program. Possible Expansion of DOL Efforts. There may be additional opportunities for DOL to expand its proposed approach to outreach without the use of a large-scale public information campaign. DOL has an opportunity to adopt a more comprehensive strategy and expand outreach efforts that leverage existing state and local level efforts. Expanded efforts could help to raise awareness of VETS programs for Native American veterans on tribal land while minimizing additional costs. DOL may have an opportunity to learn from the activities of DVOPs and LVERs in certain states who have conducted targeted outreach. For example, in one state, a LVER shared information about services available to veterans through an article in a local newspaper. Several states we visited make employment and other resources for veterans and employers accessible through a website. For example, according to state officials, New Mexico provides access for jobseekers, including those living in remote areas, by creating a virtual one-stop that offers certain services online. The 2010 report also recommended that DOL facilitate the sharing of information and data on available resources among tribal leadership and other government entities engaged in workforce development programs on tribal lands throughout the country. Some tribal officials and Native American veterans told us that some veterans prefer in-person outreach. To this end, veterans outreach staff in some states have been leveraging libraries and tribal colleges to conduct outreach to tribal veterans. 
Conversely, other Native American veterans we spoke with suggested that in order to reach younger veterans, DOL should consider employing social media to raise awareness of employment and training services among veterans on tribal land. DOL has an opportunity to encourage states to enhance the outreach efforts of DVOPs and LVERs by leveraging VA’s Tribal Veterans Representatives (TVRs), volunteers based on tribal land who are trained by VA to provide veterans with information about available benefits. For example, Alaska state officials said that DVOPs and LVERs coordinated with TVRs to schedule outreach visits. As a result, officials reported improved participation in outreach events and better relations with Alaska Native communities. A similar volunteer-based initiative exists in Hawaii. DOL Recommendation Status. DOL has not pursued the report’s proposal to explore the possibility of DVOPs and LVERs living and working on tribal land, citing the flexibility that states already have in the JVSG program under current law. Specifically, states now employ DVOPs and LVERs as they deem appropriate and efficient, subject to DOL’s approval. Further, DOL guidance emphasizes that states have flexibility to assign and place DVOPs and LVERs to best address each state’s unique needs. The 2010 report suggested that DVOPs and LVERs live and work on tribal land and included the proposal that they focus their work on the tribal level, rather than the state level. However, JVSG grant amounts appear unlikely to support a full-time focus on tribal land, even for the largest tribes, and such an approach appears impractical for smaller communities. For example, in Alaska, JVSG funds support 4.5 DVOP and LVER staff, who are collectively responsible for outreach to 225 Alaska Native communities (an average of 50 each). In Alaska, each DVOP and LVER is assigned a region of the state, and they told us that they try to visit their assigned region at least twice a year. 
As in all other states, they also must balance these efforts with their efforts to serve other veterans because, under current law, the JVSG grants flow to states to support activities on behalf of all eligible veterans in each state. A DOL official described the challenge as making optimum use of limited resources. Nevertheless, with regard to flexibility in general, a DOL official emphasized the importance of recognizing the substantial diversity among Native American communities. With regard to additional flexibilities that could help meet the needs of Native American veterans living on tribal lands, DOL has indicated that it is open to identifying any such opportunities that exist under current law, although to date it has taken little action in this area. The report's recommendations in this area were to explore the possibility of assigning DVOPs and LVERs to live and work on tribal land, and to identify available flexibilities to meet the needs of Native American veterans living on tribal lands, including any additional flexibilities identified through consultation with Native American organizations and tribal councils.

Possible Expansion of DOL Efforts

During our site visits, few state-level officials and tribal leaders identified additional JVSG program flexibility as a primary concern. However, officials and veterans from several tribes suggested that JVSG grant funds go to tribes as well as states, which is not possible under current law. Additionally, two tribes favored establishing funding that would be specifically reserved for veterans within DOL's WIA Section 166 program, one advocated broadened eligibility to allow more veterans to participate in that program, and two suggested streamlining application procedures for DOL's programs to raise participation. Some of these proposed changes might require statutory changes.
However, a few state-level officials, tribal leaders and administrators, and Native American veterans also expressed the view that a one-size-fits-all approach would generally be ill-advised, given state and tribal differences. Nevertheless, DOL has an opportunity to consider ways to use existing program flexibility to better serve veterans and encourage better outreach and service delivery. Specifically, DOL is considering plans to ask states to explicitly describe their efforts to serve Native American veterans in their JVSG 5-year grant operating plans, which could help ensure that states consider the needs of Native American veterans when preparing their plans. However, to date, DOL has not established a time frame for completion.

Report Summary

The 2010 report identified the need for economic development on tribal land as a major problem and noted that, in the absence of job opportunities, providing employment and training services alone may simply frustrate jobseekers. While noting that economic development programs are outside VETS's authority, the report stated that economic development should be a major focus across federal programs addressing employment concerns for Native American veterans.

DOL Status

DOL officials told us they had taken no action on this issue and emphasized that, with regard to economic development, DOL's focus is on the role that skills can play.

Possible Expansion of DOL Efforts

DOL has an opportunity to better understand and support economic development on tribal land by reviewing its existing grants and guidance on this topic and identifying lessons learned to disseminate to JVSG state and WIA Section 166 grantees. DOL, through ETA, has taken steps to boost economic development generally by aligning some of its discretionary grants with local demand for skills and by issuing guidance on economic development and entrepreneurship.
In the past, DOL has awarded grants that have aligned employment and training with economic development. ETA has awarded some of its recent grants to Native American entities, and some grants included veterans (see appendix III). DOL has an opportunity to review such grants for lessons learned on how to incorporate economic development principles into existing grants for tribal veterans. For example, the Gila River Indian Community in Arizona was awarded a grant to develop career pathways, a sequenced training approach that supports career progression, including a career pathway for veterans; other grantees received grants to share success stories and best practices and provide benchmark information to other tribes, and to sponsor and fund training and education for business start-ups and entrepreneurs on tribal land; and United Tribes Technical College in North Dakota was awarded a grant to train workers, foster business development, and promote entrepreneurship in 19 tribes across 3 states, under a collaborative program that also involved the Small Business Administration (SBA) and the Department of Commerce. Additionally, in 2007, ETA issued guidance to highlight allowable and prohibited activities under WIA related to economic development, and in 2010, it issued guidance on the use of WIA funds to support entrepreneurship training. Officials from VETS and ETA's Division of Indian and Native American Programs (DINAP) told us that they do not automatically receive notification of ETA grants to Native American entities or of ETA guidance not addressed to their grantees. During our site visits, many officials and veterans, as well as others, identified economic development as a need, and some agreed that entrepreneurship could be a viable option for some Native American veterans. For example: in Montana, the Salish Kootenai College offers an entrepreneurship program; in North Carolina, Cherokee entrepreneurs can seek funding and coaching from the Sequoyah Fund, a Community Development Financial Institution;
In Hawaii, the Vets to Farmers initiative, partly funded by a DOL grant, trained some Native Hawaiian veterans in sustainable agriculture, provided marketing assistance, and continues to help them sell their produce to coastal resorts (see fig.: Native Hawaiian veterans and their families built greenhouses to grow crops for supplemental income); in addition, initiative leaders told us that participants could use VA education benefits to cover their training costs; and in Alaska, the Cook Inlet Tribal Council refers those interested in entrepreneurship to the SBA and other resources.

The purposes of the grant projects shown in appendix III were as follows: to increase opportunities for advancement and employment, create partnerships, engage employers, and develop career pathways in hospitality, construction, health care, and other occupations; to meet the region's need for qualified environmental technicians and other professionals, accelerate business development opportunities for Native Americans, and reduce unemployment in economically distressed areas; to provide green industry and energy training in construction and sustainable manufacturing, to provide support services, and to address Native Americans' unique circumstances; to build on training models with proven success, address retention issues, and create a career ladder in construction, industrial maintenance, and health care; to increase the supply of workers with energy-efficiency skills to support energy-efficient end-user technology and the geothermal, hydroelectric, wind turbine, and biomass industries; to provide targeted training and job placement services for green energy industries; and to develop green industries, provide training in green skills, such as home energy rating, construction, biofuels, and sustainable agriculture, and attract private investment in energy. The grantee is developing a career pathway model specifically for veterans, according to a Gila River official.
Veterans were included in the target populations to be served by these grant projects. For the other grants shown, veterans were eligible to participate, according to grantee officials. The grant project summary identified Native American entities as partners. Grant funds supported an entrepreneurship component in which Native Hawaiian veterans are participating. In addition to the contact named above, Brett Fallavollita (Assistant Director), Michelle Bracy, and Christopher Morehouse made key contributions to this report. In addition, key support was provided by James Bennett, Sarah Cornetto, Holly Dye, Kathy Leslie, and Jean McSween. VA and IHS: Further Action Needed to Collaborate on Providing Health Care to Native American Veterans. GAO-13-354. Washington, D.C.: April 26, 2013. Entrepreneurial Assistance: Opportunities Exist to Improve Programs’ Collaboration, Data-Tracking, and Performance Management. GAO-13-452T. Washington, D.C.: March 20, 2013. Veterans’ Employment and Training: Better Targeting, Coordinating, and Reporting Needed to Enhance Program Effectiveness. GAO-13-29. Washington, D.C.: December 13, 2012. Regional Alaska Native Corporations: Status 40 Years after Establishment, and Future Considerations. GAO-13-121. Washington, D.C.: December 13, 2012. Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms. GAO-12-1022. Washington, D.C.: September 27, 2012. Indian Issues: Federal Funding for Non-Federally Recognized Tribes. GAO-12-348. Washington, D.C.: April 12, 2012. Temporary Assistance for Needy Families: HHS Needs to Improve Guidance and Monitoring of Tribal Programs. GAO-11-758. Washington, D.C.: September 15, 2011. Indian Issues: Observations on Some Unique Factors that May Affect Economic Activity on Tribal Lands. GAO-11-543T. Washington, D.C.: April 7, 2011. Veterans’ Employment and Training Service: Labor Could Improve Information on Reemployment Services, Outcomes, and Program Impact. GAO-07-594.
Washington, D.C.: May 24, 2007. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003.
The unemployment rate for all veterans has risen since the beginning of the economic downturn, but the unemployment rate for Native Americans living on tribal land has been higher. In addition, tribal land is frequently located in remote areas characterized by limited economic development, which can make finding a job challenging. DOL administers several grant programs that provide employment assistance to all eligible veterans, including Native Americans. In response to a statutory mandate, in October 2010, DOL submitted a report to Congress recommending that the agency take actions to increase employment and training opportunities for Native American veterans living on tribal lands. GAO assessed (1) the status of DOL efforts to implement the report's recommendations and (2) whether and how DOL can improve on its efforts to implement the report's recommendations. GAO reviewed federal laws, regulations, and DOL guidance; interviewed DOL, state, and tribal officials as well as Native American veterans; and conducted site visits to tribal lands in four U.S. regions. The Department of Labor (DOL) is in the early stages of implementing several of the 2010 report's recommendations, but implementation of the remaining recommendations has not occurred. The agency has begun to take steps to respond to three of the report's six recommendations: improve interagency collaboration, create an advisory subcommittee for Native American veterans, and conduct a needs assessment. To increase collaboration, DOL has conducted several listening sessions with tribal leadership and begun collaborating with agencies that serve veterans, including the Department of Veterans Affairs (VA) and the Department of the Interior's Bureau of Indian Affairs, to learn more about how to better serve Native American veterans. 
With regard to an advisory subcommittee, DOL is developing a proposal to establish a subgroup for Native American veterans on its existing veterans' employment and training advisory committee, and is considering appointing a representative from the Native American veterans' community to serve on that committee. To assess need, DOL has identified a potential source for data within the Department of Defense (DOD) that provides race, ethnicity, and address information for returning veterans that could help better target visits to tribal land, but its plans to acquire these data are still being negotiated and could take time to finalize. However, DOL has taken little to no action on recommendations to increase outreach, pursue program flexibility, and boost economic development. DOL officials told us that leadership transitions and budget challenges have contributed to their limited response to date. In addition, since delivering the report in 2010, DOL has not developed a strategy that specifically establishes roles and responsibilities, goals, costs, and time frames for implementation of the report's recommendations. DOL could build on its efforts to implement the report's recommendations, even in a constrained budget environment. For example, DOL could expand the collaboration it has begun with other agencies that serve Native American veterans on tribal land, such as the Department of Education (Education). GAO site visits indicated that some Native American veterans received employment services from a vocational rehabilitation program administered by Education. DOL could consider partnering with this program. DOL could also identify and disseminate lessons learned from states that have collaborated with other agencies and tribal governments.
For example, a DOL program in Montana has leveraged other agency resources, such as collaborating with the VA Vet Center to provide both health and employment services to Native American veterans in remote tribal areas using mobile units, an approach that may be applicable in other states. To boost economic development, DOL could review information from its existing grants and guidance on economic development to disseminate to DOL grantees that serve Native American veterans. GAO recommends that DOL develop a written strategy to implement the 2010 recommendations that incorporates roles and responsibilities, goals, costs, and time frames. DOL should also expand collaboration with other agencies to leverage resources and identify and disseminate lessons learned from prior relevant efforts. DOL agreed with GAO's recommendations.
As part of our audit of the fiscal years 2014 and 2013 CFS, we considered the federal government’s financial reporting procedures and related internal control. Also, we determined the status of corrective actions Treasury and OMB have taken to address open recommendations relating to their processes to prepare the CFS that were detailed in our previous reports. A full discussion of our scope and methodology is included in our February 2015 report on our audit of the fiscal years 2014 and 2013 CFS. We have communicated each of the control deficiencies discussed in this report to your staff. We performed our audit of the fiscal years 2014 and 2013 CFS in accordance with U.S. generally accepted government auditing standards. We believe that our audit provided a reasonable basis for our conclusions in this report. During our audit of the fiscal year 2014 CFS, we identified three new internal control deficiencies in Treasury’s processes used to prepare the CFS. Specifically, we found that Treasury did not have (1) a sufficient process to work with key federal entities prior to the end of the fiscal year to reasonably assure that new or substantially revised federal accounting standards were consistently implemented by the entities to allow appropriate consolidation at the government-wide level, (2) procedures for determining whether entities and transactions for which it does not have audit assurance are significant in the aggregate to the CFS, and (3) sufficient procedures for (a) identifying significant increases or decreases in all CFS line items and disclosures from prior fiscal year reported amounts and (b) understanding the reasons for such changes. Treasury did not have a sufficient process to work with key federal entities prior to the end of the fiscal year to reasonably assure that new or substantially revised federal accounting standards were consistently implemented by the entities to allow appropriate consolidation at the government-wide level.
For example, for the Financial Report of the United States Government (Financial Report), the Federal Accounting Standards Advisory Board’s (FASAB) Technical Bulletin 2011-1, Accounting for Federal Natural Resources Other Than Oil and Gas, requires a concise statement as part of required supplementary information (RSI) explaining the nature and valuation of federal natural resources. The statement is to encompass significant federal natural resources other than oil and gas under management by the federal government. For fiscal year 2014, only one federal entity (the Department of the Interior) provided to Treasury a discussion of significant federal natural resources under entity management, specifically related to coal leases. As a result, the RSI for federal natural resources other than oil and gas included in the fiscal year 2014 Financial Report reported only coal leases from the Department of the Interior. The RSI did not describe coal resources that are not currently under lease or certain other natural resources owned by the federal government. We communicated this matter to Treasury and OMB officials who revised the Financial Report before issuance, as appropriate. We found that Treasury has a process to work with federal entities when implementing new and revised federal accounting standards. Specifically, Treasury presents the standards for discussion at regularly scheduled monthly meetings—called Central Reporting Team meetings—that include financial reporting representatives from federal entities. Treasury also updates the Treasury Financial Manual, its financial reporting guidance for federal entities, to include new reporting requirements. This process was followed in implementing Technical Bulletin 2011-1 in fiscal year 2014. However, this process is not sufficient to reasonably assure that new or substantially revised federal accounting standards are consistently implemented by the entities. 
In a prior year, after we identified inconsistencies in the information reported to Treasury related to the implementation of FASAB Statement of Federal Financial Accounting Standards (SFFAS) No. 33, Treasury established a working group involving the key federal entities affected by the standard. The group met several times to discuss the standard and through such discussions was able to identify and resolve inconsistencies in the reporting of information for consolidation at the government-wide level. However, Treasury has not adopted a similar process for implementing subsequent standards. Federal financial statements are to be presented in accordance with applicable generally accepted accounting principles (GAAP). FASAB, the body designated as the source of GAAP for federal reporting entities, regularly issues new and revised standards, including two new federal accounting standards that are to be implemented in fiscal year 2015. Without a sufficient process to work with key federal entities to reasonably assure that new or substantially revised federal accounting standards are consistently implemented by the entities, there is an increased risk of misstatements in the financial statements or incomplete and inaccurate disclosure of information within the Financial Report. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement a sufficient process to work with key federal entities prior to the end of the fiscal year to reasonably assure that new or substantially revised federal accounting standards are consistently implemented by the responsible entities to allow appropriate consolidation at the government-wide level. Treasury did not have procedures for determining whether entities and transactions for which it does not have audit assurance are significant in the aggregate to the CFS. 
Treasury’s standard operating procedure (SOP) entitled “Significant Entities” includes procedures for identifying federal entities with activity that is material to at least one financial statement line item or note disclosure. Each federal entity identified as significant is to submit an audited closing package to Treasury. The closing package methodology is intended to link federal entities’ audited consolidated department-level financial statements to certain statements of the CFS. Chief financial officers of significant federal entities are required to verify the consistency of the closing package data with their respective entities’ audited financial statements. In addition, entity auditors are required to separately audit and report on the financial information in the closing packages. However, the SOP did not include, and therefore Treasury did not perform, procedures for determining whether entities and amounts that were not included in a closing package, and thus for which Treasury did not have audit assurance, were significant to the CFS in the aggregate before the CFS was finalized. Specifically, we found that Treasury’s SOP did not include procedures for assessing the significance of aggregate amounts for which it does not have audit assurance, including amounts related to the following: non-significant entities, which are not required to submit audited closing packages; significant entities that did not submit audited closing packages; nonmaterial line items for significant calendar year-end entities; material line items for calendar year-end entities that did not submit audited closing packages; journal vouchers processed by Treasury that were not based on the closing packages; and uncorrected misstatements identified at the consolidated level, including uncorrected misstatements submitted by the significant entities with their closing packages. 
Standards for Internal Control in the Federal Government provides that control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. The standards also provide that an entity should accurately record transactions and events—from initiation to summary records—and that control activities include procedures to achieve accurate recording of transactions and events. Without procedures for determining whether aggregate amounts for which Treasury does not have audit assurance are significant to the CFS, there is an increased risk of material misstatements in the financial statements. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement procedures for determining whether entities and transactions for which Treasury does not have audit assurance are significant in the aggregate to the CFS. Treasury did not have sufficient procedures for (1) identifying significant increases or decreases in all CFS line items and disclosures from prior fiscal year reported amounts and (2) understanding the reasons for such changes. Treasury’s SOP entitled “Preparation of the Financial Report” includes procedures for performing an overall variance analysis at the consolidated line item level for the Balance Sheet, Statement of Net Cost, Statement of Operations and Changes in Net Position, and related notes to the Balance Sheet. The variance analysis compares the amounts for the current and prior years and provides an explanation for material changes. Such variance analysis helps identify unusual trends or anomalies in the data that, if unexplained, could indicate misstatements in the data. However, the SOP did not include, and therefore Treasury did not perform, an overall variance analysis on the remaining CFS line items and disclosures where comparable amounts were presented. This includes the CFS budget statements, which consist of the Reconciliation of Net Operating Cost and Unified Budget Deficit and the Statement of Changes in Cash Balance from Unified Budget and Other Activities; non-Balance Sheet notes; RSI; Required Supplementary Stewardship Information; and Other Information. Standards for Internal Control in the Federal Government provides that control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. The standards also provide that an entity should accurately record transactions and events—from initiation to summary records—and that control activities include procedures to achieve accurate recording of transactions and events. Without sufficient procedures for identifying and understanding significant changes from prior year reported amounts, there is an increased risk of misstatements in the financial statements or incomplete and inaccurate disclosure of information within the Financial Report. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement procedures for identifying significant increases or decreases in all CFS line items and disclosures from prior fiscal year reported amounts and for understanding the reasons for such changes. At the end of the fiscal year 2013 audit, 31 recommendations from our prior reports regarding control deficiencies in the processes used to prepare the CFS were open. Treasury implemented corrective actions during fiscal year 2014 that resulted in significant progress in resolving certain of the control deficiencies addressed by our recommendations. For 7 recommendations, the corrective actions resolved the related control deficiencies, and we closed the recommendations.
While progress was made, 24 recommendations from our prior reports remained open as of February 19, 2015, the date of our report on the audit of the fiscal year 2014 CFS. Consequently, a total of 27 recommendations need to be addressed—24 remaining from prior reports and the 3 new recommendations we are making in this report. Appendix I summarizes the status as of February 19, 2015, of the 31 open recommendations from our prior reports, including the status according to Treasury and OMB, as well as our own assessment and additional comments where appropriate. Various efforts are under way to address these recommendations. We will continue to monitor Treasury’s and OMB’s progress in addressing our recommendations as part of our fiscal year 2015 CFS audit. In oral comments on a draft of this report, OMB generally concurred with the findings and recommendations of this report. In written comments on a draft of this report, which are reprinted in appendix II, Treasury concurred with our three new recommendations. Treasury also provided details on its ongoing efforts to address the material weaknesses that relate to the federal government’s processes used to prepare the CFS. To address the material weakness related to intragovernmental transactions, Treasury stated that it will continue to devote significant resources toward resolving the material weakness and, in order to resolve systemic intragovernmental differences, Treasury has requested that federal agencies provide it with root cause information and corrective action plans. Regarding the material weakness related to the compilation process, Treasury stated that it continued to implement software and processes to automate and streamline the compilation of the Financial Report, and that in fiscal year 2015, it will focus on collecting critical information from reporting entities identified as significant to the Financial Report, including entities in the legislative and judicial branches. 
In addition, Treasury noted that it is continuing its efforts to validate the material completeness of budgetary information included in the Financial Report, as well as the consistency of such information with agency reports. This report contains recommendations to the Secretary of the Treasury. The head of a federal agency is required by 31 U.S.C. § 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Please provide me with a copy of your responses. We are sending copies of this report to interested congressional committees, the Fiscal Assistant Secretary of the Treasury, and the Controller of the Office of Management and Budget’s Office of Federal Financial Management. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact me at (202) 512-3406 or simpsondb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

GAO-04-45 (results of the fiscal year 2002 audit)

02-22 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to perform an assessment to define the reporting entity, including its specific components, in conformity with the criteria issued by the Federal Accounting Standards Advisory Board (FASAB).
Key decisions made in this assessment should be documented, including the reason for including or excluding components and the basis for concluding on any issue. Particular emphasis should be placed on demonstrating that any financial information that should be included but is not included is immaterial. (Preparation material weakness) Per Treasury and OMB: Treasury developed a process to identify all reporting entities for inclusion in the Financial Report of the U.S. Government (Financial Report). The reporting entities can be found in appendix A of the Financial Report. Closed. Per GAO: Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to provide in the financial statements all the financial information relevant to the defined reporting entity, in all material respects. Such information would include, for example, the reporting entity’s assets, liabilities, and revenues. (Preparation material weakness) Per Treasury and OMB: Treasury was able to collect data from all reporting entities in the executive branch for fiscal year-end 2014. Partial data were also collected from the judicial and legislative branches, despite those entities not being statutorily required to report to Treasury. Treasury and OMB will continue to work with non-executive branch entities to collect all necessary information. 02-24 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to disclose in the financial statements all information that is necessary to inform users adequately about the reporting entity. Such disclosures should clearly describe the reporting entity and explain the reason for excluding any components that are not included in the defined reporting entity. (Preparation material weakness) Per Treasury and OMB: Treasury revamped the wording in appendix A of the Financial Report to clearly demonstrate the reporting entities included in the Financial Report. In addition, appendix A clearly discloses the reasons for excluding certain reporting entities.
Closed. Per GAO: Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to help ensure that federal agencies provide adequate information in their legal representation letters regarding the expected outcomes of the cases. (Preparation material weakness) Per Treasury and OMB: Treasury and OMB will work with the agencies to encourage the proper usage of the “unable to determine” category for the legal cases. In addition, Treasury will work to limit agency use of this category, require that additional information be reported when this category is selected, or both. 02-37 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies develop a detailed schedule of all major treaties and other international agreements that obligate the U.S. government to provide cash, goods, or services, or that create other financial arrangements that are contingent on the occurrence or nonoccurrence of future events (a starting point for compiling these data could be the State Department’s Treaties in Force). (Preparation material weakness) Per Treasury and OMB: Treasury and OMB will develop a process to leverage the existing information from agencies, where possible, and assess methods and approaches for obtaining additional information from agencies. 02-38 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS.
Specifically, these policies and procedures should require that federal agencies classify all such scheduled major treaties and other international agreements as commitments or contingencies. (Preparation material weakness)

See the status of recommendation No. 02-37.

Per GAO: Open. Until a comprehensive analysis of major treaty and other international agreement information has been performed, Treasury and OMB are precluded from determining if additional disclosure is required by generally accepted accounting principles (GAAP) in the CFS, and we are precluded from determining whether the omitted information is material. Open. See the status of recommendation No. 02-37.

02-39 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that have a reasonably possible chance of resulting in a loss or claim as a contingency. (Preparation material weakness)

See the status of recommendation No. 02-37. Open. See the status of recommendation No. 02-37.

Per Treasury and OMB: See the status of recommendation No. 02-37.

Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that are classified as commitments and that may require measurable future financial obligations. (Preparation material weakness)

Per GAO: Open.
See the status of recommendation No. 02-37.

02-41 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies take steps to prevent major treaties and other international agreements that are classified as remote from being recorded or disclosed as probable or reasonably possible in the CFS. (Preparation material weakness)

See the status of recommendation No. 02-37. Open. See the status of recommendation No. 02-37.

02-129 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to ensure that the note disclosure for stewardship responsibilities related to the risk assumed for federal insurance and guarantee programs meets the requirements of Statement of Federal Financial Accounting Standards No. 5, Accounting for Liabilities of the Federal Government, paragraph 106, which requires that when financial information pursuant to Financial Accounting Standards Board standards on federal insurance and guarantee programs conducted by government corporations is incorporated in general purpose financial reports of a larger federal reporting entity, the entity should report as required supplementary information what amounts and periodic change in those amounts would be reported under the “risk assumed” approach. (Preparation material weakness)

Treasury will continue to request this information from the agencies at interim and through year-end reporting requirements in the Treasury Financial Manual (TFM) 2-4700. In addition, Treasury will continue to participate on the FASAB Risk Assumed task force and implement any related changes corresponding to the issuance of revised or new federal accounting standards. Open.
Treasury’s reporting in this area is not complete. The CFS should include all major federal insurance programs in the risk assumed reporting and analysis. Also, since future events are uncertain, risk assumed information should include indicators of the range of uncertainty around expected estimates, including indicators of the sensitivity of the estimate to changes in major assumptions.

GAO-04-866 (results of the fiscal year 2003 audit; count: 11)

03-8 The Director of OMB should direct the Controller of OMB, in coordination with Treasury’s Fiscal Assistant Secretary, to work with the Department of Justice and certain other executive branch federal agencies to ensure that these federal agencies report or disclose relevant criminal debt information in conformity with GAAP in their financial statements and have such information subjected to audit. (Preparation material weakness)

Treasury and OMB will assess options as to what methodologies or approaches to use for obtaining the additional information needed from the agencies. Open.

03-9 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to include relevant criminal debt information in the CFS or document the specific rationale for excluding such information. (Preparation material weakness)

See the status of recommendation No. 03-8. Open.

GAO-05-407 (results of the fiscal year 2004 audit; count: 13)

04-3 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to require that Treasury employees contact and document communications with federal agencies before recording journal vouchers to change agency audited closing package data. (Preparation material weakness)

Treasury will continue to strengthen internal control procedures to ensure accurate and supported journal vouchers. Open.
04-6 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to assess the infrastructure associated with the compilation process and modify it as necessary to achieve a sound internal control environment. (Preparation material weakness)

Treasury continues to make improvements to its internal control infrastructure by updating and revising its standard operating procedures (SOP) and working to help ensure that key controls are in place at all critical areas. Open.

GAO-07-805 (results of the fiscal year 2006 audit; count: 15)

06-6 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to establish effective processes and procedures to ensure that appropriate information regarding litigation and claims is included in the government-wide legal representation letter. (Preparation material weakness)

Treasury and OMB will develop a process to leverage the existing information from agencies, where possible, and assess options for approaches to use for obtaining additional information from agencies. Open.

GAO-08-748 (results of the fiscal year 2007 audit; count: 16)

07-9 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement effective processes for monitoring and assessing the effectiveness of internal control over the processes used to prepare the CFS. (Preparation material weakness)

Treasury will start to implement an internal review, based on OMB Circular No. A-123 guidance, on the Financial Report in fiscal year 2016. Open.
07-10 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement alternative solutions to performing almost all of the compilation effort at the end of the year, including obtaining and utilizing interim financial information from federal agencies. (Preparation material weakness)

Treasury obtained and utilized interim financial information from federal agencies starting in fiscal year 2012. In fiscal year 2014, Treasury increased the number of key topics and completed subject matters to the extent possible before fiscal year-end. Closed. Open.

Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to develop and implement procedures to provide for the active involvement of key federal entity personnel with technical expertise in relatively new areas and more complex areas in the preparation and review process of the Financial Report. (Preparation material weakness)

11-09 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification” to include procedures for identifying any entities that become significant to the Financial Report during the fiscal year but were not identified as significant in the prior fiscal year. (Preparation material weakness)

Treasury has involved key federal entity personnel with technical expertise during the interim and year-end preparation and review of the Financial Report. In fiscal year 2015, Treasury will continue to work collaboratively with the agencies to increase their involvement in preparing certain key disclosures and other information. Treasury implemented a process in fiscal year 2014 to identify all significant entities in the Financial Report.
This process monitors the significant entity reporting throughout the fiscal year and is finalized before the publication date. Closed. Open.

Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification” to include procedures for obtaining audited closing packages from newly identified significant entities in the year they become significant, including timely written notification to newly identified significant entities. (Preparation material weakness)

Treasury has enhanced the SOP to include notifying the significant entities in a timely manner and explaining the required significant entity reporting. Treasury and OMB will continue to work with the identified significant entities to obtain audit coverage over the required reporting. Treasury implemented a process in fiscal year 2014 to identify all significant entities in the Financial Report. This process monitors the significant entity reporting throughout the fiscal year and is finalized before the publication date. Closed.

Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification” to include procedures for identifying any material line items for significant calendar year-end entities that become material to the CFS during the current fiscal year but were not identified as material in the analysis using prior year financial information. (Preparation material weakness)

12-02 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification”

Treasury has enhanced the SOP to notify the significant entities in a timely manner and to explain the required significant entity reporting.
Treasury and OMB will continue to work with the identified significant entities to obtain audit coverage over the required reporting. Open. Open.

Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to establish and implement effective procedures for reporting amounts in the CFS budget statements that are fully consistent with the underlying information in significant federal entities’ audited financial statements and other financial data. (Budget statements material weakness)

Treasury has established and begun implementing procedures to request agency submission of closing package data as a means of validating budget deficit data against agencies’ audited financial information. In addition, Treasury is examining the benefits of performing a complete audit on the general ledger data used for the budget statements.

12-05 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to establish and implement effective procedures for identifying and reporting all items needed to prepare the CFS budget statements. (Budget statements material weakness)

Treasury will strengthen procedures to demonstrate that all material reconciling items are included on the budget statements. Open.

GAO-14-543 (results from the fiscal year 2013 audit; count: 25)

13-01 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to include all key elements recommended by the Implementation Guide for OMB Circular A-123, Management’s Responsibility for Internal Control – Appendix A, Internal Control over Financial Reporting and fully consider the interrelationships between deficiencies in the corrective action plans.
(Preparation material weakness)

Treasury and OMB are working to develop a remediation plan to address issues that are impediments to auditability. Open. Closed.

Fiscal Assistant Secretary to develop and implement procedures to sufficiently document management’s conclusions and the basis of such conclusions regarding the accounting policies for the CFS. (Preparation material weakness)

Treasury implemented procedures in fiscal year 2014 related to documenting accounting policies. Treasury documented several accounting policies for the Financial Report in fiscal year 2014.

13-03 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to improve and implement Treasury’s procedures for verifying that staff’s preparation of the narrative within the notes to the CFS is accurate and supported by the underlying financial information of the significant component entities. (Preparation material weakness)

Treasury implemented a cross-reference guide in fiscal year 2014 to verify each line in the narrative of the Financial Report. Closed.

Starting in fiscal year 2015, Treasury will require agencies to provide detailed root cause analysis documentation and corrective action plans (including target completion dates) through TFM 2-4700. Open.

Treasury will complete the analysis of agency intragovernmental differences on a quarterly basis to determine if additional agencies should be integrated into the quarterly reconciliation process. Open.

Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to expand the scorecard process to include intragovernmental activity and balances that are currently not covered by the process or demonstrate that such information is immaterial to the CFS. (Intragovernmental material weakness)
13-06 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to establish and implement policies and procedures for accounting for and reporting all significant General Fund activity and balances, obtaining assurance on the reliability of the amounts, and reconciling the activity and balances between the General Fund and federal entities. (Intragovernmental material weakness)

Per Treasury and OMB: Treasury is working to complete the General Fund general ledger and will continue to implement processes and controls in preparation for its auditability. Treasury has expanded the reciprocal categories for General Fund activity and balances to assist with reconciling intragovernmental differences with federal agency trading partners.

Per GAO: Open. Open.

Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to establish a formalized process to require the performance of additional audit procedures specifically focused on intragovernmental activity and balances between federal entities to provide increased audit assurance over the reliability of such information. (Intragovernmental material weakness)

Treasury and OMB will assess options to increase audit assurance over the reliability of agency intragovernmental activity and balances.

Legend: CFS = consolidated financial statements of the U.S. government; OMB = Office of Management and Budget; Treasury = Department of the Treasury. The status of the recommendations listed in app. I is as of February 19, 2015, the date of our report on the audit of the fiscal year 2014 CFS.
The recommendations in our prior reports related to material weaknesses in the following areas:

Preparation: The material weakness related to the federal government’s inability to reasonably assure that the consolidated financial statements are (1) consistent with the underlying audited entities’ financial statements, (2) properly balanced, and (3) in accordance with U.S. GAAP.

Budget statements: The material weakness related to the federal government’s inability to reasonably assure that the information in the Reconciliation of Net Operating Cost and Unified Budget Deficit and the Statement of Changes in Cash Balance from Unified Budget and Other Activities is complete and consistent with the underlying information in the audited entities’ financial statements and other financial data.

Intragovernmental: The material weakness related to the federal government’s inability to adequately account for and reconcile intragovernmental activity and balances between federal entities.

The title of this SOP changed to “Significant Entities” in fiscal year 2013.
Treasury, in coordination with OMB, prepares the Financial Report of the United States Government, which contains the CFS. Since GAO's first audit of the fiscal year 1997 CFS, certain material weaknesses and other limitations on the scope of its work have prevented GAO from expressing an opinion on the accrual-based CFS. As part of the fiscal year 2014 CFS audit, GAO identified material weaknesses and other control deficiencies in the processes used to prepare the CFS. The purpose of this report is to provide (1) details on the control deficiencies GAO identified related to the processes used to prepare the CFS, along with related recommendations, and (2) the status of corrective actions Treasury and OMB have taken to address GAO's prior recommendations relating to the processes used to prepare the CFS that remained open at the end of the fiscal year 2013 audit. During its audit of the fiscal year 2014 consolidated financial statements of the U.S. government (CFS), GAO identified control deficiencies in the Department of the Treasury's (Treasury) and the Office of Management and Budget's (OMB) processes used to prepare the CFS. These control deficiencies contributed to material weaknesses in internal control over the federal government's ability to adequately account for and reconcile intragovernmental activity and balances between federal entities; reasonably assure that the consolidated financial statements are (1) consistent with the underlying audited entities' financial statements, (2) properly balanced, and (3) in accordance with U.S. generally accepted accounting principles; and reasonably assure that the information in the Reconciliation of Net Operating Cost and Unified Budget Deficit and the Statement of Changes in Cash Balance from Unified Budget and Other Activities is complete and consistent with the underlying information in the audited entities' financial statements and other financial data.
During its audit of the fiscal year 2014 CFS, GAO identified three new internal control deficiencies. Specifically, GAO found that Treasury did not have a sufficient process to work with key federal entities prior to the end of the fiscal year to reasonably assure that new or substantially revised federal accounting standards were consistently implemented by the entities to allow appropriate consolidation at the government-wide level, procedures for determining whether entities and transactions for which it does not have audit assurance are significant in the aggregate to the CFS, and sufficient procedures for (1) identifying significant increases or decreases in all CFS line items and disclosures from prior fiscal year reported amounts and (2) understanding the reasons for such changes. In addition, GAO found that various other control deficiencies identified in previous years' audits with respect to the processes used to prepare the CFS continued to exist. Specifically, 24 of the 31 recommendations from GAO's prior reports regarding control deficiencies in the processes used to prepare the CFS remained open as of February 19, 2015, the date of GAO's report on its audit of the fiscal year 2014 CFS. GAO will continue to monitor the status of corrective actions taken to address the 3 new recommendations made in this report as well as the 24 open recommendations from prior years as part of its fiscal year 2015 CFS audit. GAO is making three new recommendations to Treasury to address the control deficiencies identified during the fiscal year 2014 CFS audit. In commenting on GAO's draft report, Treasury and OMB generally concurred with GAO's recommendations.
Various factors challenge U.S. efforts to ensure proper management and oversight of U.S. development efforts in Afghanistan. Among the most noteworthy have been the “high-threat” working environment U.S. personnel and others face in Afghanistan, the difficulties in preserving institutional knowledge due in part to a high rate of staff turnover, and the Afghan government’s lack of capacity and corruption challenges. As we have previously reported, Afghanistan has experienced annual increases in the level of enemy-initiated attacks. Although the pattern of enemy-initiated attacks remains seasonal, generally peaking from June through September each year and then declining during the winter months, the annual “peak” (high point) and “trough” (low point) for each year have surpassed the peak and trough, respectively, for the preceding year since September 2005. This includes a rise in attacks against coalition forces and civilians, as well as Afghan National Security Forces. The high-threat security environment has challenged USAID’s and others’ ability to implement assistance programs in Afghanistan, increasing implementation times and costs for projects in nonsecure areas. For example, we found during our review of the U.S. road reconstruction efforts that a key road project to the Kajaki dam was terminated, after USAID had spent about $5 million, because attacks prevented contractors from working on the project. In addition, U.S. officials cited poor security as having caused delays, disruptions, and even abandonment of certain reconstruction projects. For example, a project to provide Afghan women jobs in a tailoring business in southwest Afghanistan failed, in part, because of the threat against the female employees. The high-threat security environment has also limited the movement and ability of U.S. personnel to directly monitor projects.
USAID has specifically cited the security environment in Afghanistan as a severe impediment to its ability to directly monitor projects, noting that USAID officials are generally required to travel with armored vehicles and armed escorts to visit projects in much of the country. USAID officials stated that their ability to arrange project visits can become restricted if military forces cannot provide the necessary vehicles or escorts because of other priorities. In 2009, USAID documented site visits for two of the eight programs included in our review (see fig. 1). We have experienced similar restrictions to travel beyond the embassy compound during our visits to Afghanistan. In the Mission’s 2008 and 2009 Federal Managers’ Financial Integrity Act of 1982 Annual Certifications, the Mission reported its efforts to monitor project implementation in Afghanistan as a significant deficiency. These reports raised concerns that designated USAID staff are “prevented from monitoring project implementation in an adequate manner with the frequency required” and noted that there is a high degree of potential for fraud, waste, and mismanagement of Mission resources. USAID further noted that the deficiency in USAID’s efforts to monitor projects will remain unresolved until the security situation in Afghanistan improves and stabilizes. The reports identified several actions to address the limitations to monitor project implementation, including, among others: placement of more staff in the field; use of Afghan staff—who have greater mobility than expatriate staff—to monitor projects; hiring of a contractor to monitor the implementation of construction projects and conduct regular site visits; and collection of implementing partner videos or photographs—including aerial photographs. Preserving institutional knowledge is vital to ensuring that new Mission personnel are able to effectively manage and build on USAID assistance efforts.
We found, however, during our review of USAID’s road reconstruction efforts in 2008 and, most recently, our review of USAID’s agricultural development program that USAID had not taken steps to mitigate challenges to maintaining institutional knowledge. USAID did not consistently document decisions made. For example, staff working in Afghanistan had no documented assessments for modifications to the largest USAID-funded United Nations Office for Project Services (UNOPS) project in Afghanistan—Rehabilitation of Secondary Roads—even though these modifications increased the scope and budget of the program to more than ten times the original amount. Furthermore, USAID and other U.S. agencies in Afghanistan lack a sufficient number of acquisition and oversight personnel with experience working in contingency operations. This problem is exacerbated by the lack of mechanisms for retaining and sharing institutional knowledge during transitions of USAID personnel and the rate at which USAID staff turn over, which USAID acknowledged as hampering program design and implementation. In addition, the State Department Office of Inspector General noted in its February 2010 inspection of the U.S. Embassy to Afghanistan and its staff that 1-year assignments, coupled with multiple rest-and-recuperation breaks, limited the development of expertise, contributed to a lack of continuity, and required a higher number of personnel to achieve strategic goals. The USAID monitoring officials for the eight agricultural programs we focused on during our review of USAID’s agricultural development efforts in Afghanistan were in place for an average of 7.5 months (see table 1). Moreover, the length of time that a monitoring official was in place has declined. The two most recently initiated agricultural programs have had monitoring officials in place for an average of only 3 months each.
USAID officials noted that the effectiveness of passing information from one monitoring official to another depends on how well the current official has maintained his or her files and what guidance, if any, is left for the successor. USAID officials noted that a lack of documentation and knowledge transfer may have contributed to the loss of institutional knowledge. We reported in April 2010 that USAID used contractors to help administer its contracts and grants in Afghanistan, in part to address frequent rotations of government personnel and security and logistical concerns. Functions performed by these contractors included on-site monitoring of other contractors’ activities and awarding and administering grants. While relying on contractors to perform such functions can provide benefits, we found that USAID did not always fully address related risks. For example, USAID did not always include a contract clause required by agency policy to address potential conflicts of interest, and USAID contracting officials generally did not ensure enhanced oversight in accordance with federal regulations for situations in which contractors provided services that closely supported inherently governmental functions. USAID has increasingly included and emphasized capacity building among its programs to address the government of Afghanistan’s lack of capacity to sustain and maintain many of the programs and projects put in place by donors. In 2009, USAID rated the capability of 14 of 19 Afghan ministries and institutions it works with as 1 or 2 on a scale of 5, with 1 representing the need for substantial assistance across all areas and 5 representing the ability to perform without assistance. The Ministry of Agriculture, Irrigation, and Livestock was given a rating of 2—needing technical assistance to perform all but routine functions—while the Ministry for Rural Rehabilitation and Development was given a rating of 4—needing little technical assistance.
Although USAID has noted overall improvement among the ministries and institutions in recent years, none was given a rating of 5. USAID has undertaken some steps to address the Afghan ministries’ limited capacity and corruption in Afghanistan by including a capacity-building component in its more recent contracts. In 2009, the U.S. government further emphasized capacity building by pursuing a policy of Afghan-led development, or “Afghanization,” to ensure that Afghans lead efforts to secure and develop their country. At the national level, the United States plans to channel more of its assistance through the Afghan government’s core budget. At the field level, the United States plans to shift assistance to smaller, more flexible, and faster contract and grant mechanisms to increase decentralized decision making in the field. For example, the U.S. government agricultural strategy stresses the importance of increasing the Ministry of Agriculture, Irrigation, and Livestock’s capacity to deliver services through direct budget and technical assistance. USAID also recognized that, with a move toward direct budget assistance to government ministries, USAID’s vulnerability to waste and corruption is anticipated to increase. According to USAID officials, direct budget assistance to the Ministry of Agriculture, Irrigation, and Livestock is dependent on the ability of the ministry to demonstrate the capacity to handle the assistance. These officials noted that an assessment of the Ministry’s ability to manage direct budget assistance was being completed. The U.S. Embassy has plans under way to establish a unit at the embassy to receive and program funds on behalf of the Ministry while building the Ministry’s capacity to manage the direct budget assistance on its own. According to Afghanistan’s National Development Strategy, Afghanistan’s capacity problems are exacerbated by government corruption, which the strategy describes as a significant and growing problem in the country.
The causes of corruption in Afghan government ministries, according to the Afghanistan National Development Strategy, can be attributed to, among other things, a lack of institutional capacity in public administration, weak legislative and regulatory frameworks, limited enforcement of laws and regulations, poor and nonmerit-based qualifications of public officials, low salaries of public servants, and a dysfunctional justice sector. Furthermore, the sudden influx of donor money into a system already suffering from poorly regulated procurement practices increases the risk of corruption. In April 2009, USAID published an independent Assessment of Corruption in Afghanistan that found that corruption was a significant and growing problem across Afghanistan that undermined security, development, and democracy-building objectives. According to the assessment, pervasive, entrenched, and systemic corruption is of an unprecedented scope. The USAID-sponsored assessment added that Afghanistan has or is developing most of the institutions needed to combat corruption, but these institutions, like the rest of the government, are limited by a lack of capacity, rivalries, and poor integration. The assessment also noted that the Afghan government’s apparent unwillingness to pursue and prosecute high-level corruption, an area of particular interest to this Subcommittee, was particularly problematic. The assessment further noted that substantial USAID assistance is already designed to strengthen transparency, accountability, and effectiveness—prime routes to combat corruption. Additionally, we reported in 2009 that USAID’s failure to adhere to its existing policies severely limited its ability to require expenditure documentation for Afghanistan-related grants that were associated with findings of alleged criminal actions and mismanaged funds.
Specifically, in 2008, a United Nations procurement task force found instances of fraud, embezzlement, conversion of public funds, conflict of interest, and severe mismanagement of USAID-funded United Nations Office for Project Services (UNOPS) projects in Afghanistan, including the $365.8 million Rehabilitation of Secondary Roads project. The USAID Office of Inspector General also reported in 2008 that UNOPS did not complete projects as claimed and that projects had defects and warranty issues, as well as numerous design errors, neglected repairs, and uninstalled equipment and materials—all of which were billed as complete. USAID’s Mission to Afghanistan manages and oversees most U.S. development assistance programs in Afghanistan and relies on implementing partners to carry out its programs. USAID’s Automated Directives System (ADS) establishes performance management and evaluation procedures for managing and overseeing its assistance programs. These procedures, among other things, require (1) the development of a Mission Performance Management Plan (PMP); (2) the establishment of performance indicators and targets; and (3) analyses and use of program performance data. USAID has generally required the same performance management and evaluation procedures in Afghanistan as it does in other countries in which it operates. However, in October 2008, USAID approved new guidance that proposed several alternative monitoring methods for USAID projects in high-threat environments. This guidance was disseminated in December 2009, but the Afghanistan Mission agricultural office staff did not become aware of the guidance until June 2010. The ADS requires USAID officials to complete a Mission PMP for each of its high-level objectives as a tool to manage its performance management and evaluation procedures. While the Afghanistan Mission had developed a PMP in 2006, covering the years 2006, 2007, and 2008, the Afghanistan Mission has operated without a PMP to guide development assistance efforts after 2008. 
According to USAID, the Mission is in the process of developing a new Mission PMP that will reflect the current Administration’s priorities and strategic shift to counterinsurgency. USAID expects the new PMP to be completed by the end of fiscal year 2010. The Mission attributed the delay in creating the new PMP to the process of developing new strategies in different sectors and gaining approval from the Embassy in Afghanistan and from agency headquarters in Washington. Overall, we found that the 2006-2008 Mission PMP incorporated key planning activities. For example, the PMP identified indicators and established baselines and targets for the high-level objectives for all USAID programs in Afghanistan, including its agricultural programs, which are needed to assess program performance. In addition, the PMP described regular site visits, random data checks, and data quality assessments as the means to be used to verify and validate information collected. The Mission PMP noted that it should enable staff to systematically assess contributions to the Mission’s program results and take corrective action when necessary. Further, it noted that indicators, when analyzed in combination with other information, provide data for program decision making. The 2006-2008 Mission PMP, however, did not include plans for evaluations of the high-level objective that the agricultural programs in our review supported. Under USAID’s current policies, implementing partners working on USAID development assistance projects in Afghanistan are required to develop and submit monitoring and evaluation plans that include performance indicators and targets to USAID for approval. However, during our most recent review of USAID’s agricultural development programs, we found that USAID did not always approve implementing partner performance indicators and targets. 
While the implementing partners for the eight agricultural programs we reviewed did submit monitoring and evaluation plans, which generally contained performance indicators and targets, we found that USAID had not always approved these plans and did not consistently require targets to be set for all of the indicators, as required. For example, only 2 of 7 active agricultural programs included in our review had set targets for all of their indicators for fiscal year 2009. Figure 2 shows, by fiscal year, the number of performance indicators with targets that the implementing partners for the eight agricultural programs we reviewed developed and submitted to USAID for approval. In addition to collecting performance data and assessing the data’s quality, the ADS also prescribes the monitoring activities of analyzing and interpreting performance data in order to make program adjustments, inform higher-level decision making, and allocate resources. We found that while USAID collects implementing partner performance data, or information on targets and results, the agency did not fully analyze and interpret this performance data for the eight agricultural programs we reviewed. Some USAID officials in Afghanistan told us that they reviewed the information reported in implementing partners’ quarterly reports in efforts to analyze and interpret a program’s performance for the eight programs, although they could not provide any documentation of their efforts to analyze and interpret program performance. Some USAID officials also said that they did not have time to fully review the reports. In addition, in our 2008 report on road reconstruction in Afghanistan, we reported that USAID officials did not collect data for two completed road projects or for any active road reconstruction projects in a manner that would allow them to accurately measure impact. As a result, it is unclear to what extent USAID uses performance data. 
USAID is also required to report results to advance organizational learning and demonstrate USAID’s contribution to overall U.S. government foreign assistance goals. While USAID did not fully analyze and interpret program data, the Mission did meet semiannually to examine and document strategic issues and determine whether the results of USAID-supported agricultural activities are contributing to progress toward high-level objectives. The Mission also reported aggregate results in the Foreign Assistance Coordination and Tracking System. ADS also requires USAID to undertake at least one evaluation for each of its high-level objectives, to disseminate the findings of evaluations, and to use evaluation findings to further institutional learning, inform current programs, and shape future planning. In May 2007, USAID initiated an evaluation covering three of the eight agricultural programs in our review—ADP-Northeast, ADP-East, and ADP-South. This evaluation was intended to assess the progress toward achieving program objectives and offer recommendations for the coming years. The evaluators found insufficient data to evaluate whether the programs were meeting objectives and targets, and, thus, shifted their methodology to a qualitative review based on interviews and discussions with key individuals. As required, USAID posted the evaluation to its Internet site for dissemination. However, we are uncertain of the extent to which USAID used the 2007 evaluation to adapt current programs and plan future programs. Few staff were able to discuss the evaluation’s findings and recommendations, and most noted that they were not present when the evaluation of the three programs was completed and, therefore, were not aware of the extent to which changes were made to the programs. With regard to using lessons learned to plan future programs, USAID officials could not provide examples of how programs were modified as a result of the evaluation. 
USAID has planned evaluations for seven of the eight agricultural programs included in our review during fiscal year 2010. Madam Chairwoman and members of the subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. To address our objectives, we reviewed past GAO reports and testimonies examining U.S. efforts in Afghanistan, including reviews of USAID’s agricultural and road reconstruction projects. We reviewed U.S. government performance management and evaluation, funding, and reporting documents related to USAID programs in Afghanistan. Our reports and testimonies include analysis of documents and other information from USAID and other U.S. agencies, as well as private contractors and other implementing partners working on U.S.-funded programs in Washington, D.C., and Afghanistan. In Afghanistan, we also met with officials from the United Nations and the governments of Afghanistan and the United Kingdom. We traveled to Afghanistan to meet with U.S. and Afghan officials, implementing partners, and aid recipients to discuss several U.S.-funded projects. We analyzed program budget data provided by USAID to report on program funding, as well as changes in USAID’s program monitoring officials over time. We analyzed program data provided by USAID and its implementing partners to track performance against targets over time. We took steps to assess the reliability of the budget and performance data and determined that they were sufficiently reliable for the purposes of this report. Our work was conducted in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
A more detailed description of our scope and methodologies can be found in the reports cited throughout this statement. For questions regarding this testimony, please contact Charles Michael Johnson Jr., at (202) 512-7331 or johnsoncm@gao.gov. Individuals making key contributions to this statement include: Jeffrey Baldwin-Bott, Thomas Costa, Aniruddha Dasgupta, David Hancock, John Hutton, Hynek Kalkus, Farahnaaz Khakoo, Bruce Kutnick, Anne McDonough-Hughes, and Jim Michels. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses oversight of U.S. assistance programs in Afghanistan. Strengthening the Afghan economy through development assistance efforts is critical to the counterinsurgency strategy and a key part of the U.S. Integrated Civilian-Military Campaign Plan for Afghanistan. Since fiscal year 2002, the U.S. Agency for International Development (USAID) has awarded over $11.5 billion in support of development assistance programs in Afghanistan. Since 2003, GAO has issued several reports and testimonies related to U.S. security, governance, and development efforts in Afghanistan. In addition to reviewing program planning and implementation, we have focused on efforts to ensure proper management and oversight of the U.S. investment, which are essential to reducing waste, fraud, and abuse. Over the course of this work, we have identified improvements that were needed, as well as many obstacles that have affected success and should be considered in program management and oversight. While drawing on past work relating to U.S. development efforts in Afghanistan, this testimony focuses on findings in our most recent report released yesterday on USAID's management and oversight of its agricultural programs in Afghanistan. It will address (1) the challenges the United States faces in managing and overseeing development programs in Afghanistan; and (2) the extent to which USAID has followed its established performance management and evaluation procedures. Various factors challenge U.S. efforts to ensure proper management and oversight of U.S. development efforts in Afghanistan. Among the most significant have been the "high-threat" working environment, the difficulties in preserving institutional knowledge due to the lack of a formal mechanism for retaining and sharing information during staff turnover, and the Afghan government ministries' lack of capacity and corruption challenges. 
USAID has taken some steps to assess and begin addressing the limited capacity and corruption challenges associated with Afghan ministries. In addition, USAID has established performance management and evaluation procedures for managing and overseeing its assistance programs. These procedures, among other things, require (1) the development of a Mission Performance Management Plan (PMP); (2) the establishment and approval of implementing partner performance indicators and targets; and (3) analyses and use of performance data. Although USAID disseminated alternative monitoring methods for projects in high-threat environments such as Afghanistan, USAID has generally required the same performance management and evaluation procedures in Afghanistan as it does in other countries in which it operates. In summary, USAID has not consistently followed its established performance management and evaluation procedures. There were various areas in which the USAID Mission to Afghanistan (Mission) needed to improve. In particular, we found that the Mission had been operating without an approved PMP to guide its management and oversight efforts after 2008. In addition, while implementing partners have routinely reported on the progress of USAID's programs, we found that USAID did not always approve the performance indicators these partners were using, and that USAID did not ensure, as its procedures require, that its implementing partners establish targets for each performance indicator. For example, only 2 of the 7 USAID-funded agricultural programs included in our review that were active during fiscal year 2009 had targets for all of their indicators. We also found that USAID could improve its assessment and use of performance data submitted by implementing partners or program evaluations to, among other things, help identify strengths or weaknesses of ongoing or completed programs. 
Moreover, USAID needs to improve documentation of its programmatic decisions and put mechanisms in place for program managers to transfer knowledge to their successors. Finally, USAID has not fully addressed the risks of relying on contractor staff to perform inherently governmental tasks, such as awarding and administering grants. In the absence of consistent application of its existing performance management and evaluation procedures, USAID programs are more vulnerable to corruption, waste, fraud, and abuse. We reported in 2009 that USAID's failure to adhere to its existing policies severely limited its ability to require expenditure documentation for Afghanistan-related grants that were associated with findings of alleged criminal actions and mismanaged funds. To enhance the performance management of USAID's development assistance programs in Afghanistan, we have recommended, among other things, that the Administrator of USAID take steps to: (1) ensure programs have performance indicators and targets; (2) fully assess and use program data and evaluations to shape current programs and inform future programs; (3) address preservation of institutional knowledge; and (4) improve guidance for the use and management of USAID contractors. USAID concurred with these recommendations, and identified steps the agency is taking to address them. We will continue to monitor and follow up on the implementation of our recommendations.
The nearly $3 billion in unpaid federal taxes owed by over 27,000 contractors registered in DOD’s Central Contractor Registration system (CCR) represented almost 14 percent of the registered contractors as of February 2003. In addition, DOD contractors receiving fiscal year 2002 payments from five of the largest Defense Finance and Accounting Service (DFAS) contract and vendor payment systems accounted for at least $1.7 billion of the nearly $3 billion in unpaid federal taxes shown in IRS records. Data reliability issues with respect to DOD and IRS records prevented us from identifying an exact amount of unpaid federal taxes. Consequently, the total amount of unpaid federal taxes owed by DOD contractors is not known. The type of unpaid taxes owed by these DOD contractors varied and consisted of payroll, corporate income, excise, unemployment, individual income, and other types of taxes. Unpaid payroll taxes include amounts that a business withholds from an employee’s wages for federal income taxes, Social Security, Medicare, and the related matching contributions of the employer for Social Security and Medicare. As shown in figure 1, about 42 percent of the total tax amount owed by DOD contractors was for unpaid payroll taxes. Employers are subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee’s wages, the employer is deemed to have a responsibility to hold these amounts “in trust” for the federal government until the employer makes a federal tax deposit in that amount. To the extent these withheld amounts are not forwarded to the federal government, the employer is liable for these amounts, as well as the employer’s matching Federal Insurance Contribution Act contributions for Social Security and Medicare. 
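The employer's "in trust" liability described above is straightforward arithmetic. The sketch below illustrates it with hypothetical figures: the 6.2 percent Social Security and 1.45 percent Medicare rates are the statutory employee and matching employer shares, but the wage and income-tax-withholding amounts and the function name are ours, for illustration only.

```python
# Hypothetical sketch of an employer's payroll trust-fund liability.
# The FICA rates reflect the statutory employee/employer split
# (6.2% Social Security, 1.45% Medicare); the wage and income-tax
# withholding figures are invented for illustration.

def trust_fund_liability(wages, income_tax_withheld,
                         ss_rate=0.062, medicare_rate=0.0145):
    """Total the employer must deposit with the federal government."""
    fica = ss_rate + medicare_rate
    employee_fica = wages * fica        # withheld from the employee's wages
    employer_match = wages * fica       # employer's matching contribution
    held_in_trust = income_tax_withheld + employee_fica
    return held_in_trust + employer_match

# Example: $100,000 in quarterly wages, $12,000 of income tax withheld.
total_due = trust_fund_liability(100_000, 12_000)
print(round(total_due, 2))  # 27300.0
```

The point the example makes is the one in the text: the withheld portion (here, $19,650) is the employees' money held in trust, so diverting it exposes the business, and potentially its officers personally, to liability for the full deposit.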
Individuals within the business (e.g., corporate officers) may be held personally liable for the withheld amounts not forwarded and assessed a civil monetary penalty known as a trust fund recovery penalty (TFRP). Failure to remit payroll taxes can also be a criminal felony offense punishable by imprisonment of more than a year, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to a year. The law imposes no penalties upon an employee for the employer’s failure to remit payroll taxes since the employer is responsible for submitting the amounts withheld. The Social Security and Medicare trust funds are subsidized or made whole for unpaid payroll taxes by the general fund, as we discussed in a previous report. Over time, the amount of this subsidy is significant. As of September 1998, the estimated cumulative amount of unpaid taxes and associated interest for which the Social Security and Medicare trust funds were subsidized by the general fund was approximately $38 billion. A substantial amount of the unpaid federal taxes shown in IRS records as owed by DOD contractors had been outstanding for several years. As reflected in figure 2, 78 percent of the nearly $3 billion in unpaid taxes was over a year old as of September 30, 2002, and 52 percent of the unpaid taxes was for tax periods prior to September 30, 1999. Our previous work has shown that as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. This is due, in part, to the continued accrual of interest and penalties on the outstanding tax debt, which, over time, can dwarf the original tax obligation. Until DOD establishes processes to provide information from all payment systems to the Treasury Offset Program (TOP), the federal government will continue missing opportunities to collect hundreds of millions of dollars in tax debt owed by DOD contractors. 
Additionally, IRS’s current implementation strategy appears to make the levy program one of the last collection tools IRS uses. Changing the IRS collection program to (1) remove the policies that work to unnecessarily exclude cases from entering the levy program and (2) promote the use of the levy program to make it one of the first collection tools could allow IRS—and the government—to reap the advantages of the program earlier in the collection process. We estimate that DOD, which functions as its own disbursing agent, could have offset payments and collected at least $100 million in unpaid taxes in fiscal year 2002 if it and IRS had worked together to effectively levy contractor payments. However, in the 6 years since the passage of the Taxpayer Relief Act of 1997, DOD has collected only about $687,000. DOD collections to date relate to DFAS payment reporting associated with implementation of the TOP process in December 2002 for its contract payment system, which disbursed over $86 billion to DOD contractors in fiscal year 2002. Although it has been more than 7 years since the passage of the Debt Collection Improvement Act of 1996 (DCIA), DOD has not fully assisted IRS in using its continuous levy authority to collect unpaid taxes because it has not provided Treasury’s Financial Management Service (FMS) with all DFAS payment information. IRS’s continuous levy authority authorizes the agency to collect federal tax debts of businesses and individuals that receive federal payments by levying up to 15 percent of each payment until the debt is paid. Under TOP, FMS matches a database of debtors (including those with federal tax debt) to certain federal payments (including payments to DOD contractors). When a match occurs, the payment is intercepted, the levied amount is sent to IRS, and the balance of the payment is sent to the debtor. All disbursing agencies are to compare their payment records with the TOP database. 
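Mechanically, the 15 percent continuous levy works as a running offset against each successive payment until the debt is exhausted. The sketch below illustrates that arithmetic with hypothetical payment amounts; the function name and figures are ours, and it does not model FMS's actual matching systems.

```python
# Hypothetical sketch of the 15% continuous levy: each federal payment
# to a delinquent contractor is reduced by up to 15% until the tax debt
# is paid. Payment amounts and names are invented for illustration.

LEVY_RATE = 0.15

def apply_continuous_levy(payments, tax_debt):
    """Return (total levied, remaining debt) over a stream of payments."""
    collected = 0.0
    for amount in payments:
        if tax_debt <= 0:
            break
        levy = min(amount * LEVY_RATE, tax_debt)  # never levy more than owed
        collected += levy
        tax_debt -= levy
        # the balance of the payment (amount - levy) goes to the contractor
    return collected, tax_debt

# Example: a contractor owing $50,000 receives four $100,000 payments.
levied, remaining = apply_continuous_levy([100_000] * 4, 50_000)
print(round(levied), round(remaining))  # 50000 0
```

The example shows why a continuous levy can be effective against contractors with steady federal payment streams: even at only 15 percent per payment, a $50,000 debt is fully recovered within four $100,000 payments.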
Since DOD has its own disbursing authority, once DFAS is notified by FMS of the amount to be levied, DOD should deduct this amount from the contractor payment before it is made to the payee and forward the levied amount to the Department of the Treasury as described in figure 3. The TOP database includes federal tax and nontax debt, state tax debt, and child support debt. By fully participating in the TOP process, DOD will also aid in the collection of other debts, such as child support and federal nontax debt (e.g., student loans). At the completion of our work, DOD had no formal plans or schedule to begin providing payment information from any of its 15 vendor payment systems to FMS for comparison with the TOP database. These 15 decentralized payment systems disbursed almost $97 billion to DOD contractors from 22 different payment locations in fiscal year 2002. In response to our draft report, DOD developed a schedule to provide payment information to TOP for all of its additional payment systems by March 2005. As we have previously reported, DOD’s business systems environment is stovepiped and not well integrated. DOD recently reported that its current business operations were supported by approximately 2,300 systems in operation or under development, and requested approximately $18 billion in fiscal year 2003 for the operation, maintenance, and modernization of DOD business systems. In addition, DFAS did not have an organizational structure in place to implement the TOP payment reporting process. DOD recently communicated a timetable for implementing TOP reporting for its vendor payment systems with completion targeted for March 2005. IRS’s continuing challenges in pursuing and collecting unpaid taxes also hinder the government’s ability to take full advantage of the levy program. For example, due to resource constraints, IRS has established policies that either exclude or delay referral of a significant number of cases to the program. 
Also, the IRS review process for taxpayer requests, such as installment agreements or certain offers in compromise, which IRS is legally required to consider, often takes many months, during which time IRS excludes these cases from the levy program. In addition, inaccurate or outdated information in IRS systems prevents cases from entering the levy program. Our audit and investigation of 47 case studies also showed IRS continuing to work with businesses and individuals to achieve voluntary compliance, and taking enforcement actions such as levies of federal contractor payments later in the collection process. We recently recommended that IRS study the feasibility of submitting all eligible unpaid federal tax accounts to FMS on an ongoing basis for matching against federal payment records under the levy program, and use information from any matches to assist IRS in determining the most efficient method of collecting unpaid taxes, including whether to use the levy program. The study was not completed at the time of our audit. In earlier reviews, we estimated IRS could use the levy program to potentially recover hundreds of millions of dollars in tax debt. Although the levy program could provide a highly effective and efficient method of collecting unpaid taxes from contractors that receive federal payments, IRS policies restrict the number of cases that enter the program and the point in the collection process at which they enter it. For each of the collection phases listed below, IRS policy either excludes or severely delays putting cases into the levy program. Phase 1: Notify taxpayer of unpaid taxes, including a demand for payment letter. Phase 2: Place the case into the Automated Collection System (ACS) process. The ACS process consists primarily of telephone calls to the taxpayer to arrange for payment. Phase 3: Move the case into a queue of cases awaiting assignment to a field collection revenue officer. 
Phase 4: Assign the case to field collections where a revenue officer attempts face-to-face contact and collection. As of September 30, 2002, IRS listed $81 billion of cases in these four phases: 17 percent were in notice status, 17 percent were in ACS, 26 percent were in field collection, and 40 percent were in the queue awaiting assignment to the field. At the same time these four phases take place, sometimes over the course of years, DOD contractors with unpaid taxes continue to receive billions of dollars in contract payments. IRS excludes cases in the notification phase from the levy program to ensure proper notification rules are followed. However, as we previously reported, once proper notification has been completed, IRS continues to delay or exclude from the levy program those accounts placed in the other three phases. IRS policy is to exclude accounts in the ACS phase primarily because officials believed they lack the resources to issue levy notices and respond to the potential increase in telephone calls from taxpayers responding to the notices. Additionally, IRS excludes the majority of cases in the queue phase (awaiting assignment to field collection) from the levy program for 1 year. Only after cases await assignment for over a year does IRS allow them to enter the levy program. Finally, IRS excludes most accounts from the levy program once they are assigned to field collection because revenue officers said that the levy action could interfere with their successfully contacting taxpayers and resolving the unpaid taxes. These policy decisions, which may be justified in some cases, result in IRS excluding millions of cases from potential levy. IRS officials who work on ACS and field collection inventories can manually unblock individual cases they are working on in order to put them in the levy program. 
However, by excluding cases in the ACS and field collection phases, IRS records indicate it excluded as much as $34 billion of cases from the levy program as of September 30, 2002. In January 2003, IRS unblocked and made available for levy those accounts identified as receiving federal salary or annuity payments. However, other accounts remain blocked from the levy program. IRS stated that it intended to unblock a portion of the remaining accounts sometime in 2005. Additionally, $32 billion of cases are in the queue, and thus under existing policy would be excluded from the levy program for the first year each case is in that phase. IRS policies, along with its inability to more actively pursue collections, both of which IRS has in the past attributed to resource constraints, combine to prevent many cases from entering the levy program. Since IRS has a statutory limitation on the length of time it can pursue unpaid taxes, generally limited to 10 years from the date of the assessment, these long delays greatly decrease the potential for IRS to collect the unpaid taxes. We identified specific examples of IRS not actively pursuing collection in our review of 47 selected cases involving DOD contractors. In one case, IRS cited resource and workload management considerations. Citing these considerations, IRS is not currently seeking collection of about $14.9 billion of unpaid taxes, about 5 percent of its overall inventory of unpaid assessments as of September 30, 2002. In another case, IRS cited financial hardship where the taxpayer was unable to pay. This puts collection activities on hold until the taxpayer’s adjusted gross income (per subsequent tax return filings) exceeds a certain threshold. Some cases repeatedly entered the queue awaiting assignment to a field collection revenue officer and remained there for long periods. 
In addition to excluding cases for various operational and policy reasons as described above, IRS excludes cases from the levy program for particular taxpayer events such as bankruptcy, litigation, or financial hardship, as well as when taxpayers apply for an installment agreement or an offer in compromise. When one of these events takes place, IRS enters a code in its automated system that excludes the case from entering the levy program. Although these actions are appropriate, IRS may lose opportunities to collect through the levy program if the processing of agreements is not timely or prompt action is not taken to cancel the exclusion when the event, such as a dismissed bankruptcy petition, is concluded. Delays in processing taxpayer documents and errors in taxpayer records are long-standing problems at IRS and can harm both government interests and the taxpayer. Our review of cases involving DOD contractors with unpaid federal taxes indicates that problems persist in the timeliness of processing taxpayer applications and in the accuracy of IRS records. For example, we identified a number of cases in which the processing of DOD contractor applications for an offer in compromise or an installment agreement was delayed for long periods, thus blocking the cases from the levy program and potentially reducing government collections. We also found that inaccurate coding at times prevented both IRS collection action and cases from entering the levy program. For example, if these blocking codes remain in the system for long periods, either because IRS delays processing taxpayer agreements or because IRS fails to input or reverse codes after processing is complete, cases may be needlessly excluded from the levy program. Although the nation’s tax system is built upon voluntary compliance, when businesses and individuals fail to pay voluntarily, the government has a number of enforcement tools to compel compliance or elicit payment. 
Our review of DOD contractors with unpaid federal taxes indicates that although the levy program could be an effective, reliable collection tool, IRS is not using the program as a primary tool for collecting unpaid taxes from federal contractors. For the cases we audited and investigated, IRS subordinated the use of the levy program in favor of negotiating voluntary tax compliance with the DOD contractors, which often resulted in minimal or no actual collections. We selected for case study 47 businesses and individuals that had unpaid taxes and were receiving DOD contractor payments in fiscal year 2002. For all 47 cases that we audited and investigated, we found abusive or potentially criminal activity related to the federal tax system. Thirty-four of these case studies involved businesses with employees that had unpaid payroll taxes dating as far back as the early 1990s, some for as many as 62 tax periods. However, rather than fulfill their role as “trustees” of this money and forward it to IRS, these DOD contractors diverted the money for other purposes. The other 13 case studies involved individuals that had unpaid income taxes dating as far back as the 1980s. We are referring the 47 cases detailed in our related report to IRS for evaluation and additional collection action or criminal investigation. Our audit and investigation of the 34 case study business contractors showed substantial abuse or potential criminal activity as all had unpaid payroll taxes and all diverted funds for personal or business use. In table 1, and on the following pages, we highlight 13 of these businesses and estimate the amounts that could have been collected through the levy program based on fiscal year 2002 DOD payments. For these 13 cases, the businesses owed unpaid taxes for a range of 6 to 30 quarters (tax periods). Eleven of these cases involved businesses that had unpaid taxes in excess of 10 tax periods, and 5 of these were in excess of 20 tax periods. 
The amount of unpaid taxes associated with these 13 cases ranged from about $150,000 to nearly $10 million; 7 businesses owed in excess of $1 million. Among these 13 cases, IRS filed tax liens on the property and bank accounts of some of the businesses and, in a few cases, collected minor amounts through the levying of non-DOD federal payments. We also saw 1 case in which the business applied for an offer in compromise, which IRS rejected on the grounds that the business had the financial resources to pay the outstanding taxes in their entirety, and 2 cases in which the businesses entered into, and subsequently defaulted on, installment agreements to pay the outstanding taxes. In 5 of the 13 cases, IRS assessed the owners or business officers with trust fund recovery penalties (TFRPs), yet no collections were received from these penalty assessments. The following provides illustrative detailed information on several of these cases.

Case # 1 - This base support contractor provided services such as trash removal, building cleaning, and security at U.S. military bases. The business had revenues of over $40 million in 1 year, with over 25 percent of this coming from federal agencies. This business's outstanding tax obligations consisted of unpaid payroll taxes. In addition, the contractor defaulted on an IRS installment agreement. IRS assessed a TFRP against the owner. The business reported that it paid the owner a six-figure income and that the owner had borrowed nearly $1 million from the business. The business also made a down payment on the owner's boat and bought several cars and a home outside the country. The owner allegedly has since relocated his cars and boat outside the United States. This contractor went out of business in 2003 after state tax authorities seized its bank account.
The business transferred its employees to a relative's business, which also had unpaid federal taxes, and submitted invoices and received payments from DOD on a previous contract through August 2003.

Case # 2 - This engineering research contractor received nearly $400,000 from DOD during 2002. At the time of our audit, the contractor had not remitted its payroll tax withholdings to the federal government since the late 1990s. In 1996, the owner bought a home and furnishings worth approximately $1 million and borrowed nearly $1 million from the business. The owner told our investigators that the payroll tax funds were used for other business purposes.

Case # 3 - This aircraft parts manufacturer did not pay payroll withholding and unemployment taxes for 19 of 20 periods through the mid- to late 1990s. IRS assessed a TFRP against several corporate officers and placed the business in the Federal Payment Levy Program (FPLP) in 2000. This business claims that its payroll taxes were not paid because the business had not received DOD contract payments; however, DOD records show that the business received over $300,000 from DOD during 2002.

Case # 5 - This janitorial services contractor reported revenues of over $3 million and had received over $700,000 from DOD in a recent year. The tax problems of this business date back to the mid-1990s. At the time of our audit, the business had unpaid payroll and unemployment taxes totaling nearly $3 million. In addition, the business did not file its corporate tax returns for 8 years. IRS assessed a TFRP against the principal officer of the business in early 2002. This contractor employed two officers who had previously been assessed TFRPs related to another business.

Case # 7 - This furniture business reported gross revenues of over $200,000 and was paid nearly $40,000 by DOD in a recent year. The business had accumulated unpaid federal taxes of over $100,000 at the time of our audit, primarily from unpaid employee payroll taxes.
The business also did not file tax returns for several years, even after repeated notices from IRS. The owners made an offer to pay IRS a portion of the unpaid taxes through an offer in compromise, but IRS rejected the offer because it concluded that the business and its owners had the resources to pay the entire amount. At the time of our audit, IRS was considering assessing a TFRP against the owners to make them personally liable for the taxes the business owed. The owners used the business to pay their personal expenses, such as their home mortgage, utilities, and credit cards. The owners said they considered these payments a loan from the business. Under this arrangement, the owners were not reporting this company benefit as income, so they were not paying income taxes on it, and the business was reporting inflated expenses.

Case # 9 - This family-owned and operated building contractor provided a variety of products and services to DOD, and DOD provided a substantial portion of the contractor's revenues. At the time of our review, the business had unpaid payroll taxes dating back several years. In addition to failing to remit the payroll taxes it withheld from employees, the business had a history of filing tax returns late, sometimes only after repeated IRS contact. Additionally, DOD made an overpayment to the contractor of tens of thousands of dollars. Subsequently, DOD paid the contractor over $2 million without offsetting the earlier overpayment.

Case # 10 - This base support services contractor has close to $1 million in unpaid payroll and unemployment taxes dating back to the early 1990s, and the business has paid less than 50 percent of the taxes it owed. IRS assessed a TFRP against one of the corporate officers. This contractor received over $200,000 from DOD during 2002.
Individuals are responsible for the payment of income taxes, and our audit and investigation of 13 individuals showed significant abuse of the federal tax system similar to what we found with our DOD business case studies. In table 2, and on the following pages, we highlight four of the individual case studies. In all four cases, the individuals had unpaid income taxes. In one of the four cases, the individual operated a business as a sole proprietorship with employees and had unpaid payroll taxes. Taxes owed by the individuals spanned from four to nine tax periods, which for individual income taxes equate to years. Each individual owed in excess of $100,000 in unpaid income taxes, with one owing in excess of $200,000. In two of the four cases, the individuals had entered into, and subsequently defaulted on, at least one installment agreement to pay off the tax debt. The following provides illustrative detailed information on these four cases.

Case # 14 - This individual's business repaired and painted military vehicles. The owner failed to pay personal income taxes and did not send employee payroll tax withholdings to IRS. The owner owed over $500,000 in unpaid federal business and individual taxes. Additionally, the Treasury Offset Program (TOP) database showed the owner had unpaid child support. IRS levied the owner's bank accounts and placed liens against the owner's real property and business assets. The business received over $100,000 in payments from DOD in a recent year, and the contractor's current DOD contracts are valued at over $60 million. In addition, the business was investigated for paying employee wages in cash. Despite the large tax liability, the owner purchased a home valued at over $1 million and a luxury sports car.

Case # 15 - This individual, an independent contractor who works as a dentist at a military installation, had a long history of not paying income taxes. The individual did not file several tax returns and did not pay taxes in other periods when a return was filed.
The individual entered into an installment agreement with IRS but defaulted on the agreement. This individual received $78,000 from DOD during a recent year, and DOD recently increased the individual's contract by over $80,000.

Case # 16 - This individual is another independent contractor who also works as a dentist on a military installation. DOD paid this individual over $200,000 in recent years and recently signed a multiyear contract worth over $400,000. At the time of our review, this individual had paid income taxes for only 1 year since the early 1990s and had accumulated unpaid taxes of several hundred thousand dollars. In addition, the individual's prior business practice owes over $100,000 in payroll and unemployment taxes for multiple periods going back to the early 1990s.

Case # 17 - DOD paid this individual nearly $90,000 for presenting motivational speeches on management and leadership. This individual has failed to file tax returns since the late 1990s and had unpaid income taxes for a 5-year period from the early to mid-1990s. The total amount of unpaid taxes owed by this individual is not known because of the individual's failure to file income tax returns for a number of years. IRS placed this individual in the levy program in late 2000; however, DOD payments to this individual were not levied because Defense Finance and Accounting Service (DFAS) payment information was not reported to TOP as required. See our related report for details on the other 30 DOD contractor case studies.

Federal law does not prohibit a contractor with unpaid federal taxes from receiving contracts from the federal government. Existing mechanisms for doing business only with responsible contractors do not prevent businesses and individuals with unpaid federal taxes from receiving contracts.
Further, the government has no coordinated process for identifying businesses and individuals with unpaid taxes that should be prevented from receiving contracts and for conveying that information to contracting officers before contracts are awarded. In previous work, we supported the concept of barring delinquent taxpayers from receiving federal contracts, loans and loan guarantees, and insurance. In March 1992, we testified on the difficulties involved in using tax compliance as a prerequisite for awarding federal contracts. In May 2000, we testified in support of H.R. 4181 (106th Congress), which would have amended the Debt Collection Improvement Act of 1996 (DCIA) to prohibit delinquent federal debtors, including delinquent taxpayers, from being eligible to contract with federal agencies. Safeguards in the bill would have enabled the federal government to procure goods or services it needed from delinquent taxpayers for designated disaster relief or national security purposes. Our testimony also pointed out implementation issues, such as the need to first ensure that IRS systems provide timely and accurate data on the status of taxpayer accounts. However, this legislative proposal was not adopted, and there is no existing statutory bar on delinquent taxpayers receiving federal contracts. Federal agencies are required by law to award contracts to responsible sources. This statutory requirement is implemented in the Federal Acquisition Regulation (FAR), which requires that government purchases be made from, and government contracts awarded to, responsible contractors only. To effectuate this policy, the government has established a debarment and suspension process and certain criteria for contracting officers to consider in determining a prospective contractor's responsibility.
Contractors debarred, suspended, or proposed for debarment are excluded from receiving contracts and agencies are prohibited from soliciting offers from, awarding contracts to, or consenting to subcontracts with these contractors, unless compelling reasons exist. Prior to award, contracting officers are required to check a governmentwide list of parties that have been debarred, suspended, or declared ineligible for government contracts, as well as to review a prospective contractor’s certification on debarment, suspension, and other responsibility matters. Among the causes for debarment and suspension is tax evasion. In determining whether a prospective contractor is responsible, contracting officers are also required to determine that the contractor meets several specified standards, including “a satisfactory record of integrity and business ethics.” Except for a brief period during 2000 through 2001, contracting officers have not been required to consider compliance with federal tax laws in making responsibility determinations. Neither the current debarment and suspension process nor the requirements for considering contractor responsibility effectively prevent the award of government contracts to businesses and individuals that abuse the tax system. Since most businesses and individuals with unpaid taxes are not charged with tax evasion, and fewer still convicted, these contractors would not necessarily be subject to the debarment and suspension process. None of the contractors described in this report were charged with tax evasion for the abuses of the tax system we identified. A prospective contractor’s tax noncompliance, other than tax evasion, is not considered by the federal government before deciding whether to award a contract to a business or individual. Further, no coordinated and independent mechanism exists for contracting officers to obtain accurate information on contractors that abuse the tax system. 
Such information is not obtainable from IRS because of a statutory restriction on the disclosure of taxpayer information. As we found in November 2002, unless prospective contractors report it themselves, contracting officers face significant difficulties obtaining or verifying tax compliance information on prospective contractors. Moreover, even if a contracting officer could obtain tax compliance information on prospective contractors, the determination of a prospective contractor's responsibility under the FAR when that contractor has abused the tax system would still be subject to the contracting officer's individual judgment. Thus, a business or individual with unpaid taxes could be determined to be responsible depending on the facts and circumstances of the case. Since the responsibility determination is largely committed to the contracting officer's discretion and depends on the contracting situation involved, there is the risk that different determinations could be reached on the basis of the same tax compliance information. On the other hand, if a prospective contractor's tax noncompliance results in mechanical determinations of nonresponsibility, de facto debarment could result. Further, a determination that a prospective contractor is not responsible under the FAR could be challenged. Because individual responsibility determinations can be affected by a number of variables, any policy designed to consider tax compliance in the contract award process may be more suitably addressed on a governmentwide basis. The formulation and implementation of such a policy may most appropriately be the role of the Office of Management and Budget's (OMB) Office of Federal Procurement Policy. The Administrator of Federal Procurement Policy provides overall direction for governmentwide procurement policies, regulations, and procedures.
In this regard, OMB’s Office of Federal Procurement Policy is in the best position to develop and pursue policy options for prohibiting federal contract awards to businesses and individuals that abuse the tax system. Thousands of DOD contractors that failed in their responsibility to pay taxes continue to get federal contracts. Allowing these contractors to do substantial business with the federal government while not paying their federal taxes creates an unfair competitive advantage for these businesses and individuals at the expense of the vast majority of DOD contractors that do pay their taxes. DOD’s failure to fully comply with DCIA and IRS’s continuing challenges in collecting unpaid taxes have contributed to this unacceptable situation, and have resulted in the federal government missing the opportunity to collect hundreds of millions of dollars in unpaid taxes from DOD contractors. Working closely with IRS and Treasury, DOD needs to take immediate action to comply with DCIA and thus assist in effectively implementing IRS’s legislative authority to levy contract payments for unpaid federal taxes. Also, IRS needs to better leverage its ability to levy DOD contractor payments, moving quickly to use this important collection tool. Beyond DOD, the federal government needs a coordinated process for dealing with contractors that abuse the federal tax system, including taking actions to prevent these businesses and individuals from receiving federal contracts. Our related report on these issues released today includes nine recommendations to DOD, IRS, and OMB. Our DOD recommendations address the need to comply with the DCIA by supporting IRS efforts under the Taxpayer Relief Act of 1997 to collect unpaid federal taxes. 
Our IRS recommendations address improving the effectiveness of IRS collection activities through earlier use of the Federal Payment Levy Program and changing or eliminating policies that prevent businesses and individuals with federal contracts from entering the levy program. Our OMB recommendation addresses developing and pursuing policy options for prohibiting federal contract awards to businesses and individuals that abuse the federal tax system. In written comments on a draft of our report, DOD and IRS officials partially agreed with our recommendations. OMB officials did not agree with our recommendation to develop policy options for prohibiting federal contract awards to businesses and individuals that abuse the federal tax system. Our report also suggests that Congress consider requiring DOD to periodically report to Congress on progress in providing its payment information to TOP for each of its contract and vendor payment systems, including details of the resulting collections by system and in total for all contract and vendor payment systems during the reporting period. In addition, our report suggests that Congress consider requiring that OMB report to Congress on progress in developing and pursuing options for prohibiting federal government contract awards to businesses and individuals that abuse the federal tax system, including periodic reporting of actions taken. DOD and OMB did not agree with our matters for congressional consideration. We continue to believe all of our recommendations and matters for congressional consideration constitute valid and necessary courses of action, especially in light of the identified weaknesses and the slow progress of DOD to fully implement the offset provisions of the DCIA since its passage more than 7 years ago. Mr. Chairman, Members of the Subcommittee, and Ms. Schakowsky, this concludes our prepared statement. We would be pleased to answer any questions you may have. 
For future contacts regarding this testimony, please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov, Steven J. Sebastian at (202) 512- 3406 or sebastians@gao.gov, or John J. Ryan at (202) 512-9587 or ryanj@gao.gov. Individuals making key contributions to this testimony included Tida Barakat, Gary Bianchi, Art Brouk, Ray Bush, William Cordrey, Francine DelVecchio, K. Eric Essig, Kenneth Hill, Jeff Jacobson, Shirley Jones, Jason Kelly, Rich Larsen, Tram Le, Malissa Livingston, Christie Mackie, Julie Matta, Larry Malenich, Dave Shoemaker, Wayne Turowski, Jim Ungvarsky, and Adam Vodraska. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO addressed issues related to three high-risk areas: Department of Defense (DOD) financial management, Internal Revenue Service (IRS) financial management, and IRS collection of unpaid taxes. This testimony, which is based on a companion report issued today, provides a perspective on (1) the magnitude of unpaid federal taxes owed by DOD contractors, (2) whether indications exist of abuse or criminal activity by DOD contractors related to the federal tax system, (3) whether DOD and IRS have effective processes and controls in place to use the Treasury Offset Program (TOP) in collecting unpaid federal taxes from DOD contractors, and (4) whether DOD contractors with unpaid taxes are prohibited by law from receiving federal contracts. DOD and IRS records showed that over 27,000 contractors owed about $3 billion in unpaid taxes as of September 30, 2002. DOD has not fully implemented provisions of the Debt Collection Improvement Act of 1996 that would assist IRS in levying up to 15 percent of each contract payment to offset a DOD contractor's federal tax debt. We estimate that DOD could have collected at least $100 million in fiscal year 2002 had it and IRS fully utilized the levy process authorized by the Taxpayer Relief Act of 1997. As of September 2003, DOD had collected only about $687,000, in part because DOD provides contractor payment information from only 1 of its 16 payment systems to TOP. In response to our draft report, DOD developed a schedule to provide payment information to TOP for all of its additional payment systems by March 2005. Furthermore, we found abusive or potentially criminal activity related to the federal tax system through our audit and investigation of 47 DOD contractor case studies. The 47 contractors provided a variety of goods and services, including building maintenance, catering, dentistry, funeral services, and parts or support for weapons and other sensitive military programs.
The businesses in these case studies owed primarily payroll taxes with some dating back to the early 1990s. These payroll taxes included amounts withheld from employee wages for Social Security, Medicare, and individual income taxes. However, rather than fulfill their role as "trustees" and forward these amounts to IRS, these DOD contractors diverted the money for personal gain or to fund the business. For example, owners of two businesses each borrowed nearly $1 million from their companies and, at about the same time, did not remit millions of dollars in payroll taxes. One owner bought a boat, several cars, and a home outside the United States. The other paid over $1 million for a furnished home. Both contractors received DOD payments during fiscal year 2002, but one went out of business in 2003. The business, however, transferred its employees to a relative's company (also with unpaid taxes) and recently received payments on a previous contract. IRS's continuing challenges in collecting unpaid federal taxes also contributed to the problem. In several case studies, IRS was not pursuing DOD contractors due to resource and workload management constraints. For other cases, control breakdowns resulted in IRS freezing collection activity for reasons that were no longer applicable. Federal law does not prohibit contractors with unpaid federal taxes from receiving federal contracts. OMB is responsible for providing overall direction to governmentwide procurement policies, regulations, and procedures, and is in the best position to develop policy options for prohibiting federal contracts to contractors that abuse the tax system.
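The levy mechanism described above, under which up to 15 percent of each contract payment may be offset against a contractor's unpaid federal taxes, can be sketched in a few lines. This is an illustrative sketch only, not GAO's or IRS's computation: the payment amounts and tax debt are hypothetical, and it assumes the statutory maximum of 15 percent is applied to every payment until the debt is satisfied.

```python
# Illustrative sketch of the continuous levy under the Taxpayer Relief
# Act of 1997: up to 15 percent of each contract payment is offset
# against the contractor's unpaid federal taxes. All figures below are
# hypothetical, not drawn from the case studies in this testimony.

LEVY_RATE = 0.15  # assumed statutory maximum levy per payment

def apply_levy(payments, tax_debt):
    """Levy up to 15% of each payment until the debt is satisfied.

    Returns (total_collected, remaining_debt)."""
    collected = 0.0
    for payment in payments:
        if tax_debt <= 0:
            break  # debt fully offset; remaining payments pass through
        levy = min(payment * LEVY_RATE, tax_debt)
        collected += levy
        tax_debt -= levy
    return collected, tax_debt

# Hypothetical contractor: $300,000 in DOD payments across three
# invoices, against $30,000 in unpaid taxes. The debt is fully
# collected after two payments.
total, remaining = apply_levy([100_000, 100_000, 100_000], 30_000)
print(total, remaining)
```

Because the levy is capped at 15 percent of each payment, a large debt is worked off gradually across many payments, which is why early placement of a case into the levy program matters for total collections.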
The Low-Level Radioactive Waste Policy Act of 1980, as amended in 1985, made states responsible for disposing of commercially generated low-level radioactive waste. Consequently, in 1987 Arizona, California, North Dakota, and South Dakota entered into a compact in which California agreed to develop a disposal facility that would serve the needs of waste generators in the four states. The Congress ratified the compact in 1988. California, the only state since 1980 to have authorized the construction and operation of a disposal facility, is responsible for licensing and regulating its disposal facility. As authorized by the Atomic Energy Act of 1954, as amended, the Atomic Energy Commission (a predecessor to the Nuclear Regulatory Commission [NRC]) relinquished to the state in 1962 a significant portion of the Commission’s authority to regulate radioactive materials within the state, including the disposal of low-level radioactive waste. The state incorporated NRC’s criteria for siting and regulating low-level waste disposal facilities into the state’s regulations. In 1985, California named US Ecology its “license designee” and authorized the company to screen and select a potential site for a disposal facility, to investigate its suitability, and to construct and operate the facility as licensed and regulated by the state. After evaluating potential sites, a 1,000-acre site in Ward Valley in the Mojave Desert was selected. (See fig. 1.) About 70 of the 1,000 acres would be used for the trenches containing the disposed waste. Almost all of the remaining area would constitute a buffer zone. In April 1991 Interior’s Bureau of Land Management, which manages the land, and the state jointly issued an environmental impact statement concluding that the proposed facility would not cause significant adverse environmental effects. The statement is required as part of the record for the Secretary of the Interior’s land-transfer decision. 
In July 1992, California asked Interior to sell the Ward Valley site to the state under authority granted to the Secretary by the Federal Land Policy and Management Act of 1976 (FLPMA). Among other things, this act authorizes the Secretary to transfer public land by direct sale upon finding that the transfer would serve important public objectives that cannot be achieved elsewhere and that outweigh other public objectives and values served by retaining federal ownership of the land. After making such a finding, the Secretary must transfer the land on terms he deems necessary to ensure proper land use and the protection of the public interest. After considering the environmental impacts of a licensed disposal facility at the site, the outgoing Secretary decided in January 1993 to sell the land as requested. Acting for the state, US Ecology then paid Interior $500,000 for the land. The outgoing Secretary's decision was immediately challenged in federal court on the basis of Interior's alleged noncompliance with FLPMA and the National Environmental Policy Act (NEPA) and alleged failure to protect native desert tortoises under the Endangered Species Act. To settle the lawsuits and to assure himself that the proposed land transfer would comply with applicable federal laws, the incoming Secretary rescinded the earlier land-transfer decision and returned US Ecology's payment. Meanwhile, in September 1993, California issued a license to US Ecology, contingent on transfer of the land to the state, to construct and operate the disposal facility. Legal challenges to the state's licensing action were denied by the state's courts.
From 1993 until 1996 the Secretary deferred the land-transfer decision while (1) the Bureau completed a first supplement to the April 1991 environmental impact statement, (2) the National Academy of Sciences reviewed seven technical issues related to the Ward Valley site, and (3) Interior negotiated, with the state, the terms of a public hearing on the proposed facility and the land-transfer agreement. The land-transfer negotiations reached an impasse in late 1995 over the issue of Interior’s authority to enforce the state’s compliance with the Academy’s recommendations in court. Then, in February 1996 Interior announced that it would prepare a second supplement to the environmental impact statement and conduct tests that the Academy had recommended. Interior expected these activities to take about a year to complete; however, Interior has not begun preparing the supplement or conducting the tests. When Interior announced in February 1996 that it would prepare the second supplement, it cited the Academy’s May 1995 report and new information about the migration of radioactive elements in the soil from the former disposal facility at Beatty as its basis for preparing the supplement. Although Interior also said it would address “nearby Indian sacred sites” in the supplement, it did not identify any such sites or sources of information on this issue. Thereafter, Interior relied on information obtained from the public, including environmental groups, Native Americans, and others, to select 10 more issues to address in the supplement and to expand the issue of sacred Indian sites to include a variety of issues pertaining to Native Americans. In March 1994, the Secretary asked the Academy to study seven radiological safety and environmental concerns about the proposed Ward Valley facility that were raised by three scientists employed by the Geological Survey. 
The scientists were particularly concerned about the potential for (1) water to flow into the trenches containing the waste, (2) radioactive materials to move down through the unsaturated soil to the water table, and (3) a connection between the local groundwater and the Colorado River. In a May 1995 report, a 17-member committee of the Academy concluded that the occurrence of any of these three situations is unlikely. Two committee members, however, disagreed with the majority’s conclusion that the movement of radioactive elements to the water table is “highly unlikely.” The Academy added that the potential effect on water quality of any contaminants that might reach the Colorado River would be insignificant. Among other things, however, the Academy recommended that additional measurements at the site be made to explain why tritium had apparently been detected about 100 feet beneath the surface of Ward Valley during the site’s investigation. The unexpected measurement of tritium at this depth raised questions about how quickly radioactive elements might migrate from the disposal facility to the groundwater. The Academy concluded that inappropriate sampling procedures probably introduced atmospheric tritium into the soil samples. Fifteen committee members concluded that the tritium tests could be done during the facility’s construction because the purpose of the tests was to improve baseline information for the long-term monitoring of the site rather than to resolve questions about the site’s suitability for a disposal facility. Two members concluded that the tests should be completed in time to use the results in a final decision on the site’s suitability as a disposal facility. In 1994 and 1995, the Geological Survey detected tritium and another radioactive element in the soil adjacent to a disposal facility for low-level radioactive waste located at Beatty, Nevada. This facility had operated from 1962 until Nevada closed it after 1992. 
US Ecology began operating the facility in 1981. While conducting research next to the Beatty facility, the Survey detected radioactive elements in concentrations well above natural background levels. The Survey attributed this situation to disposal practices at Beatty, such as disposing of liquid radioactive waste, that are now prohibited. The Survey added that it is doubtful that the distribution of the radioactive elements leaking from the site and their movement through the ground over time will ever be understood because of incomplete records of the disposal of liquid radioactive wastes. Therefore, the Survey concluded, extrapolations of the information from Beatty to the proposed Ward Valley facility are too tenuous to have much scientific value because of the uncertainties about how radioactive elements at Beatty are transported and because liquid wastes cannot be buried at Ward Valley. The Survey concluded that the findings of tritium near Beatty do not help explain the measurements of tritium at Ward Valley. Interior relied on the views of the public to add 10 more issues to address in the second supplement and to expand another issue—“nearby Indian sacred sites”—into a broader review of Native American issues. For example, before Interior announced that it would prepare a second supplement, an environmental group—the Committee to Bridge the Gap—had already requested that Interior prepare a supplement addressing the Academy’s report, the Beatty facility, and four other issues that Interior eventually selected: (1) the potential pathways of waste to the groundwater and then to the Colorado River; (2) the types, quantities, and sources of waste to be disposed of; (3) the recent financial troubles of US Ecology; and (4) protection of the desert tortoise. 
After Interior’s February 1996 announcement that it would prepare a second supplement, the Bureau obtained and summarized public comments and recommended to Interior’s Deputy Secretary that 10 issues be addressed in the supplement. Four of the 10 issues were similar to those that the Committee to Bridge the Gap had already raised. Subsequently, the Deputy Secretary approved 13 issues to be addressed in the second supplement. In addition to the Academy’s report, the new information on the Beatty facility, and the four other issues that the Committee to Bridge the Gap had recommended, Interior expanded the scope of the Indian sacred sites issue and added (1) the movement of radioactive elements in the soil, (2) alternative methods of disposal, (3) the potential introduction of nonnative plants, (4) waste transportation, (5) the state’s long-term obligations, and (6) the public health impacts of operating the disposal facility. Except for the Academy’s report and the new information about the Beatty facility, all of the issues that Interior will address in the second supplement had been considered earlier in the state’s licensing proceeding; in the state’s and the Bureau’s joint environmental impact statement; and in the Bureau’s first supplement of September 1993. According to the Council on Environmental Quality’s regulations for implementing NEPA, however, when a federal agency has already addressed issues in an environmental impact statement, it must prepare a supplement to the statement when significant new circumstances or information relevant to environmental concerns has become available. An agency may also prepare a supplement when it determines that doing so will further the purposes of NEPA. Interior’s announcement that it would prepare the second supplement did not state that the Academy’s report and the new information on the Beatty facility constituted significant new circumstances or information that would require Interior to prepare a supplement. 
According to Interior, its decision to prepare the statement had been prompted by (1) the state’s rejection of Interior’s proposed land-transfer conditions and (2) the passage of 5 years since the initial environmental impact statement had been prepared. Other evidence indicates that Interior did not initially consider the Academy’s report and the new information on Beatty significant enough to require a supplement. For example, the Secretary’s public statement on the Academy’s report said that the report “provides a qualified clean bill of health in relation to concerns about the site.” According to the Secretary, with appropriate land-transfer conditions based on the recommendations of that report, the Secretary was “now confident that the transfer of the land is in the public interest.” Also, when Interior announced that it would prepare the second supplement, it stated that the Survey’s new information on the Beatty site indicated “little similarity with Ward Valley” but underscored the need for continued scientific monitoring at both locations. Interior also did not compare the public comments it received with the state’s licensing record or the previous environmental statements to provide a basis for identifying “significant” new circumstances or information. According to the Bureau’s Sacramento officials who are preparing the second supplement, whether or not there was any “new” information was not important to the Bureau’s deliberations about what issues should or should not be addressed in the supplement. For many of the issues, they said, what was “new” was the public’s concerns about the issues. The effect of the Ward Valley facility on Native Americans in the region is one example of an issue that had been addressed earlier by the state and the Bureau. 
In part, however, Interior plans to address Native American issues in the second supplement because of two recent Executive orders. One order requires federal agencies to accommodate access to and the ceremonial use of Indian sacred sites and avoid adversely affecting the integrity of such sites. The second order requires federal agencies to make “environmental justice” for low-income and minority populations (including Indian tribes) a part of their missions by identifying and addressing, as appropriate, the relatively high and adverse human health or environmental effects of their activities on these groups. To a significant degree, the state and the Bureau had addressed Native American issues in the site selection process, the state’s licensing proceeding, and the 1991 environmental impact statement. The specific consultation steps, according to the 1991 statement, included an archaeological survey of the site with Native American participation. This survey found that no significant cultural resources were present at the site. In addition, US Ecology contacted the Indian tribes in the region to evaluate the potential cultural impacts of a regional nature. A site-specific walkabout by tribal representatives did not identify any unique cultural resources. According to US Ecology’s license application, that part of Ward Valley where the proposed disposal site is located had once been disturbed by military tank maneuvers. Also, electric-power transmission lines cross the site, and a pit used to supply rock for highway construction is nearby. As recently as February 1997, the Director of the Bureau’s Sacramento office stated in a letter to the Environmental Protection Agency that the affected tribes were fully represented and consulted in the scoping and descriptive phases of the 1991 environmental impact statement. 
Interior plans to assess compliance with the two Executive orders in the second supplement by addressing the effects that a disposal facility at Ward Valley could have on Native Americans’ religious and cultural values, tourism, agricultural cultivation, and future economic developments, such as hotels and gambling casinos, along the Colorado River. The river is about 20 miles east of the Ward Valley site at its closest point. In commenting on a draft of our report, Interior also said that it will address the environmental justice implications, for low-income and minority populations that may live near where waste is stored, of not transferring the Ward Valley site to the state. The reasons Interior gave for its decision to prepare a second supplement were the impasse over land-transfer conditions and the age of the original environmental impact statement. Two other reasons for the second supplement, however, have shaped Interior’s actions on the Ward Valley issue for several years; specifically, Interior believes that it should provide a forum for resolving public concerns and independently determine if the site is suitable for a disposal facility. In contrast, California and US Ecology believe that (1) the state—not Interior—has the authority, implementing criteria, and expertise for determining if the site is suitable and (2) Interior had completed all essential requirements for deciding on the land transfer in January 1993. Consequently, California and US Ecology have sued Interior over, among other things, whether Interior has exceeded its authority with respect to radiological safety issues. The lawsuits are pending. Interior’s regulations for transferring federal land under FLPMA do not encourage, require, or prohibit public hearings on proposed transfers. Nevertheless, Interior wanted the state to conduct a formal public hearing on the Ward Valley facility because of the controversy over the facility. 
According to Interior, the second supplement and tritium tests will fulfill its responsibility to assure the public that health and safety concerns are adequately addressed. California conducted a public hearing as a part of its licensing procedures for the Ward Valley facility. The applicable state laws and regulations required the state to conduct a hearing in which the public makes brief oral statements and provides written comments. All comments were to be considered by the state and included in the written licensing record. Several individuals and groups unsuccessfully urged the state to conduct a public hearing on the license application using formal, trial-type procedures. However, a state appellate court found that the state had met the requirements of state law and regulations, and an appeal of the court’s decision was denied. California issued a license to US Ecology to build and operate a disposal facility for low-level radioactive waste at Ward Valley in accordance with the state’s authority under the Atomic Energy Act of 1954 and related state laws and regulations. Interior, however, has not accepted the results of the state’s licensing proceeding as an adequate basis for Interior to make a land-transfer decision. For example, in an August 11, 1993, letter to the governor of California, Interior’s Secretary asked the state to conduct a formal public hearing as part of a credible process for determining if the site is appropriate so the Secretary can make a land-transfer decision. FLPMA requires the Secretary of the Interior to ensure that federal lands transferred to other parties are properly used and that the public interest is protected. California, on the other hand, is responsible for licensing and regulating the Ward Valley facility according to the state’s laws and regulations, which are intended to adequately protect public health and safety. Where the respective responsibilities of Interior and the state overlap, if at all, has been an uncertain matter. 
The former Secretary, in his January 1993 decision (subsequently rescinded) to transfer the land, accepted the state’s and US Ecology’s technical findings supporting the state’s licensing decision and accepted that the proposed facility would be licensed by the state according to all applicable federal and state laws and regulations. In contrast, the current Secretary has asserted more overlap between Interior’s and the state’s respective responsibilities. For example, when the Secretary requested the state to conduct a formal public hearing, he said the hearing should focus on the issue of the migration of radionuclides from the site because that issue directly relates to his “. . . responsibility under federal law regarding the suitability of the site. . . .” Setting aside the issue of authority, Interior has neither the criteria nor the technical expertise to independently assess the suitability of the site from a radiological safety perspective. Moreover, Interior had not sought advice or assistance on the suitability of the site from NRC or, until recently, the Department of Energy (DOE), which have such expertise. Interior has not sought NRC’s assistance in addressing issues about the suitability of the Ward Valley site for a disposal facility. In 1993, the Bureau verbally requested NRC’s views on the adequacy of California’s program for regulating radioactive materials, including the Ward Valley facility. NRC responded that it periodically reviews California’s regulatory program to determine, as required by the Atomic Energy Act, if the state’s program is compatible with NRC’s program for regulating radioactive materials in states that have not agreed to assume this responsibility. 
On the basis of these periodic reviews, NRC said that it had concluded that the state has a highly effective regulatory program for low-level radioactive waste and is capable of conducting an effective and thorough review of US Ecology’s license application for the Ward Valley facility. DOE had no role on the Ward Valley facility until February 1996, when Interior decided to perform the tritium tests at the site. Thereafter, DOE and Interior negotiated conditions under which Interior would use facilities at DOE’s Lawrence Livermore National Laboratory to conduct one technical part of the tests. Interior officials subsequently told us that DOE’s role in the testing has evolved into a partnership with Interior in setting up the test arrangements. The Interior officials also pointed out that federal agencies such as NRC and the Environmental Protection Agency are expected to comment on the second supplement. California and US Ecology do not agree that Interior is authorized to independently determine if the Ward Valley site is suitable for a disposal facility. Their position is that the regulation of radiological safety issues, such as migration of radionuclides, is the state’s responsibility because of the state’s agreement with NRC under the Atomic Energy Act. Therefore, they argue, radiological safety matters are outside of Interior’s authority and expertise. As discussed earlier, the state and US Ecology have sued Interior. They have asked the court to order Interior to complete the sale of the land and declare that Interior had exceeded its authority with respect to protecting the public against radiation hazards. Thus, the courts ultimately will decide the legality of, among other issues raised by the litigation, Interior’s position that it must independently determine if the site is suitable for a disposal facility. 
In conclusion, the task of developing new facilities for disposing of commercially generated low-level radioactive waste has proven more difficult than imagined when the Congress gave states this responsibility 17 years ago. Because no state has yet developed a new facility, the actions in California are viewed as an indicator of whether the current national disposal policy can be successful. In the case of Ward Valley, however, Interior has not accepted the state’s findings in the area of radiological safety as adequate to permit Interior to decide on the land transfer. Instead, Interior has decided that it must independently determine if the site is suitable for a disposal facility. Whether an independent determination is within Interior’s discretion will be decided in the courts. Setting this legal question aside, most of the substantive issues that the public has raised to Interior for its consideration have already been addressed by the state and by the Bureau. Moreover, subsequent new information, such as the Academy’s report, generally favors the proposed facility. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Committee may have. 
GAO discussed the proposed transfer of federal land in Ward Valley, California, to the state for use as a low-level radioactive waste disposal site, focusing on: (1) what sources of information the Department of the Interior relied on in deciding to prepare a second supplemental environmental impact statement and in selecting issues to address in the supplement; (2) whether the selected issues had been considered in earlier state or federal proceedings and, if so, whether they are being reconsidered on the basis of significant new information; and (3) what Interior's underlying reasons were for preparing the supplement. GAO noted that: (1) Interior cited a May 1995 report on the Ward Valley site by the National Academy of Sciences and information developed by its U.S. Geological Survey in 1994 and 1995 about the migration of radioactive elements in the soil from a former disposal facility at Beatty, Nevada, as its basis for preparing the second supplement; (2) it also stated that it would address nearby Indian sacred sites in the supplement; (3) after obtaining and analyzing information from the public, including environmental groups, Native Americans, and others, Interior decided to address 10 more issues in the supplement and to expand the issue of sacred Indian sites to include a variety of issues pertaining to Native Americans; (4) eleven of the 13 issues that Interior is addressing in the second supplement had been considered in California's licensing process and in previous environmental impact statements prepared by the state and Interior's Bureau of Land Management; (5) the other two issues, the findings and recommendations of the Academy and the information on the Beatty facility, are new; (6) the reasons cited by Interior for preparing a second supplement were an impasse with California over land-transfer conditions and the 5 years that had passed since the original environmental impact statement was issued in April 1991; (7) two other reasons, however, 
have shaped Interior's action on the Ward Valley issue over the last several years; (8) specifically, Interior believes that it should provide a forum for resolving public concerns and independently determine if the site is suitable for a disposal facility; (9) it should be noted that California has met all of the state's procedural and substantive requirements for licensing the proposed facility; (10) consequently, the state and US Ecology, the company licensed by the state to construct and operate the disposal facility, have sued Interior to determine, among other things, if Interior exceeded its authority regarding radiological safety matters, such as independently deciding on the site's suitability; and (11) thus, whether or not an independent determination of the site's suitability is within Interior's discretion will be decided in the courts.
As you know, our country’s transition into the 21st Century is characterized by a number of key trends including global interdependence; diverse, diffuse, and asymmetrical security threats; rapidly evolving science and technologies; dramatic shifts in the age and composition of our population; important quality of life issues; and evolving government structures and concepts. Many of these trends are intertwined, and they call for a reexamination of the role of government in the 21st Century given changing public expectations. Leading public and private organizations here in the United States and abroad have found that for organizations to successfully transform themselves they must often change their culture. Leading organizations also understand that their people, processes, technologies, and environments are the key enablers that drive cultural change. For governmental entities, this evolution generally entails shifts away from process to results; stovepipes to matrixes; hierarchical to flatter and more horizontal structures; an inward focus to an external (citizen, customer, and stakeholder) focus; management control to employee empowerment; reactive behavior to proactive approaches; avoiding new technologies to embracing and leveraging them; hoarding knowledge to sharing knowledge; avoiding risk to managing risk; and protecting turf to forming partnerships. While transformation across government is critically important to successful transition into the 21st century, it is of utmost importance at the FBI. This is the agency at the front line of defending the public and our way of life from a new and lethal threat, that of terrorism against Americans. At the same time the FBI maintains the responsibility for investigations of other serious federal crimes. Every American has a stake in assuring the success of the FBI’s efforts. 
The FBI is a unique organization composed of thousands of devoted and capable public servants who live and breathe the agency’s motto of fidelity, bravery, and integrity every day. The FBI has a long and proud history, and it does many things well. But, times have changed, and the FBI must change with the times in considering what it does and how it does business. At the same time, the motto itself is timeless in nature. Any changes at the FBI must be part of, and consistent with, broader governmentwide transformations that are taking place. This is especially true as the establishment of a Department of Homeland Security is debated and put into place. Moreover, Director Mueller has noted that the FBI reorganization and realignment efforts that we are discussing today are just the second phase in a comprehensive effort that he is planning to address a broad range of management and organizational challenges. This is, in effect, a down payment on a huge undertaking. Director Mueller has taken the first and most important step in successfully undertaking the needed transformation at the FBI—he has demonstrated his personal commitment through his direct involvement in developing and leading the Bureau’s transformation efforts. He has recognized a need to refocus priorities to meet the demands of a changing world and is now taking first steps to realign resources to achieve his objectives. His continued leadership, coupled with the involvement of other senior executives at the FBI, and clear lines of accountability for making needed improvements will be critical if the effort is to succeed. These factors are prerequisites to overcoming the natural resistance to change, marshalling the resources needed to improve the Bureau’s effectiveness, and building and maintaining the FBI-wide commitment to new ways of doing business. The Director is early in his 10-year term. 
This should prove very helpful because the experiences of leading organizations suggest that given the enormous challenges the FBI faces, successfully completing needed cultural and other transformations may take 7 or more years. At the same time, some steps are critical and time sensitive. As a result, the FBI needs to develop a comprehensive transformation plan with key milestones and assessment points to guide its overall transformation efforts. FBI Director Mueller unveiled the second phase of the reorganization at a news conference on May 29 and discussed it further at a hearing before the Senate Judiciary Committee on June 6, 2002. These proposed changes are designed to build on the initial reorganization actions Director Mueller took in December 2001. These earlier actions were to strengthen the FBI’s top-level management structure, enhance accountability, reduce executive span of control, and establish two new divisions for Records Management and Security. The central thrust of this next phase of the reorganization plan is to build an FBI with a national terrorism response capability that is larger and more mobile, agile, and flexible. The key elements of this second installment of the reorganization include a shifting of some resources from long-standing areas of focus, such as drugs, to counterterrorism and intelligence; building analytic capacity; and recruiting to address selected skill gaps. In light of the events of September 11, 2001, this shift is clearly not unexpected and is, in fact, consistent with FBI’s 1998 Strategic Plan as well as the current Department of Justice Strategic Plan. Since September 11, unprecedented levels of FBI resources have been devoted to counterterrorism and intelligence initiatives with widespread public approval. Indeed, the goals of this phase of the reorganization plan are not highly controversial. 
Enhancement of resources for counterterrorism, greater sharing of information with the Central Intelligence Agency (CIA) and others, improvements in analytic capacity, establishment of a centralized intelligence unit to make sense out of the gathered information, more training, and recruitment of specialists all seem to be rational steps to building agency capacity to fight terrorism. However, some specific aspects of the plan should be highlighted. A key element of the reorganization is to “redirect FBI’s agent workforce to ensure that all available energies and resources are focused on the highest priority threat to the nation, i.e. terrorism.” This shift is intended to move the FBI from a reactive mode of operation to a more proactive orientation. The primary goal is to prevent terrorism rather than investigate and apprehend after an event occurs. The FBI has been involved in proactive counterterrorism work for some time. This reorganization is intended to make a greater commitment. In accordance with the goal, some agents from drug, white collar, and violent crime investigative work will shift their focus to counterterrorism. Specifically, the plan calls for 518 agents to be shifted—400 agents from drug work and 59 each from white collar and violent crime to be reassigned to work on counterterrorism, security improvements, and training. Of the 518 agents being shifted, 480 will be permanently reassigned to counterterrorism work. In the case of drug enforcement, this shift moves about 30 percent of the staff currently assigned to this activity to counterterrorism work. For white collar and violent crime, the shift is not as substantial, representing about 2.5 percent and 3 percent of their staff years, respectively. 
Given the massive move of resources to counterterrorism following the events of September 11, this really represents fewer agents returning to their more traditional crime investigative work as opposed to agents moving away from current drug, white collar, and violent crime work. According to FBI data, the number of field agents assigned to terrorism work jumped from 1,057 before September 11 to 6,390 immediately following the tragic events of that day. FBI data show that a shift of 518 agents from drugs, white collar crime, and violent crime seems to do little to change the picture of the overall deployment of FBI special agent resources. Counterterrorism agent resources go from about 15 percent of total agent resources, to just under 20 percent. Thus, it seems that despite a change in priorities, most of the FBI resources will remain devoted to doing the same types of work they have been doing in the past. This realignment actually affects about five percent of the total FBI special agent workforce, and, therefore, represents a relatively modest change in the focus of the Bureau as a whole at least for the present time. Is this the right amount of resources to shift to counterterrorism at this time? Is this too much? Perhaps the more salient question is, is this too little? It is probably unrealistic to ask the FBI or anyone else for the answer to this question at this time, given that the government’s information about the nature and extent of the terrorist threat is still evolving. However, this is a question that must be answered in due course based on a comprehensive threat assessment and analysis, including the role the FBI and other government agencies should play in our future counterterrorism efforts. According to the FBI, the Special Agents in Charge (SACs) of the 56 field offices were asked to indicate how many agents could be redirected into terrorism work in their locations without unduly jeopardizing other investigative work. 
In fact, SACs generally volunteered more agents to shift to counterterrorism work than were actually shifted. According to the FBI, SACs were given general guidance but not specific guidelines or other directives upon which to base their decisions concerning reallocation of resources. Thus, for good or ill, field offices may have used different criteria for determining how many resources could be reallocated. FBI headquarters made final reallocation decisions based on resource needs requested by the Executive Assistant Director for Counterterrorism/Counterintelligence. Careful monitoring will be needed to ensure that the agents devoted to counterterrorism can be appropriately utilized and to determine to what extent additional resources will be needed. Conversely, the impact of having fewer field agents working drug cases needs to be monitored and assessed over time. Prior to September 11, 2001, there was no indication from the FBI that its more traditional crime areas were overstaffed. FBI officials advised us that agents will still participate in as many crime-fighting taskforces as they have in the past, but that the number of agents assigned to each effort will be fewer in order to free resources for counterterrorism work. FBI officials also indicated that agents would be made available to assist state and local law enforcement with short-term needs, such as adding agents when widespread arrests are planned. In the drug area, which is the hardest hit in this reallocation, the Drug Enforcement Administration (DEA) is the major federal player. While DEA’s resources have increased in recent years, at this time we are not aware of any plans by DOJ to request additional resources for DEA to fill any gap that may be left by withdrawal of substantial FBI drug enforcement resources. DEA has announced, though, that it will move some agents from headquarters to the field, which could potentially help fill any gaps in federal-level drug law enforcement. 
The reorganization plan also calls for a build-up of the FBI headquarters Counterterrorism Division through the transfer of 150 counterterrorism agents from field locations to Washington, D.C. This seems consistent with the Director’s intention of shifting from a reactive to a proactive orientation in addressing terrorism and making counterterrorism a national program with leadership and expertise in headquarters and a response capability that is more mobile, agile, and flexible in terms of assisting the field offices. These 150 positions would then be backfilled in the field through recruitment of new agents. According to the FBI, the enhancement of this headquarters’ unit is intended to build “bench strength” in a single location rather than have expertise dispersed in multiple locations. When additional counterterrorism assistance is needed in field locations, headquarters staff would be deployed to help. Staff assigned to this unit would also be expected, and encouraged through incentives, to stay in counterterrorism work for an extended period of time. Staying in place would help to ensure increasing the depth of skills rather than following the more usual FBI protocol of more frequent rotations through a variety of assignments. An important part of the build-up of the Counterterrorism Division and making headquarters more responsive to the field, according to the FBI, is the establishment of “flying squads” with national level expertise and knowledge to enhance headquarters’ ability to coordinate national and international investigations and support field investigative operations. The flying squads are intended to provide a “surge capacity” for quickly responding to and resolving unfolding situations and developments in locations, both within and outside the United States, where there is a need to augment FBI field resources with specialized personnel or there is no FBI presence. 
Another important part of the build-up is the establishment of a National Joint Terrorism Task Force to facilitate the flow of information quickly and efficiently between the FBI and other federal, state, and local law enforcement and intelligence agencies. The national task force, which is to be comprised of members of the intelligence community, other federal law enforcement agencies, and two major police departments, is intended to complement and coordinate the already established 51 field office terrorism task forces. Training is also essential to ensuring that resources shifted to counterterrorism work can be used most effectively. There is no doubt that some of the skills needed for criminal investigations and intelligence work overlap with the skills needed for counterterrorism work. There will, however, be a need for specialized training concerning terrorist organizations and tactics. The FBI plans to fill this training need. Director Mueller is planning a number of steps in this phase of the reorganization to align resources with priorities. But, a broader assessment of the organization in relation to priorities may identify other realignment issues. Given the seeming disparity between priorities and resource allocation that will remain after the current realignment, more resource changes may be needed. Reconsideration may also be given to the field office structure. Is the 56 field office configuration the most effective spread of staff in terms of location to achieve results in relation to the priorities of the 21st Century? In December 2001, Director Mueller announced a headquarters reorganization that altered the number of layers of management. But, is more de-layering needed to optimize the functioning of the organization? Director Mueller will also need to address significant succession planning issues. 
According to a 2001 Arthur Andersen management study of the FBI, about a quarter of the special agent workforce will be eligible to retire between 2001 and 2005. Of perhaps greater concern, 80 percent of the Senior Executive Corps was eligible for retirement at the time of the Arthur Andersen review. While the potential loss of expertise through retirements will be substantial, this turnover also affords Director Mueller the opportunity to change culture, skill mix, deployment locations, and other agency attributes. To build the capacity to prevent future terrorist attacks, the FBI plans to expand its Office of Intelligence with an improved and robust analytical capability. In the past, the FBI has focused on case-specific analysis and on terrorism enterprise intelligence investigations intended to discern the structure, scope, membership, and finances of suspect organizations. Shortcomings in its analytical capabilities were identified by the FBI as far back as its 1998 strategic plan. That plan stated that the FBI lacked sufficient numbers of high-quality analysts, that most analysts had little or no training in intelligence analysis, and that many lacked academic or other experience in the subject matter for which they were responsible. Furthermore, it stated that the FBI needed a strategic analysis capability for spotting trends and assessing U.S. vulnerabilities to terrorist activities. The events of September 11 and subsequent revelations highlight several of these continuing weaknesses. The Office of Intelligence, created in December 2001 as part of the first phase of the reorganization, supports both counterterrorism and counterintelligence. The Office will focus on building a strategic analysis capability and improving the FBI’s capacity to gather, analyze, and share critical national security information. 
According to the FBI, a new College of Analytical Studies at the FBI Academy will support the new Office by training analysts on the latest tools and techniques for both strategic and tactical analysis. This is a long-term effort that is long overdue, as is the need for technology that can support the analysts’ work. Our May 2000 review of the Justice Department’s Campaign Finance Task Force found that the FBI lacked an adequate information system that could manage and interrelate the evidence that had been gathered in relation to the Task Force’s investigations. It is unclear how the FBI’s proposed analytical efforts will interrelate with the planned analytical capability of the proposed Department of Homeland Security. The National Infrastructure Protection Center (NIPC) at the FBI is the “national focal point” for gathering information on threats and facilitating the federal government’s response to computer-based incidents. Specifically, NIPC is responsible for providing comprehensive analyses on threats, vulnerabilities, and attacks; issuing timely warnings on threats and attacks; and coordinating the government’s response to computer-based incidents. In April 2001, we reported that multiple factors have limited the development of NIPC’s analysis and warning capabilities. These include the lack of a comprehensive governmentwide or national framework for promptly obtaining and analyzing information on imminent attacks, a shortage of skilled staff, the need to ensure that NIPC does not raise undue alarm for insignificant incidents, and the need to ensure that sensitive information is protected. At that time, we recommended that NIPC develop a comprehensive written policy for establishing analysis and warning capabilities. Although the Director of NIPC generally agreed with GAO’s findings and stated that the NIPC considers it of the utmost urgency to address the shortcomings identified, we are not aware of any actions to address this recommendation. 
The FBI reorganization plan calls for NIPC to be housed in the Cyber Division, which is under the leadership of the Executive Assistant Director for Criminal Investigations. This location seems inconsistent with ensuring that NIPC focuses proactively on early warning rather than reacting to incidents after the fact. The President’s plans for the Department of Homeland Security call for NIPC to be moved out of the FBI and into this new department. Regardless of location, a focus on enhancing its capabilities as outlined in our 2001 report is critical. The plan also calls for the recruitment of additional agents, analysts, translators, and others with certain specialized skills and backgrounds. In total, the FBI is expected to hire 900 agents this year—about 500 to replace agents who are projected to be leaving the agency and 400 to fill newly created positions. FBI officials stated that based on past experience they expect to be able to meet their agent-recruiting target and can accommodate the size of this influx at their training facilities. However, recruitment may become more difficult than in prior years because of the competing demand for qualified candidates, particularly those with specialized skills (e.g., technology, languages, and sciences), from other law enforcement and commercial entities that are also planning to increase their investigative capacity this year. This would include competition for qualified staff with the Transportation Security Administration and with the proposed Department of Homeland Security. In January 2002, we reported on the need for additional translators and interpreters in four federal agencies, including the FBI. We reported that of a total of about 11,400 special agents at the FBI, just under 1,800 have some foreign language proficiency, with fewer than 800 (about 7 percent) having language skills sufficient to easily interact with native speakers. 
Hiring new agents with foreign language proficiency, especially those with skills in Middle Eastern and Asian languages, is essential but could be difficult given competing market demands for their skills. Obtaining security clearances and completing basic training will add time to the process of enhancing the FBI’s strength in language proficiency. The FBI also uses part-time contract staff to meet translation and interpretation needs and to augment its 446 authorized translator and interpreter positions (55 of which are vacant at this time). However, counterterrorism missions may require flexibility that contract staff working part-time schedules cannot provide, such as traveling on short notice or working extended and unusual hours. While the FBI has shared linguistic resources with other agencies, more opportunities for pooling these scarce resources should be considered. Transformations of organizations are multifaceted undertakings. The recently announced changes at the FBI focus on realignment of existing resources to move in the direction of aligning with the agency’s new priorities. Earlier changes altered the FBI’s top-level management structure, accountability, and span of control. A variety of issues will require the Director’s attention, and that of others, including Attorney General Ashcroft, to successfully move the agency into the 21st Century. These issues include major communications and information technology improvements, development of an internal control system that will ensure protection of civil liberties as investigative constraints are loosened, and management of the ripple effect that changes at the FBI will have on other aspects of the law enforcement community. Communications has been a longstanding problem for the FBI. This problem has included antiquated computer hardware and software as well as the lack of a fully functional e-mail system. 
These deficiencies significantly hamper the ability to share important and time-sensitive information within the FBI and with other intelligence and law enforcement agencies. Sharing of investigative information is a complex issue that encompasses legal requirements related to law enforcement sensitive and classified information and its protection through methods such as encryption. It is also a cultural issue related to a tradition of agents holding investigative information close so as not to jeopardize evidence in a case. In a more proactive investigative environment, however, functional communication is of paramount importance and will be essential for partnering with other law enforcement agencies and the intelligence community. Stated differently, we do not believe the FBI will be able to successfully change its mission and effectively transform itself without significantly upgrading its communications and information technology capabilities. This is critical, and it will take time and money to successfully address. In February 2002, as part of a governmentwide assessment of federal agencies, we reported on enterprise architecture management needs at the FBI. An enterprise architecture is a comprehensive and systematically derived description of an organization’s operations, in both logical and technical terms, that has been shown to be essential to successfully building major information technology (IT) systems. Specifically, we reported that the FBI needed to fully establish the management foundation that is necessary to begin successfully developing, implementing, and maintaining an enterprise architecture. While the FBI has implemented most of the core elements associated with establishing the management foundation, it has not yet established a steering committee or group with responsibility for directing and overseeing the development of the architecture. 
While establishing the management foundation is an essential first step, important additional steps still need to be taken for the FBI to fully implement the set of practices associated with effective enterprise architecture management. These include, among other things, having a written and approved policy for developing and maintaining the enterprise architecture and requiring that IT investments comply with the architecture. The successful development and implementation of an enterprise architecture, an essential ingredient of an IT transformation effort for any organization and even more important for an organization as complex as the FBI, will require, among other things, sustained commitment by top management, adequate resources, and time. The Director has designated IT as one of the agency’s 10 priorities. Although the FBI wishes to become a more proactive agency, it needs to be cognizant of individuals’ civil liberties. Guidelines created in the 1970’s to stem abuses of civil liberties resulting from the FBI’s domestic intelligence activities have recently been revised to permit agents to be more proactive. For example, these guidelines permit FBI presence at public gatherings, which generally had been inhibited by the prior guidelines. No information obtained from such visits can be retained unless it relates to potential criminal or terrorist activity. To better ensure that these new investigative tools do not infringe on civil liberties, appropriate internal controls, such as training and supervisory review, must be developed, implemented, and monitored. Our central focus today is on the effects of changes at the FBI on the FBI itself, and we have also alluded to a potential impact on DEA of a shift in FBI drug enforcement activity. 
It is also important to remember that these changes may have a ripple effect on the nature and volume of work of other Justice Department units and their resource needs, including the Office of Intelligence Policy and Review, the U.S. Attorneys Offices, and the Criminal Division’s Terrorism and Violent Crime Section. For example, if the volume of FBI counterterrorism investigations increases substantially and the FBI takes a more proactive investigative focus, one could expect an increased volume of Foreign Intelligence Surveillance Act requests to the Office of Intelligence Policy and Review. Moreover, should those requests be approved and subsequent surveillance or searches indicate criminal activity, U.S. Attorneys Offices and the Terrorism and Violent Crime Section would be brought in to apply their resources to those investigations. In addition, because of the FBI’s more proactive investigations, one could expect more legal challenges to the admissibility of the evidence obtained and to the constitutionality of the surveillance or search. State and local law enforcement are also likely to be affected by a change in FBI focus. Although the major gap that state and local law enforcement may have to help fill as a result of this realignment is in the drug area, if additional FBI resources are needed for counterterrorism, state and local law enforcement may have to take on greater responsibility in other areas of enforcement as well. As the FBI moves forward in its efforts to transform its culture and reexamine its roles, responsibilities, and desired results to effectively meet the realities and challenges of the post-September 11 environment, it should consider employing the major elements of successful transformation efforts that have been utilized by leading organizations both here and abroad. These begin with gaining the commitment and sustained attention of the agency head and all in senior-level leadership. 
Transformation also requires a redefinition and communication of priorities and values and a performance management system that will reinforce agency priorities. It will also require a fundamental reassessment of the organizational layers, levels, units, and locations. Any realignment must support the agency’s strategic plan and desired transformation. Organizations that have successfully undertaken transformation efforts also typically use best practices in areas such as strategic planning; strategic human capital management; senior leadership and accountability; realignment of activities, processes, and resources; and internal and external collaboration. It has long been understood that in successful organizations strategic planning is used to determine and reach agreement on the fundamental results the organization seeks to achieve, the goals and measures it will set to assess programs, and the resources and strategies needed to achieve its goals. Strategic planning helps an organization to be proactive, anticipate and address emerging threats, and take advantage of opportunities, rather than simply reacting to events and crises. Leading organizations, therefore, understand that planning is not a static or occasional event, but a continuous, dynamic, and inclusive process. Moreover, it can guide decision making and day-to-day activities. In addition to contributing to the overall DOJ Strategic Plan, the FBI has developed its own strategic planning document. Issued in 1998, and intended to cover a 5-year period, the plan emphasized the need for many of the changes we are talking about today. It is important to note that the 1998 plan called for a build-up of expertise and emphasis in the counterterrorism area and a diminution of activities in enforcement of criminal law, which is consistent with the focus of the Director’s current priorities. 
These priorities, as presented by the Director on May 29, 2002, lay the groundwork for a new strategic plan that FBI officials have indicated they will be developing. A new strategic plan is essential to guide decision making in the FBI’s transformation. The Director has set agency priorities, but the strategic plan can be the tool that links actions together to achieve success. The first step in developing a strategic plan is to establish a framework that can guide the plan’s formulation. The FBI’s employees, or human capital, represent its most valuable asset. An organization’s people define its character, affect its capacity to perform, and represent the knowledge base of the organization. We have recently released an exposure draft of a model of strategic human capital management that highlights the kinds of thinking that agencies should apply and steps they can take to manage their human capital more strategically. The model focuses on four cornerstones for effective strategic human capital management—leadership; strategic human capital planning; acquiring, developing, and retaining talent; and results-oriented organizational culture—that the FBI and other federal agencies may find useful in guiding their efforts. Director Mueller recognizes that one of the most basic human capital challenges the FBI faces is to ensure that it has staff with the competencies—knowledge, skills, and abilities—needed to address the FBI’s current and evolving mission. The announced plan makes a number of changes related to human capital that should move the FBI toward ensuring that it has the skilled workforce that it needs and that staff are located where they are needed most. 
Hiring specialists, developing added strength in intelligence and analytic work, and moving some expertise to headquarters so that it can be more efficiently shared across the agency are all positive steps toward maximizing the value of this vitally important agency asset. Given the anticipated competition for certain highly skilled resources, some hiring flexibility may be needed. The FBI does not have a comprehensive strategic human capital plan. Such a plan, flowing out of an updated strategic plan, could guide the FBI as it moves through an era of transformation. A performance management system that encourages staff to focus on achieving agency goals is an important tool for agency transformation and leads to positive staff development. The importance of Director Mueller’s personal commitment to change at the FBI cannot be overstated. His leadership and commitment are essential, but he needs help to be successful. Director Mueller has recently brought on board a Special Assistant to oversee the reorganization and re-engineering initiatives. This individual brings a wide range of expertise to the position and will perform many of the functions of a Chief Operating Officer (COO). The FBI can reinforce its transformation efforts and improve its performance by aligning institutional, unit, and individual employee performance expectations with planned agency goals and objectives. This alignment will help the FBI’s employees see the connection between their daily activities and the Bureau’s success. High-performing organizations have recognized that a key element of an effective performance management system is to create a “line of sight” that shows how individual responsibilities and day-to-day activities are intended to contribute to organizational goals. Coupled with this is the need for a performance management system that encourages staff to focus on performing their duties in a manner that helps the FBI achieve its objectives. 
The FBI currently uses a pass/fail system to rate special agents’ performance. This type of system does not provide enough meaningful information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. As a result, the FBI needs to review and revise its performance management system so that it is in line with the agency’s strategic plan, including its results, core values, and transformational objectives. An organization’s activities, core processes, and resources must be aligned to support its mission and help it achieve its goals. Leading organizations start by assessing the extent to which their programs and activities contribute to meeting their mission and intended results. They often find, as the FBI’s efforts are suggesting, that their organizational structures are obsolete and inadequate to meet modern demands and that levels of hierarchy or field-to-headquarters ratios must be changed. As indicated earlier in this testimony, the FBI reorganization plan deals directly with reallocation of existing resources to realign more clearly with the agency’s revised mission. The Director has taken a major step in relation to this aspect of transforming an organization. However, ultimately the FBI must engage in a fundamental review and reassessment of the level of resources that it needs to accomplish its mission and how it should be organized to help achieve the desired results. This means reviewing and probably revising the number of layers, levels, and units to increase efficiency and enhance flexibility and responsiveness. There is also a growing understanding that all meaningful results that agencies hope to achieve are accomplished through networks of governmental and nongovernmental organizations working together toward a common purpose. In almost no area of government is this truer than in the law enforcement arena. 
Effectiveness in this domain, particularly in relation to counterterrorism, is dependent upon timely information sharing and coordinated actions among the multiple agencies of the federal government, states, localities, the private sector, and, particularly with the FBI, the international community. In his plan, Director Mueller has indicated that he has taken and will take additional steps to enhance communication with the CIA and other outside organizations. It should be noted that the CIA has agreed to detail analysts to the FBI on a short-term basis to augment FBI expertise. In the law enforcement setting, specifically at the FBI, there are certain legal restrictions concerning the sharing of information that set limits on communications. Recently, some of these restrictions have been eased. The USA PATRIOT Act, P.L. 107-56, contains a number of provisions that authorize information sharing and coordination of efforts relating to foreign intelligence investigations. For example, Section 905 of the PATRIOT Act requires the Attorney General to disclose to the Director of the CIA foreign intelligence information acquired by DOJ in the course of a criminal investigation, subject to certain exceptions. Internally, leading organizations seek to provide managers, teams, and employees at all levels the authority they need to accomplish programmatic goals and work collaboratively to achieve organizational outcomes. Communication flows up and down the organization to ensure that line staff can provide leadership with the perspective and information that leadership needs to make decisions. Likewise, senior leadership keeps line staff informed of key developments and issues so that the staff can best contribute to achieving the organization’s goals. New provisions that give FBI field offices more authority to initiate and continue investigations are in keeping with this tenet of leading organizations. 
Transforming an organization like the FBI, with its deep-seated culture and tradition, is a massive undertaking that will take considerable effort and time. The reorganization and realignment plan is an important first step; implementing the plan and the elements of a successful organizational transformation will take many years. A strategic plan and a human capital plan are essential to keep the FBI on course. Continuous internal, and independent external, monitoring and oversight are essential to help ensure that the implementation of the transformation stays on track and achieves its purpose of making the FBI more proactive in the fight against terrorism without compromising civil rights. It was such oversight of the FBI’s domestic intelligence activities in the 1970’s that helped identify civil liberties abuses and helped lead to the more restrictive Attorney General guidelines for such activities. The DOJ’s Inspector General recently discussed several ongoing, completed, and planned reviews relating to counterterrorism and national security. But it is equally important for Congress to actively oversee the FBI’s proposed transformation. In its request for our testimony today, the Committee asked us to identify issues relating to the reorganization and realignment for follow-up review and said that it may want us to do further reviews of the implementation of the reorganization plan. We stand ready to assist this and other congressional committees in overseeing the implementation of this landmark transformation. 
There are, in fact, specific areas relating to the reorganization and realignment that might warrant more in-depth review and scrutiny, including (1) progress in developing a new strategic plan, (2) a review of broader human capital issues, (3) FBI uses of the funds appropriated to fight terrorism, (4) measurement of performance and results, (5) the implementation of the Attorney General’s revised guidelines, and (6) the upgrading of information technology and analytic capacity. In closing, I would like to commend the Department of Justice and FBI officials for their cooperation and responsiveness in providing requested documentation and scheduling meetings needed to develop this statement within a tight timeframe. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you and the Subcommittee members may have. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002. National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002. Homeland Security: Responsibility and Accountability for Achieving National Goals. GAO-02-627T. Washington, D.C.: April 11, 2002. National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002. Homeland Security: Progress Made; More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. 
Washington, D.C.: October 12, 2001. Critical Infrastructure Protection: Significant Challenges in Safeguarding Government and Privately Controlled Systems from Computer-Based Attacks. GAO-01-1168T. Washington, D.C.: September 26, 2001. Homeland Security: A Framework for Addressing the Nation’s Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001. Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities. GAO-01-1005T. Washington, D.C.: July 25, 2001. Critical Infrastructure Protection: Significant Challenges in Developing National Capabilities. GAO-01-323. Washington, D.C.: April 25, 2001. Critical Infrastructure Protection: Challenges to Building a Comprehensive Strategy for Information Sharing and Coordination. GAO/T-AIMD-00-268. Washington, D.C.: July 26, 2000. Critical Infrastructure Protection: National Plan for Information Systems Protection. GAO/AIMD-00-90R. Washington, D.C.: February 11, 2000. Critical Infrastructure Protection: Fundamental Improvements Needed to Assure Security of Federal Operations. GAO/T-AIMD-00-7. Washington, D.C.: October 6, 1999. Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002. Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002. Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002. 
Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Information Security: Code Red, Code Red II, and SirCam Attacks Highlight Need for Proactive Measures. GAO-01-1073T. Washington, D.C.: August 29, 2001. International Crime Control: Sustained Executive-Level Coordination of Federal Response Needed. GAO-01-629. Washington, D.C.: August 13, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000. Combating Terrorism: Linking Threats to Strategies and Resources. GAO/T-NSIAD-00-218. Washington, D.C.: July 26, 2000. Combating Terrorism: How Five Foreign Countries Are Organized to Combat Terrorism. GAO/NSIAD-00-85. Washington, D.C.: April 7, 2000. Combating Terrorism: Issues in Managing Counterterrorist Programs. GAO/T-NSIAD-00-145. Washington, D.C.: April 6, 2000. Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. 
Combating Terrorism: Analysis of Federal Counterterrorist Exercises. GAO/NSIAD-99-157BR. Washington, D.C.: June 25, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999. Combating Terrorism: Issues to Be Resolved to Improve Counterterrorism Operations. GAO/NSIAD-99-135. Washington, D.C.: May 13, 1999. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Observations on Crosscutting Issues. GAO/T-NSIAD-98-164. Washington, D.C.: April 23, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. Combating Terrorism: Federal Agencies’ Efforts to Implement National Policy and Strategy. GAO/NSIAD-97-254. Washington, D.C.: September 26, 1997. Terrorism and Drug Trafficking: Responsibilities for Developing Explosives and Narcotics Detection Technologies. GAO/NSIAD-97-95. Washington, D.C.: April 15, 1997. FBI Intelligence Investigations: Coordination Within Justice on Counterintelligence Criminal Matters Is Limited. GAO-01-780. Washington, D.C.: July 16, 2001. 
GAO’s Work at the FBI: Access to Data, Documents, and Personnel. GAO-01-888T. Washington, D.C.: June 20, 2001. Information Security: Software Change Controls at the Department of Justice. GAO/AIMD-00-191R. Washington, D.C.: June 30, 2000. Campaign Finance Task Force: Problems and Disagreements Initially Hampered Justice’s Investigation. GAO/GGD-00-101BR. Washington, D.C.: May 31, 2000. Year 2000 Computing Challenge: Readiness of FBI’s National Instant Criminal Background Check System Can Be Improved. GAO/AIMD/GGD-00-49. Washington, D.C.: December 16, 1999. FBI Accountability for Drugs Used in Special Operations: Deficiencies Identified and Actions Taken. GAO/AIMD-00-34R. Washington, D.C.: December 2, 1999. Year 2000 Computing Challenge: FBI Needs to Complete Business Continuity Plans. GAO/AIMD-00-11. Washington, D.C.: October 22, 1999. Combating Terrorism: FBI’s Use of Federal Funds for Counterterrorism- Related Activities (FY1955-1998). GAO/GGD-99-7. Washington, D.C.: November 20, 1998. FBI: Advanced Communications Technologies Pose Wiretapping Challenges. GAO/IMTEC-92-68BR. Washington, D.C.: July 17, 1992. Justice Management: The Value of Oversight Has Been Demonstrated. GAO/T-GGD-91-51. Washington, D.C.: July 11, 1991. International Terrorism: FBI Investigates Domestic Activities to Identify Terrorists. GAO/GGD-90-112. Washington. D.C.: September 7, 1990. International Terrorism: Status of GAO’s Review of the FBI’s International Terrorism Program. GAO/T-GGD-89-31. Washington, D.C.: June 22, 1989. FBI Domestic Intelligence Operations: An Uncertain Future. GAO/GGD-78-10. Washington, D.C.: November 9, 1977. Controlling the FBI’s Domestic Intelligence Operations. GAO/GGD-76-79. Washington, D.C.: March 29, 1976. FBI Domestic Intelligence Operations — Their Purpose and Scope: Issues that Need to be Resolved. GAO/GGD-76-50. Washington, D.C.: February 24, 1976. Managing for Results: Building on the Momentum for Strategic and Human Capital Reform. GAO-02-528T. 
Washington, D.C.: March 18, 2002. A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002. Information Technology: Enterprise Architecture Use Across the Federal Government Can Be Improved. GAO-02-6. Washington, D.C.: February 19, 2002. Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002. Information Security: Advances and Remaining Challenges to Adoption of Public Key Infrastructure Technology. GAO-01-277. Washington, D.C.: February 2001. Management Reform: Elements of Successful Improvement Initiatives. GAO/T-GGD-00-26. Washington, D.C.: October 15, 1999. Agencies’ Annual Performance Plans Under the Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking. GAO/GGD/AIMD-10.1.18. Washington, D.C.: February 1998. Performance-Based Organizations: Lessons From the British Next Steps Initiative. GAO/T-GGD-97-151. Washington, D.C.: July 8, 1997. Performance-Based Organizations: Issues for the Saint Lawrence Seaway Development Corporation Proposal. GAO/GGD-97-74. Washington, D.C.: May 15, 1997.
This testimony discusses the Federal Bureau of Investigation's (FBI) proposed reorganization and realignment plans. The FBI's plans are part of a broader effort to fundamentally transform the federal government in light of recent trends and long-range fiscal challenges. As it moves into the 21st century, the country faces several key trends, including global interdependence; diverse, diffuse, and asymmetrical security threats; rapidly evolving science and technologies; dramatic shifts in the age and composition of the population; important quality-of-life issues; and evolving government structures and concepts. The second phase of the reorganization focuses on major aspects of the FBI's realignment efforts, including realigning staff, building analytical capacity, the National Infrastructure Protection Center, and recruiting. Other issues include (1) major communications and information technology improvements, (2) development of an internal control system that will ensure protection of civil liberties as investigative constraints are loosened, and (3) management of the ripple effect that changes at the FBI will have on other parts of the law enforcement community. As the FBI moves to meet the realities and challenges that have emerged since September 11, it should consider employing the major elements of successful transformation efforts used by leading organizations. These elements include strategic planning; strategic human capital management; senior leadership and accountability; realignment of activities, processes, and resources; and internal and external collaboration. Continuous internal, and independent external, monitoring and oversight are essential to ensure that the transformation stays on track and achieves its purpose of making the FBI more proactive in the fight against terrorism without compromising civil rights.
There is no single definition of transit-oriented development; however, research generally describes such a development as a compact, mixed-use, walkable neighborhood located near transit facilities. Research has highlighted that transit-oriented developments are typically near a fixed-guideway rail station, generally encompass multiple city blocks up to a half-mile from a transit station, have pedestrian-friendly environments and streetscapes, and include high-density and mixed-use developments. In addition, these developments may have fewer parking spaces than more traditional developments because residents have easy access to transit, and thus less need for an automobile. Transit-oriented developments can range in both size and scope, with some located in suburban neighborhoods with streetcars or bus rapid transit systems and community-related services, while others are located in major urban locations with light, heavy, or commuter rail. Transportation experts believe that transit-oriented developments can increase accessibility to employment, educational, cultural, and other opportunities by giving households more transportation options, thereby increasing transit ridership and reducing road congestion. Figure 1 provides a graphic representation of a transit-oriented development, and appendix II provides a description of various types of such developments. Planning and development of a transit-oriented development and affordable housing are driven largely by state and local governments, transit agencies, and private developers. For example, state and local government agencies provide much of the infrastructure necessary for transit-oriented developments, including transit stations, connections to other transportation modes, sidewalks, utilities, and other public amenities. Local governments also create the zoning environment, which may, for example, allow developers to build a mix of uses at higher densities.
Some of the key agencies involved and their principal roles are summarized below.

State and local departments of transportation and metropolitan planning organizations develop transportation plans and improvement programs and build, maintain, and operate transportation infrastructure and services.

Local transit agencies, such as transit authorities or transit operators, are responsible for building, maintaining, and operating transit systems. These transit systems can include fixed-guideway transit systems—such as light or heavy rail, and bus rapid transit—ferry systems, paratransit services, and local bus service.

Local county and city governments, and regional councils, through agencies such as county or city planning departments, have control over land use planning, which includes zoning policies and growth management policies. Regional councils develop land use plans used by metropolitan planning organizations for transportation planning.

In addition, state housing agencies, local governments, and private and nonprofit housing developers are the main stakeholders in building affordable housing. Some of the key agencies involved include:

State housing development and financing agencies provide funding for affordable housing through the Low Income Housing Tax Credit (LIHTC) program—an indirect federal subsidy used to finance the development of affordable rental housing for low-income households—and other state programs for affordable housing.

City and county housing departments are responsible for planning, developing, and funding affordable housing. In addition, local housing departments or agencies are required by federal law to develop local area housing plans.

Local public housing authorities (PHA), normally created by state law, typically manage a local region’s public housing units and federally sponsored housing voucher programs.
Private for-profit housing developers and nonprofit housing developers, such as community development corporations, build and manage housing units.

FTA provides financial and technical assistance to local and state public transit agencies to build, maintain, and operate public transit systems. FTA’s New Starts program, its major capital investment program for new fixed-guideway transit systems and extensions to existing ones—a key element of transit-oriented developments—awards funds to individual projects through a competitive selection process. Only a few systems are recommended by FTA for funding in each fiscal year. FTA also provides transit funding to state and local governments through formula grants, which are funded entirely from the Highway Trust Fund’s Mass Transit Account. These grants provide capital and operating assistance to local transit agencies and states through a combination of five relatively large and five smaller grant programs. In addition, two programs administered by the Federal Highway Administration, the Surface Transportation Program and the Congestion Mitigation and Air Quality Improvement Program—also referred to here as flexible funding programs—routinely provide state and local transportation agencies flexibility in using funding for transit projects by permitting a portion of the program funding to be transferred for these purposes. A portion of flexible funding is allocated to localities and metropolitan planning organizations rather than states, allowing local authorities, acting through the metropolitan planning organization, to select projects reflecting their jurisdictions’ priorities. Table 1 provides additional information on these programs. HUD generally provides rental housing assistance through three major affordable housing programs—housing choice vouchers, public housing, and project-based rental assistance.
These three programs generally serve low-income households—that is, households with incomes less than or equal to 80 percent of area median income (AMI). Some of these programs include targets for households with extremely low incomes—30 percent or less of AMI. HUD-assisted households generally pay 30 percent of their monthly income, after certain adjustments, toward their unit’s rent. The Housing Choice Voucher program, which supports over 2 million housing units and is administered by local PHAs, provides vouchers that eligible families can use to rent houses or apartments in the private housing market. Voucher holders are responsible for finding suitable housing that meets HUD’s housing quality standards. The subsidies in the voucher program are provided to the household (that is, tenant-based), so tenants can use the vouchers in new residences if they move. The housing subsidy is paid to the property owner directly on behalf of the participating households. The household then pays the difference between the actual rent charged by the owner and the amount subsidized by the program. PHAs have some flexibility to determine the maximum amount of rental subsidy they can pay for assisted households within limits set by HUD. For example, HUD establishes “fair market rents” for each metropolitan area, based on actual market rents for standard-quality rental units, but PHAs may choose a “payment standard” that is up to 10 percent lower or higher than the fair market rent. The public housing program, also managed by PHAs through operating and capital grants, subsidizes the development, operation, and modernization of government-owned properties and provides units for eligible tenants in these properties. In contrast to the voucher program, the subsidies in the public housing program are connected to specific rental units, so tenants receive assistance only when they live in these units. 
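The voucher arithmetic described above (a tenant share of roughly 30 percent of adjusted monthly income, a PHA payment standard within 10 percent of the fair market rent, and a subsidy covering the gap) can be sketched in a few lines. This is a simplified, hypothetical illustration rather than HUD's actual calculation method; the function names and dollar figures are invented, and real determinations involve additional adjustments.

```python
# Simplified, hypothetical sketch of the voucher math described in the
# text. Not HUD's actual method; names and figures are illustrative.

def payment_standard(fair_market_rent: float, adjustment: float = 0.0) -> float:
    """A PHA may set its payment standard within +/-10 percent of the
    HUD-established fair market rent."""
    assert -0.10 <= adjustment <= 0.10, "standard limited to +/-10% of FMR"
    return fair_market_rent * (1 + adjustment)

def tenant_payment_and_subsidy(adjusted_monthly_income: float,
                               rent: float,
                               standard: float) -> tuple[float, float]:
    """The tenant generally pays 30 percent of adjusted income; the
    voucher covers the gap up to the payment standard, and the tenant
    pays any rent above the standard."""
    tenant_share = 0.30 * adjusted_monthly_income
    subsidy = max(0.0, min(rent, standard) - tenant_share)
    return rent - subsidy, subsidy

# Example: $1,000 fair market rent; the PHA sets its standard 10 percent
# higher; the household has $900 in adjusted monthly income; the unit
# rents for $1,150 (i.e., $50 above the payment standard).
standard = payment_standard(1000.0, 0.10)
tenant_total, subsidy = tenant_payment_and_subsidy(900.0, 1150.0, standard)
# The tenant pays 30% of income (270) plus the 50 above the standard: 320.
```

The sketch also shows why rising rents near transit matter for voucher holders: once a unit's rent climbs above the payment standard, every additional dollar of rent falls on the tenant, not the subsidy.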
HUD pays an operating subsidy, which helps to cover the difference between the PHA’s operating costs and the rents the PHA collects from tenants. Through a variety of project-based programs, including project-based Section 8, HUD provides rent subsidies in the form of multiyear housing assistance payments to private property owners and managers on behalf of eligible tenants. Tenants may apply for admission to these properties with project-based rental assistance contracts. HUD pays the difference between the household’s contribution and the unit’s rent. HUD also administers formula grant programs, such as the Community Development Block Grant (CDBG) program and the HOME program, which help low-income households obtain access to affordable housing. These programs divide billions of dollars across local jurisdictions and numerous activities on an annual basis using funding formulas pursuant to statutory guidance. Activities funded by the CDBG program can include housing, economic development, neighborhood revitalization, and community development. The HOME program provides federal assistance to participating jurisdictions for housing rehabilitation, rental assistance, homebuyer assistance, and new housing construction. Recipients of CDBG and HOME funding have a great deal of flexibility in how they use these grants, but must fulfill HUD’s planning requirements to receive funding. According to most of the literature we reviewed, planned or existing transit stations and the amenities commonly found in transit-oriented developments generally increase nearby land and housing values, but the magnitude of the increase varies greatly depending upon several other characteristics. The studies generally conclude that increases occur because residents pay a premium for land and housing that is closer to a transit station.
Although the presence of transit generally affects land and housing values, increases in some cases are modest, and results can vary throughout an entire transit system depending on several characteristics, which are summarized below. Retail development is common to the type of mixed-use development found in transit-oriented developments because it allows residents to avoid car trips for everyday shopping. A few studies we reviewed found that a retail presence near transit stations positively affected nearby land and housing values. One particular study found that the stations with the highest increases in nearby housing values had a retail presence. The character of the neighborhood surrounding a transit station is another factor that the studies we reviewed have shown to be valued by transit users and nontransit users, resulting in increased land and housing values. These studies identified a number of such neighborhood characteristics, including higher relative incomes and proximity to parks, schools, or other neighborhood amenities. One of the goals often cited in research on transit-oriented development is to create quality, desirable neighborhoods that include many of these amenities. Other factors that have been found to increase land and housing values include proximity to job centers, pedestrian amenities, and the quality or frequency of transit service. For example, a study of California transit systems found that increases in property values are more likely along reliable, frequent, and fast transit systems in the San Francisco-Oakland Area and San Diego than near more limited light rail service in Sacramento and San Jose. Conversely, some characteristics of areas near transit can limit increases, or even cause a transit station to be a negative influence on land or housing values.
These characteristics include:

Non-transit-oriented land uses and prevalence of crime: In the San Francisco-Oakland Area, studies found that a transit station generally has a positive influence on land and housing values, except near certain stations in a largely industrial area of Oakland. In addition, a study of the Atlanta rail system found that the presence of crime limited increases and sometimes even decreased land and housing values, particularly near rail stations with an adjacent surface parking lot.

Poor economic environments: A study of Buffalo transit stations found a premium value for real estate near stations in high-income areas, but a negative effect on land and housing values near stations in low-income areas. The authors concluded that these negative effects may be the result of a lengthy economic decline and population loss in Buffalo, delayed development, and a lack of job centers along the transit system, rather than the presence of the transit stations.

Higher land and housing values generally tend to limit the supply of housing units affordable to lower-income households, but many other factors can also affect the availability of affordable housing near transit and in transit-oriented developments. According to local officials and transit and housing stakeholders we spoke with, higher land and housing values have the potential to limit the availability of affordable housing units, whether market rate, government subsidized, or incentivized. Increased land and housing values can raise the market price of sale and rental housing beyond an affordable percentage for households at or below an area’s median household income, thus reducing the availability of market rate affordable housing. Subsidized or incentivized affordable housing units can also be affected by higher land and housing values.
For example, if rents for units near transit stations increase above fair market rents, tenant-based rental vouchers—provided through HUD’s Housing Choice Voucher program—may be insufficient to cover the increased rents. Moreover, a recent study conducted by Reconnecting America and the National Housing Trust highlighted that HUD project-based Section 8 contracts for many properties near transit stations will expire in coming years. Nearly two-thirds of the buildings near transit in the eight cities the study examined have contracts expiring by 2012. As the study indicates, if property owners believe that increasing land values could allow them to charge higher rents than the subsidy they receive from HUD under the Section 8 contract, they may decide not to renew the contracts. Increased values and land speculation can also potentially stifle development of affordable or mixed-income housing projects. In several places we visited, local officials and developers told us that higher land costs can make it difficult for projects to meet profit expectations, resulting in a preference for developers to market projects in transit-oriented developments to higher-income households. In addition to land and housing values, local officials told us that several other conditions and local decisions can affect the availability of affordable housing near transit. For example:

Local economic conditions can suppress land and housing values proportionately more than the local median household income, resulting in an increased supply of affordable housing units at market rate. For example, officials in Cleveland told us that weak local economic conditions have led to an abundant supply of market rate units that are affordable to people with low incomes.
Therefore, they are less focused on increasing the stock of affordable housing units near the recently completed Euclid Corridor Bus Rapid Transit and more focused on stimulating economic development and revitalizing the general housing market.

Local transit station location decisions can also affect the availability of affordable housing near transit. For instance, although housing advocates and local officials raised concerns that low-income households may be displaced as land values increase, several local officials we spoke with told us that new transit station locations were planned in corridors with limited housing prior to the construction of the transit line. These station locations were specifically placed in blighted or industrial areas or railyards with relatively inexpensive land and plentiful space available for infill developments. For localities committed to providing affordable housing, new development near these transit stations provides opportunities for new affordable housing near transit. For example, officials in Washington, D.C., told us that some stations of the rail transit system were aligned to support housing and economic development on vacant or underutilized properties (see fig. 2). Other transit lines can be aligned to serve populated, low-income neighborhoods. Local officials told us that one of the goals of the Hudson-Bergen Light Rail was to provide better transit access to low-income residents of Jersey City and Hoboken, New Jersey. The introduction of light rail in the past decade, paired with longer-term, focused investment of HUD affordable housing and revitalization dollars, has helped improve the availability of quality, affordable housing near transit. While transit lines can provide better transit access to low-income residents in the short term—a key role of transit—housing advocates have raised concerns that rents will increase in the long term, placing pressure on existing low-income residents.
State and local commitment to preserving or developing affordable housing near transit can help ensure the availability of affordable housing despite potential increases in land value. Local officials told us that some tools—which we will discuss in greater detail in the next section—are available to subsidize affordable units or to encourage developers to provide affordable units to help counter higher rents and property values near transit. In addition, coordinated state and regional planning can also influence local governments’ support of affordable housing in transit-oriented developments, according to recommendations from recent reports. For example, to overcome issues associated with increasing land values and speculation, nonprofit organizations and regional and local governments can invest in land along transit alignments with the intent to develop the land in the future with an affordable housing component. A lack of direct research, incomplete data, and factors unique to each transit station limit the conclusions that can be drawn about how transit-oriented developments affect the availability of affordable housing. To date, there has been little research that specifically links transit-oriented developments to affordable housing, hindering the ability of policy makers and private investors to make informed decisions or evaluate results. For example, most of the studies we reviewed focused on land or housing values near transit but did not distinguish between stand-alone transit stations and transit stations in transit-oriented developments. In addition, most studies did not directly measure the number of affordable housing units or otherwise quantify the availability of affordable housing, whether market rate or subsidized.
Moreover, the research that does exist on land and housing values is typically focused on specific geographic areas and does not distinguish between the effects of state and local commitment to affordable housing and the other factors that affect the availability of affordable housing. Finally, many communities with relatively new fixed-guideway transit systems have limited experience with transit-oriented developments, and often development—and any potential long-term effects on the availability of affordable housing—has yet to fully take hold. The scarcity of reliable housing data and limitations in analytical transportation planning methods also limit thorough study and evaluation of the direct effect that transit-oriented developments have on the availability of affordable housing. For example, HUD-subsidized housing data—the only nationwide data available for subsidized housing—do not provide a full and accurate picture of the availability of subsidized housing for research purposes. According to HUD, funding recipients self-report data for the locations of subsidized housing programs to local PHAs, limiting HUD’s ability to ensure completeness and accuracy. In addition, the data that are collected are primarily intended for administrative purposes, and HUD officials told us they consider the data sufficiently reliable for those purposes. However, our analysis revealed several quality concerns—including inconsistencies in the data over time—that limit the data’s usefulness for research requiring reliable long-term analysis. HUD officials told us that a variety of factors could explain these inconsistencies, including transitions between reporting requirements that may have caused missing records, as well as records with missing geographic data. For example, 32 PHAs were not required to submit data for Housing Choice vouchers or public housing units from 2000 to 2006 due to participation in the Moving to Work program.
Figure 3 shows examples of three geographic areas in which the number of HUD-subsidized housing unit records changes significantly over time, and in some cases from one year to the next. While HUD’s Performance and Accountability reports indicate there has been some variation in the overall number of HUD-subsidized units over time, the extent of change in figure 3 is not explained by this variation. The lack of reliable and complete data limits analysis of the impacts of HUD investments in affordable housing near transit. Furthermore, state and local governments—which provide significant amounts of housing subsidies—vary in the extent to which data are collected, available, and reliable. The affordability of market rate housing can also be affected by increases in land value in transit-oriented developments; however, the market rate housing datasets that do exist, such as the American Housing Survey, do not record housing costs with the detail, scale, or frequency needed to capture trends that may result from transit-oriented developments. Our past work has also cited the difficulties of accurately predicting changes in traveler behavior and land use resulting from a transit project, as well as concerns about the quality of data inputs into local travel models. Few local and state programs are targeted to assist local housing and transit providers in developing affordable housing in transit-oriented developments; the few targeted programs that do exist mainly provide financial incentives to developers if they include affordable housing in new residential developments in transit-oriented developments. In our site visits, we found examples in which local housing providers used these targeted programs either to build new or to preserve existing affordable housing in transit-oriented developments.
For example, California allocated $285 million over a period of 3 years to the Transit-Oriented Development Housing Program, which uses loans and grants to encourage housing development projects within one-quarter mile of transit stations. The loans and grants are made available on a competitive basis to affordable housing developers and local government housing agencies that commit to build at least 15 percent of the units they develop as affordable housing units. According to California officials, in 2008, $145 million was committed or awarded to 16 applicants, and over 1,800 affordable housing units will be created as a result of these awards. In Portland, Oregon, the Transit-Oriented Development Property Tax Abatement supports affordable housing on vacant or underutilized sites in transit-oriented developments by reducing operating costs for affordable housing property owners and developers through a property tax exemption of up to 10 years. For 2007-2008, Portland reported that the tax abatement program assisted 971 housing units, resulting in over $1.3 million in foregone tax revenues for the city and county. Of these units, 279 have rents restricted to residents with incomes between 30 and 80 percent of median family income. Finally, in Denver, Colorado, the city is developing a transit-oriented development fund that will provide funding to local affordable housing developers to preserve and create at least 1,200 affordable units for sale and rental along Denver mass transit corridors over a 10-year period. Many states use federal tax credits as a financial incentive to encourage development of affordable housing near transit generally, as well as specifically in transit-oriented developments. States administer federal LIHTCs and provide them to developers in accordance with state Qualified Allocation Plans (QAP)—plans that states are required to develop and that outline the competitive process used to award these funds.
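As a rough sketch of how proximity to transit might be weighted within such a competitive award process, consider the following. This is a hypothetical illustration: the point values, function names, and application data are invented and do not reflect any state's actual QAP criteria.

```python
# Hypothetical sketch of a QAP-style competitive points system that
# rewards proximity to transit. All point values, field names, and
# applications are invented for illustration only.

def transit_points(distance_to_transit_miles: float, in_transit_district: bool) -> int:
    """Award 1 point for a site within one-half mile of public transit,
    plus 10 points for a locally designated transit-oriented district
    (hypothetical values)."""
    points = 0
    if distance_to_transit_miles <= 0.5:
        points += 1
    if in_transit_district:
        points += 10
    return points

def rank_applications(apps: list[dict]) -> list[dict]:
    """Order proposals by total score, highest first; 'base_points'
    stands in for all other scoring criteria a real plan would use."""
    return sorted(
        apps,
        key=lambda a: a["base_points"]
        + transit_points(a["distance_miles"], a["transit_district"]),
        reverse=True,
    )

applications = [
    {"name": "A", "base_points": 40, "distance_miles": 0.3, "transit_district": True},
    {"name": "B", "base_points": 45, "distance_miles": 2.0, "transit_district": False},
]
ranked = rank_applications(applications)
# Proposal A totals 40 + 1 + 10 = 51 points and outranks B's 45,
# even though B scores higher on the other criteria.
```

Because the credits are awarded competitively, even a few proximity points can change which proposals are funded, which is why such points can act as a meaningful incentive for developers.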
In QAPs, most states use a competitive points system to award LIHTCs. There is no statutory requirement that a state incorporate proximity to transit into its QAP; states that have either required or provided incentive points for proximity to transit have done so independent of federal requirements. In these states, incentive points can be earned if developments are within a certain radius of public transit or in transit-oriented developments as designated by a state or local authority. For example, New Jersey’s QAP awards an additional point for proposed developments within one-half mile of public transportation, as well as up to 10 points for proposed developments in Transit Villages, a designation given by New Jersey’s Department of Transportation to transit-oriented developments. New Jersey officials commented that since tax credits are awarded to only a fraction of those that apply for them, developers consider these points a strong incentive to propose projects that will earn the additional points for proximity to transit. Appendix III provides examples of LIHTC programs that contain proximity-to-transit incentives. Additionally, states can award additional tax credits in HUD-designated high-cost areas. While HUD designates high-cost areas, which could include transit-oriented developments, states have the authority to designate buildings as being located in areas that they determine to be high cost, independent of the areas designated by HUD. Oregon has used this authority to designate affordable housing buildings in transit-oriented developments as being in high-cost areas that require additional funding to be financially feasible. Therefore, Oregon provides additional tax credits to projects located in transit-oriented developments. State and local governments we visited generally did not use incentives in their local land use regulations or building codes to promote affordable housing in transit-oriented developments.
Rather, incentives and requirements provide broad support for state and local governments to promote affordable housing without regard to location. These tools can help support the economic feasibility of affordable housing development through nonfinancial incentives—such as land use regulations and building codes—and financial incentives—such as direct funding or financing options. In addition to incentives, we found that a few state and local governments have implemented certain requirements to include affordable housing in new developments, but like most state and local incentives, these requirements are not specifically targeted at affordable housing in transit-oriented developments. Determining whether these nontargeted incentives and requirements have ensured the availability of affordable housing in and near transit-oriented developments is difficult for a number of reasons described earlier.

Density bonus permits allow developers to build more than the maximum number of units permitted by local code if they agree to designate a certain number of units as affordable housing.

Parking reductions allow local governments to reduce the minimum parking requirements set forth in local building codes for developers that build near transit. This incentive allows developers to build fewer parking spaces and use the money saved from the reduced parking construction costs to support additional affordable units.

Tax increment financing is used by local governments to encourage economic development by issuing municipal bonds to subsidize development; the bonds are repaid using the incremental future tax revenues. Some localities dedicate a portion of tax increment financing to affordable housing.

Affordable housing trust funds are distinct funds set aside by cities, counties, and states that dedicate sources of revenue to support affordable housing development.
Inclusionary zoning: Some states or localities may require that all new housing developments, regardless of location, include a portion of units as affordable housing. Some inclusionary zoning ordinances allow developers to pay the local government for each affordable unit they choose not to build.
Affordability requirements on publicly financed residential development: Some state and local governments include affordable housing requirements when they sell land to housing developers or when any government financing is involved in the project.
We found examples from our site visits and other research where each type of nontargeted incentive and requirement discussed above was being used to support affordable housing in transit-oriented developments (see table 2). Because developers (both for-profit and nonprofit) are not required to use them, they may or may not take advantage of the various state and local government incentives to build or preserve affordable housing in transit-oriented developments. Local housing providers have used HUD programs in a number of cases to support affordable housing in transit-oriented developments; however, these programs support affordable housing in any location. HUD programs, such as the CDBG and HOME programs, generally provide local and state agencies with flexibility to tailor their housing spending decisions to meet local needs. According to HUD officials, CDBG and HOME grant recipients have flexibility in applying funds to local initiatives, and in some locations we visited, local officials told us they used these funds, among others, to increase affordable housing near transit as part of a transit-oriented development plan. A community development corporation in Washington, D.C., used approximately $7 million in CDBG funds to rehabilitate or develop approximately 800 units of affordable housing and generate economic development as part of its efforts to revitalize the neighborhood around a transit station.
In Hoboken, New Jersey, a local housing agency official highlighted the flexibility in the CDBG program as an opportunity for the city to target funding to best meet the city’s need to revitalize the existing housing stock around new transit stations. In Seattle, Washington, local housing agency officials allocated over $4 million in HOME program funds to subsidize 200 new affordable units in four rental housing developments located in transit-oriented developments. Similarly, numerous other HUD programs, including project-based Section 8 and the Housing Choice Voucher program, can be used to support affordable housing in transit-oriented developments, but according to HUD officials, these programs have not been targeted specifically for developing or preserving affordable housing in transit-oriented developments. While project-based Section 8 assists more than 1.3 million low- and very-low-income families, during our site visits, local and federal housing agency officials told us they had not prioritized the renewal of project-based Section 8 contract housing in transit-oriented developments. As highlighted earlier, a study found that some project-based Section 8 housing is located near rail stations—defined as within one-half mile of existing or proposed rail stations—and that many of the contracts for the properties near rail stations are set to expire before the end of 2012. Housing experts have identified this as a potential problem because Section 8 contract holders may not renew these contracts if they believe the rents they could earn without the contract would be higher than the rental subsidy they receive from HUD, thereby reducing the number of affordable units in these areas. The Housing Choice Voucher program is also a significant source of HUD-subsidized affordable housing that individuals can use for housing in transit-oriented developments.
However, this requires that individuals find units whose owners accept the voucher, which is set at the region’s fair market rent. HUD allows exceptions to be made to the fair market rent valuations in high-cost areas, but HUD officials did not know of any exceptions that had been made specifically because rents in transit-oriented developments exceeded HUD’s fair market rent valuations. The extent to which increases in market rate rents that may occur in transit-oriented developments have affected subsidized housing, such as project-based Section 8 properties and rental vouchers, is unclear. In transit-oriented developments, consideration of ways to ensure that project-based Section 8 contract units remain affordable and rental vouchers remain viable may be integral to ensuring the ongoing availability of affordable housing. However, as described earlier, HUD’s data for its subsidized housing programs have limitations that do not permit a comprehensive analysis of the HUD-subsidized housing units located in transit-oriented developments. In addition, HUD has not assessed the effects of its policies and programs in supporting the availability of affordable housing in transit-oriented developments. Without such analysis grounded in reliable data, it will be difficult for HUD to assess how its programs might help to ensure the availability of affordable housing in transit-oriented developments. While most HUD programs do not consider a connection between housing and transit in their program criteria, some HUD programs do provide incentives for building affordable housing near transit, but not specifically in transit-oriented developments.
For example, the Neighborhood Stabilization Program, funded by the American Recovery and Reinvestment Act of 2009, provides competitive grants to states, local governments, and nonprofits to address the damaging economic effects of properties that have been foreclosed and abandoned. Of the funds to be awarded to successful applicants of this program, 25 percent must be used for the purchase and redevelopment of abandoned and foreclosed-upon homes and residential properties to house individuals and families whose incomes do not exceed 50 percent of AMI. Under the program’s competitive scoring criteria, applications for projects that are transit accessible will be awarded additional points. Also, Sections 202 and 811—multifamily programs for the elderly and people with disabilities, respectively—consider proximity to transit in their selection criteria. HUD officials from one region noted that the location of multifamily projects is determined by many other factors, such as land prices, which have a greater impact in the rating process than access to transit. HUD’s HOPE VI program, which funds the redevelopment of obsolete public housing, also has a formal link between public housing and transit. If public housing locations selected as HOPE VI redevelopment sites lack sufficient transportation to services and employment, then project plans for revitalization must include increased access to transportation. Because FTA’s core mission is to support locally planned and operated public transportation systems, we found that FTA policies allowing local transit agencies to support affordable housing in transit-oriented developments are limited, and projects pursued under them must still meet a statutory requirement to support transit use. Under FTA’s Joint Development Guidance, local transit agencies can use land that was purchased with FTA funds to support transit-oriented developments through joint development partnerships.
With FTA approval, local transit agencies can improve this property through incorporation of private investment, including commercial or residential development (to include affordable housing), as long as the transit agency can demonstrate that the development supports transit. To receive FTA approval, the local transit agency must demonstrate that the joint development project provides an economic link, public transportation benefits, revenue for public transportation, and a reasonable share of costs (if applicable). The current Joint Development Guidance seeks to allow transit agencies the maximum flexibility under the law when undertaking joint development projects. In the four FTA regions we contacted regarding approved joint development projects, FTA officials identified a total of 11 approved projects, 4 of which were used by a local transit agency to support the development of affordable housing as part of a transit-oriented development. Portland, Oregon’s transit agency had three of the approved projects, including a recent project which included the development of 54 affordable housing units. In Portland, the transit agency used land it had purchased for construction staging areas as part of a New Starts-funded transit development for a joint development project. When construction of the transit stop was completed, the agency sold the land to a developer with the condition that affordable housing be part of the development. In two of our site visit locations, we heard from local transit agency officials that the guidelines for the Joint Development Program policy are unclear and that further clarification would assist them in supporting transit-oriented development through joint development partnerships with the private sector. According to FTA officials, an FTA task force is clarifying the eligible activities that can be supported through the provisions and applications of this policy.
While FTA’s New Starts Program considers mobility improvements for riders—which include consideration of the lowest socioeconomic group of transit-dependent residents—and the economic development benefits of proposed New Starts projects, FTA currently does not weigh these criteria in its overall project rating. In a number of our site visit locations, local transit agencies planned to implement components of transit-oriented development around one or more of the transit stations that were part of the transit project funded by New Starts. Some local government transit officials we interviewed and literature we reviewed described the benefits of transit-oriented development—which includes components such as higher-density and mixed-use projects of commercial and residential activity—as potentially including economic development. However, many of the local transit agencies we met with commented that although they viewed the transit stations as anchors for economic development, they did not believe the New Starts project evaluation criteria fully assessed a project’s impact from economic development activities. In a previous report, we also found transit stakeholders who expressed concern about how economic development is considered in the New Starts project evaluations. When we discussed this with FTA officials as part of our current review, they described the challenges of capturing economic development benefits and separating those benefits from the measures included under the transit supportive land use criterion. FTA officials acknowledged the limitations of the current approach but noted that FTA has been working with the transit industry to develop a more robust methodology for measuring economic development effects.
FTA officials explained that the transit industry has not yet reached consensus on the best way to measure economic development effects that would be useful in meaningfully distinguishing between projects and would not require extensive new data collection and reporting by project sponsors. The FTA officials also said a quantitative approach could require significant additional time and contractor resources for both project sponsors and FTA. Some HUD and FTA regional officials noted they support having affordable housing in close proximity to transit, but they emphasized that local governments have jurisdiction over land use planning and determine priorities for the development of affordable housing and transit. Recipients of certain HUD or DOT program funding must fulfill planning requirements that focus on community development and affordable housing issues (for HUD funding) or on transportation issues (for DOT funding). The requirement for local and state agencies to integrate housing and transportation issues in these planning activities, however, is minimal. Guidelines for the Consolidated Plan required by HUD urge jurisdictions to coordinate with other local plans, which may include metropolitan-wide plans that address issues such as transportation. Some officials from HUD and local agencies receiving HUD funding noted that guidance on the Consolidated Plan was not significant in integrating affordable housing and access to transit. DOT requires states and metropolitan areas, through their metropolitan planning organizations, to develop long-range and short-term transportation plans, which include planning for transit. These transportation plans are also limited in integrating housing into transportation planning. In some regions, HUD officials told us they attend local or regional planning meetings, but their role is limited to observing or providing guidance on HUD programs.
Similarly, DOT officials said that FTA officials provide guidance to states, metropolitan planning organizations, and transit agencies regarding FTA program and planning requirements, but do not influence decision making related to transportation plans and programs. HUD and DOT have recently considered ways to strengthen integrated housing and transit planning, as described in more detail later in this report. According to transit and housing agency officials and stakeholders we interviewed, infrastructure and economic conditions can present challenges to supporting affordable housing in transit-oriented developments. Some local housing agency officials told us that in some areas where land values are higher (and irrespective of proximity to transit) the high cost of land acquisition made it economically unfeasible for a developer to build affordable housing units. As discussed earlier, several studies we reviewed show that the presence of a transit station, as well as factors associated with transit-oriented developments, generally increase the value of land near the transit station. Based on their experiences, some affordable housing providers we interviewed commented that the value of land near a transit station rose quickly with the announcement of the station’s opening. Therefore, affordable housing providers told us they may require additional financial support from government agencies to support affordable housing units in close proximity to transit stations and in transit-oriented developments. Local affordable housing providers often referred to land banking as another tool to address the challenge of land acquisition in high-cost areas. Affordable housing developers land bank when they purchase land at a low cost in anticipation of future increases in land values, thereby lowering land acquisition costs and using the additional funds on affordable housing. 
Many officials told us, however, they had limited opportunities to practice land banking for affordable housing development due to limited resources and available land near transit. According to local affordable housing providers and experts, the ongoing economic slowdown has contributed to a slowdown in the construction of new housing, including affordable housing. Specifically, they noted that the economic slowdown has made LIHTCs less valuable, which may lead to less funding for affordable housing. Tax credits are allocated to affordable housing developers, who typically sell the credits to private investors; the investors then use the tax credits to offset taxes otherwise owed on their tax returns. Generally, the money private investors pay for the credits is paid into the projects as equity financing. This equity financing is used to fill the difference between the development costs for a project and the non-tax-credit financing sources available, such as mortgages that could be expected to be repaid from rental income. Financial institutions with limited resources have been buying fewer tax credits, and as a result prices for tax credits have dropped and funding available for affordable housing has declined. Local housing officials we spoke to also described aspects of federal policy and programs that may limit the programs’ use for supporting affordable housing in transit-oriented developments. One source of financial support, the LIHTC program, has some specific provisions that limit its use in developing affordable housing in transit-oriented developments. Specifically, the amount of tax credits for which a development project is eligible is based in part on the amount of development costs for the project. However, the development costs used to calculate the amount of tax credits exclude the cost of acquiring land, and higher land costs may be associated with transit-oriented development.
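The equity financing mechanics described above, including the exclusion of land from the credit calculation, can be sketched with simplified arithmetic. The 9 percent applicable percentage, 10-year credit period, and $0.80-per-credit price below are rough figures assumed only for illustration; actual LIHTC rules are considerably more complex:

```python
# Simplified LIHTC equity sketch. Land is excluded from the eligible basis,
# and only the share of costs attributable to low-income units qualifies.
# All rates and the function itself are illustrative assumptions.

def lihtc_equity(total_dev_cost, land_cost, low_income_share,
                 applicable_pct=0.09, credit_years=10, price_per_credit=0.80):
    eligible_basis = total_dev_cost - land_cost          # land does not count
    qualified_basis = eligible_basis * low_income_share  # low-income units only
    annual_credit = qualified_basis * applicable_pct
    return annual_credit * credit_years * price_per_credit

# Two otherwise identical $10 million, fully affordable projects:
equity_cheap_land = lihtc_equity(10_000_000, land_cost=1_000_000, low_income_share=1.0)
equity_costly_land = lihtc_equity(10_000_000, land_cost=3_000_000, low_income_share=1.0)
# Higher land cost (as near a transit station) yields less tax-credit equity,
# widening the financing gap the developer must fill from other sources.
```

The same arithmetic also shows why mixed-income projects are relatively less competitive: halving the low-income share halves the credits even though total development costs are unchanged.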
This can potentially make LIHTCs less valuable for developers building affordable housing in transit-oriented developments. However, developers may receive financial assistance through the CDBG and HOME programs to acquire land as part of LIHTC projects. Also, as described earlier, states may designate transit-oriented developments as high-cost areas, allowing them to allocate additional tax credits to affordable housing developments in such areas. Another aspect of the LIHTC program that may limit its use in transit-oriented developments is that the maximum tax credit allowed for each project is based on the development costs allocated to only those units that are designated for low-income residents. Because tax credits are applied only to the units in a housing development that qualify as affordable, there is an incentive for developers to plan for as many affordable units as possible, making mixed-income developments relatively less competitive in this regard. However, some transit-oriented development studies have cited the benefits of mixed-income housing in transit-oriented developments. Some states appear to be addressing this by prioritizing mixed-income housing for tax credits in their QAPs. In some cases, local transit agencies we contacted described the challenge of selling surplus land—purchased using federal funds—for affordable housing development near transit. Transit officials said they had explored the possibility of selling the land at a low cost to affordable housing developers to increase the availability of affordable housing in transit-oriented developments. However, they cited the requirement to sell this land at fair market value as a potential barrier to selling it at a low cost to make affordable housing development feasible in high-cost areas.
According to FTA officials, transit agencies may dispose of real property through sale, using competitive sale procedures to the extent practical, to yield the highest possible return. In certain circumstances, transit agencies may transfer the property for purposes such as affordable housing joint development. Starting in 2005, HUD and FTA, and more recently DOT, have collaborated to promote affordable housing in transit-oriented developments through three interagency efforts, which are summarized below. Interagency agreement: In 2005, HUD and FTA entered into an interagency agreement to assist communities in understanding the potential demand for housing in transit-oriented developments by conducting a research study. The agreement identified five major research objectives, including (1) increasing the understanding of the potential for incorporating housing—including affordable or mixed-income housing—and homeownership in transit-oriented developments; (2) enhancing data analyses and communities’ geographic information system capacity for developing and building affordable housing adjacent to transit-oriented developments; (3) identifying federal, state, and local policies and future research that can influence linking affordable housing and transit-oriented developments; (4) quantifying the factors that facilitate the development of affordable housing in transit-oriented developments; and (5) identifying regulatory barriers to building affordable housing in transit-oriented developments. To address these research objectives, HUD and FTA funded the Center for Transit-Oriented Development (CTOD) to conduct this research study and publish a final report.
The final report, which was published in April 2007, recommended broad approaches to addressing some key challenges in supporting affordable housing in transit-oriented developments, including high land prices around transit stations, complex financing structures of mixed-income and mixed-use developments, and limited funding for building new affordable housing. HUD-FTA action plan: In December 2007, the Appropriations Committees indicated that HUD and FTA should jointly address new and better ways for promoting affordable housing near transit service and develop a best practice manual to assist communities that seek to establish mixed-income transit-oriented developments. In response to this request, HUD and FTA jointly developed an action plan to better coordinate their respective programs to promote affordable housing in transit-oriented developments, expand mixed-income and affordable housing choices in the immediate proximity of new and existing transit stations, develop a more comprehensive approach to address housing and transportation expenditures, and preserve existing affordable housing near transit. The HUD-FTA action plan outlines 11 strategies—including the development of the best practices manual for local governments to successfully promote mixed-income housing and transit-oriented developments—that HUD and FTA say they will implement from fiscal year 2008 through fiscal year 2010. Some strategies are focused on increasing education for housing and transit stakeholders and reviewing current housing and transit policies and regulations. Partnership for Sustainable Communities: In March 2009, DOT and HUD announced the Partnership for Sustainable Communities, which seeks to help American families gain better access to affordable housing, more transportation options, and lower transportation costs by coordinating federal programs. Since the partnership’s original announcement, the Environmental Protection Agency (EPA) has joined the partnership. 
As part of this partnership, the agencies have highlighted six livability principles that will serve as the partnership’s foundation. While these six livability principles establish some broad goals, including increasing transportation options to address climate change and supporting existing communities, a major component of the partnership is promoting affordable housing. To support this partnership, the Secretaries of DOT and HUD and the EPA Administrator have created a high-level interagency task force, led by DOT’s Deputy Assistant Secretary for Transportation Policy, the Senior Advisor to the HUD Deputy Secretary, and the EPA Director for the Development, Community and Environment Division. For example, the high-level interagency task force is charged with collaborating in developing a federal funding program, called the Sustainable Communities Initiative, to encourage local governments to integrate their regional housing, transportation, and land use planning and investments by funding grants for local governments to reform their current zoning, building codes, and land use codes. This Sustainable Communities Initiative will be administered by HUD under the proposed Office of Sustainable Development and in consultation with DOT and EPA. The President’s fiscal year 2010 budget request for HUD includes $100 million in Regional Planning Grants, $40 million for Community Challenge Grants, and $10 million for joint DOT and HUD research efforts. The partnership will also fund joint DOT, HUD, and EPA research and evaluation efforts and work to align the respective agency programs. In addition, a joint HUD-FTA working group, which was originally formed as part of the HUD-FTA action plan, will be one of several individual working groups that will support the DOT/HUD/EPA high-level interagency task force in implementing the partnership.
According to HUD officials, the partnership is intended to supersede and incorporate the activities contemplated by the HUD-FTA action plan. Based on our review, the three interagency efforts outlined a number of similar strategies and recommendations. For example, the CTOD report recommended that HUD explore regulatory and policy approaches that may increase the supply of affordable or mixed-income housing within transit corridors—a strategy outlined in the HUD-FTA action plan. In addition, all three interagency efforts have recommendations or strategies that encourage local jurisdictions to better integrate and coordinate their housing and transportation planning and to conduct research to better measure affordability. Table 3 provides a summary of recommendations and strategies made by the three interagency efforts. Although HUD and FTA’s collaboration has produced numerous recommendations and strategies to promote affordable housing in transit-oriented developments, local officials told us these efforts have had little impact on local housing and transit agencies’ planning and decision making. As we mentioned above, because housing and transportation planning and decision making are done by state and local jurisdictions, many of these recommendations and strategies could affect how local housing and transit agencies use federal programs to support their activities. However, since many of these strategies are relatively new and have yet to be implemented, it may be too soon to evaluate their effectiveness. Furthermore, our review of the HUD-FTA action plan shows that while some of the strategies identify specific products or other deliverables—such as publishing a best practices manual in fiscal year 2010 or developing an outreach plan—many other strategies require additional research and analysis before any actual change to current federal policy or programs could be made.
For example, both the CTOD report and the HUD-FTA action plan recommend identifying regulatory barriers to promoting affordable housing in transit-oriented developments and identifying a range of incentives that could be adopted to support efforts to include affordable housing in such developments. And while HUD has recently issued two competitive task order requests to implement some of the strategies, including identifying the regulatory barriers described in the HUD-FTA action plan, it will still take some time before these strategies can potentially benefit housing and transit agencies. For example, under the terms of the first contract, three policy reports assessing (1) state, federal, and local regulatory barriers to mixed-income housing in transit-oriented developments; (2) financing techniques available for mixed-income housing in transit-oriented developments; and (3) incentives through HUD and FTA programs are due to be completed and published by July 1, 2010, almost 3 years after this assessment was first recommended by the CTOD report. Once the agencies have identified the regulatory barriers, they need to take additional steps—some of which, such as public notice and comment periods, take time—to address those barriers. Furthermore, HUD and FTA must identify which areas may require congressional action to revise current statutory requirements. In addition, the second contract solicits the development of a model transportation and housing plan that can be utilized as a template by local jurisdictions; however, this plan is not expected to be completed until March 2011. Because several strategies in both the HUD-FTA action plan and the Partnership for Sustainable Communities have no detailed implementation information available, it is unknown when and how these strategies could impact local housing and transit agencies.
In our prior work examining a variety of federal programs, we have highlighted the importance of having implementation plans to build momentum and show progress from the outset. For the Partnership for Sustainable Communities, the President’s fiscal year 2010 budget request for HUD includes $150 million for the Sustainable Communities Initiative. As part of this initiative, $100 million is allocated for the proposed Regional Integrated Planning Grants program, which will award grants to local metropolitan areas or states that integrate their regional transportation, housing, and land use planning and investment. However, this budget has not been approved, and therefore no detailed information is available regarding the components of this program or how it will be implemented. An example of the steps that HUD, in consultation with DOT and EPA, may need to take to implement this type of grant program can be seen in DOT’s recent implementation of a similar program, the Urban Partnership Agreement initiative. This initiative—a competitive grant program intended to demonstrate the feasibility and benefits of comprehensive, integrated, and innovative approaches to relieving congestion—illustrates the many steps required to implement a competitive grant program. DOT issued a Federal Register notice soliciting proposals for the initiative, set requirements for applications, created a multistep review process, and established terms and conditions of the agreement. However, because there is no detailed information available on the proposed Regional Integrated Planning Grants program, it is unclear how grants will be awarded or when the program will be finalized.
In addition, HUD and FTA officials stated they are still working to determine whether there will be any link between these competitive grants under the Sustainable Communities Initiative and the HUD-FTA development of a transportation and housing planning model. Finally, local housing and transit agencies with whom we met were generally unaware of the collaboration between HUD and FTA. Many of the local housing and transit agency officials we interviewed stated they were not aware of the HUD-FTA action plan or that HUD and FTA had been working on this project. In addition, as part of our site visits, we interviewed officials from HUD regional and field offices and FTA regional offices. During these visits, we found that most of these regional officials had not received official copies of the HUD-FTA action plan, were unaware that the action plan was posted on the agencies’ Web sites, and were generally unaware of the plan’s strategies. In addition, HUD and FTA headquarters officials noted that only headquarters staff were involved in the development of the action plan and that they did not receive any formal input from regional officials or local housing and transit agencies. We have highlighted in prior GAO reports that other federal agencies reach out to key stakeholders to collect input and gain support for a program both during its development and during its implementation. Key practices for enhancing and sustaining collaboration could be used to help the agencies implement the HUD-FTA action plan and the recently announced Partnership for Sustainable Communities. We have reported before that federal agencies often face a range of barriers when they attempt to collaborate with other agencies, including missions that are not mutually reinforcing, concerns about protecting jurisdictions over missions and controls over resources, and incompatible procedures, processes, data, and computer systems.
In our October 2005 report, we identified eight key practices federal agencies can undertake to overcome these barriers and enhance and sustain their collaborative efforts. Table 4 summarizes the key practices and the extent to which DOT, HUD, and FTA are using them. While these practices can facilitate greater collaboration among federal agencies, we recognize that other practices may also help to foster greater collaboration. In addition, while the specific ways in which agencies implement these practices may differ in light of the specific collaboration challenges each agency faces, we have previously recommended that federal agencies adopt a formal approach—to include practices such as a memorandum of agreement or formal incentives focused on collaboration signed by senior officials—to encourage further collaboration. In comparing the agencies’ collaboration—through the interagency agreement, the HUD-FTA action plan, and the Partnership for Sustainable Communities—to these key practices, we found that the agencies have taken some initial actions that are consistent with some of the key practices; however, these actions have not been fully formalized. Even though some of the interagency efforts—such as the HUD-FTA action plan and the Partnership for Sustainable Communities—are still in the early stages and implementation has only recently started, collaboration between the agencies began in 2005 with their interagency agreement. Through the three interagency efforts, the agencies have started defining and articulating a common outcome, establishing goals that highlight a compelling rationale for why the agencies have to collaborate. These goals include expanding mixed-income and affordable housing choices near transit, developing a more comprehensive approach on housing and transportation affordability, and preserving existing affordable housing.
However, the development of a common outcome takes place over time and requires sustained resources and commitment by both agencies’ staff. The agencies have only recently begun allocating resources to implementing the strategies and recommendations produced by these efforts and assigning staff to work on the various interagency working groups that will implement them. The agencies have identified and taken steps to establish mutually reinforcing or joint strategies, which were developed in both the HUD-FTA action plan and the Partnership for Sustainable Communities. For example, the HUD-FTA action plan calls for identifying opportunities for joint research and development, improving coordination of local housing and transportation planning through federal housing and transportation programs, and identifying financial incentives to local communities through both HUD and FTA funding programs. By establishing these reinforcing strategies, the agencies can align core processes and resources to accomplish the common outcome. However, while the agencies have started implementing some of these reinforcing strategies, such as awarding the contract to prepare an outreach plan, and have recently adopted a shared set of principles, a number of these efforts require additional research and analysis; therefore, it is too soon to determine how the agencies will integrate these reinforcing strategies into current agency processes and resources. HUD and FTA have identified and started leveraging the resources needed to initiate or sustain their collaboration efforts by allocating funding to begin implementing a number of the HUD-FTA action plan strategies and by proposing funds for the Partnership for Sustainable Communities through the proposed Sustainable Communities Initiative in the fiscal year 2010 HUD budget. 
In addition, the agencies have assigned staff to work on the interagency task force and have established four interagency working groups to work on a number of items, such as developing performance measures and identifying barriers to coordinated housing and transportation investments. However, the most significant resource investment, $150 million for the Sustainable Communities Initiative, has not yet been approved by Congress, and therefore the initiative’s final budget is unclear. The agencies have started to define and agree on their respective roles and responsibilities, and in doing so, are beginning to clarify who will do what, identify how to organize their joint and individual efforts, and facilitate their decision making. For example, under the Partnership for Sustainable Communities, DOT and HUD have formed an interagency task force to implement the specific programs and policies of the initiative and plan to have HUD administer the Regional Integrated Planning Grants program, in consultation with DOT, EPA, and other federal agencies. However, while the DOT-HUD high-level interagency task force has conducted numerous meetings, and has scheduled future meetings, to discuss goals, objectives, and implementation issues and to establish working groups, the agencies have yet to formally determine and document how specific roles and responsibilities will be divided. The agencies have taken some actions to reinforce agency accountability through strategic and annual performance plans. For example, our review of both agencies’ recent strategic and annual performance plans found that while only HUD had included its collaboration efforts with FTA in its 2008 annual performance review, officials in both agencies noted they would be updating their strategic plans and annual performance plans to include their collaboration efforts. Based on interviews with HUD and FTA officials, there are several other key collaboration practices the agencies have not yet begun to implement. 
These key practices include establishing compatible policies, procedures, and other means to operate across agency boundaries; developing a mechanism to monitor, evaluate, and report results; and reinforcing individual accountability for collaborative efforts through performance management systems. Adopting each of these key practices could enhance the agencies’ collaboration and the effectiveness of both the HUD-FTA action plan and the Partnership for Sustainable Communities. For example, in each of the interagency efforts, strategies and recommendations call for increasing research and development between the agencies. In the HUD-FTA action plan, the agencies have identified eight specific topics for joint research, including developing tools, techniques, and methods for addressing housing and transportation expenditures; improving the use of geographic information systems; and monitoring and assessing the effectiveness of policies and tools that have been deployed to promote mixed-income housing in transit-oriented developments. To facilitate this collaboration, the agencies need to address the compatibility of the standards, policies, procedures, and data systems that will be used. However, according to HUD and FTA officials, there has been no assessment of whether any compatibility exists or can be established. The HUD-FTA action plan calls for the joint HUD-FTA working group to develop performance measures and an associated management information system, which, in part, would require that HUD and FTA determine whether there are reliable data available for assessing the effectiveness of the results of joint actions taken by the two agencies. This is in line with the key practice of developing a mechanism to monitor, evaluate, and report results. 
In addition, we have previously reported that the annual performance planning processes under the Government Performance and Results Act (GPRA) allow agencies to foster greater collaboration by ensuring that collaborating agencies’ individual program goals are complementary and, as appropriate, that common performance measures are used. However, agency officials reported that there has been no effort to establish a monitoring system or to determine whether current data systems would be able to provide the reliable data needed to identify areas for improvement. In addition, as we stated above, the current scarcity of reliable housing data and the limitations of transit modeling would need to be addressed to ensure that the agencies develop an effective performance measurement system. Agencies can strengthen collaboration by reinforcing individual accountability through their performance management systems. HUD and DOT officials stated they have not implemented any changes to their performance management systems to reflect better coordination efforts between their respective agencies’ staffs. Underlying each of these eight practices, factors such as leadership and trust are key to establishing collaborative working relationships. These factors can foster a collaborative culture and therefore help agencies overcome the barriers they face when they attempt to collaborate. In addition to these eight practices, other management tools are available that can foster greater collaboration among federal agencies. For example, GPRA, with its focus on strategic planning, the development of long-term goals, and accountability for results, provides a framework that Congress, the Office of Management and Budget, and executive branch agencies can use to consider the appropriate mix of long-term strategic goals and strategies needed to identify and address federal goals that cut across agency boundaries. 
One way to assist lower-income households, which are generally more transit dependent and thus more vulnerable to increased housing and transportation costs, is to increase the availability of affordable housing in transit-oriented developments. Since state and local governments are the main providers of affordable housing and transit services, they are on the front line of this issue. From our site visits and review of relevant studies, we found that some communities have programs and policies that specifically promote affordable housing in transit-oriented developments, but most do not. Therefore, many communities that choose to build or preserve affordable housing near transit-oriented developments rely on broader affordable housing programs and other incentives that can be used wherever the development is located. With HUD and FTA focusing on their individual core missions and, until recently, promoting affordable housing and transit separately, these agencies have not generally attempted to link federal housing and transportation programs. Furthermore, while there are federal requirements for both housing and transportation planning, traditionally these plans have not been integrated. In fact, some program requirements (such as the LIHTC) may limit the development of affordable housing in transit-oriented developments. Starting in 2005, HUD and FTA have collaborated to develop strategies for better coordination of their respective programs with the goal of helping to provide more affordable housing in transit-oriented developments. The strategies that HUD and FTA, and more recently DOT, have developed are in line with what housing and transit stakeholders have stated can assist local communities, and address some current weaknesses we found in the two agencies’ independent programs and policies. 
While these strategies have the potential to help local communities better link housing and transportation programs, only a few strategies—such as the best practices manual—have the potential to provide assistance in the near term. Many strategies, such as identifying regulatory barriers and financial incentives, still require additional research and analysis, and others have only just been announced. In particular, areas that require congressional action to revise current statutory requirements may call for additional steps by the agencies. Without an implementation plan for each strategy, however, DOT and HUD run the risk of losing momentum. Given the number of steps and the time it may take to implement the various strategies, including the proposed new regional planning grants, it will be important to establish an implementation plan that encompasses the various strategies so that HUD and DOT are better positioned to carry out each of their strategies as their interagency efforts progress. The scarcity of reliable housing data and the limitations of transit modeling also limit the ability of DOT, HUD, and FTA to determine whether current and future efforts are ensuring the availability of affordable housing in transit-oriented developments. Therefore, without the development of better data and data systems, which are key elements of any performance measurement system, the agencies will not have the information necessary to determine, among other things, whether they need to increase coordination or adjust existing strategies. In addition, when comparing the agencies’ collaboration to the key practices we have previously identified, we found that the agencies have taken actions that are consistent with some of the practices. However, the agencies had not taken action on a number of practices—such as reinforcing individual accountability for collaborative efforts or developing mechanisms to monitor, evaluate, and report on results. 
Furthermore, without a formal approach to collaboration covering all of the key practices, DOT, HUD, and FTA may miss opportunities to effectively leverage each other’s unique strengths to promote affordable housing in transit-oriented developments. To strengthen formal collaboration efforts, we recommend that the Secretary of Transportation direct the Administrator of the Federal Transit Administration, and the Secretary of Housing and Urban Development direct the appropriate program offices, to take the following three actions:

Develop and publish an implementation plan for interagency efforts to promote affordable housing in transit-oriented developments, to include the HUD-FTA action plan and the Partnership for Sustainable Communities. This plan should include, but not be limited to, a project schedule, resource allocation, outreach measures, and a performance measurement strategy.

Develop a plan to ensure that the data collected on the agencies’ various programs related to affordable housing and transit are sufficient to measure the agencies’ performance toward the goals and outcomes established in the HUD-FTA action plan and the Partnership for Sustainable Communities.

Adopt a formal approach to encourage further collaboration in promoting affordable housing in transit-oriented developments. Such an approach could include establishing and implementing a written agreement to include defining and articulating a common outcome; establishing mutually reinforcing or joint strategies; identifying and addressing needs by leveraging resources; agreeing on agency roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; reinforcing agency accountability for collaborative efforts through agency plans and reports; and reinforcing individual accountability for collaborative efforts through performance management systems. 
We provided draft copies of this report to the Secretary of Transportation and the Secretary of Housing and Urban Development for their review and comment. DOT generally agreed to consider the recommendations in this report and provided technical comments, which we incorporated as appropriate. We also received technical comments from HUD that we have incorporated as appropriate. In written comments, HUD’s Director of the Office of Departmental Operations and Coordination stated that HUD would consider the findings and recommendations of the report carefully as the agency continues its efforts to combine housing and transportation funds and resources near transit. The Director’s letter is reprinted in appendix IV. Discussed below are the additional comments HUD had about certain aspects of the report and our responses. First, HUD stated that the definition of affordable housing used in the report is overly narrow in that we focused on subsidized housing, and that affordable housing goes beyond subsidized housing for low- and moderate-income families. However, the definition of affordable housing used in the draft report is not limited to subsidized housing. Nor does the report suggest that subsidized housing is the only source of affordable housing; however, as we state in the report, national data on all affordable housing in transit-oriented developments are limited, and there has been little research that specifically links transit-oriented developments to affordable housing. HUD also noted that we did not sufficiently look at the combined cost of housing and transportation as a measure of affordability. We agree that the combined costs of housing and transportation are an important indicator of housing affordability and noted in our draft report that some organizations have worked to establish a link between housing and transportation costs by developing new measures of affordability. 
However, determining transit-oriented developments’ effects on affordability—as defined by combined housing and transportation costs—is complicated by a lack of national data, including reliable data on subsidized housing. Second, HUD commented that it has made significant progress on coordinating housing and transportation under the Partnership for Sustainable Communities. HUD cited some specific actions it has taken, including developing the six livability principles announced in June and creating four working groups to implement the partnership. While our draft report did mention these efforts and recognized other actions the agencies have taken consistent with some of the key practices for collaboration, we have added some additional discussion of the six livability principles and the four working groups. However, we believe that we correctly assessed the level of progress made by the agencies and maintain that to sustain their initial efforts, it will be important for the agencies to meet the principles of interagency collaboration, which we discuss in the report. Lastly, HUD noted that we overstated the issues associated with the accuracy of HUD data on subsidized households. HUD stated in its comments that the data collected on subsidized housing are primarily intended for administrative purposes and may be sufficiently reliable for those purposes. We acknowledge in our report that HUD officials told us that the data are collected primarily for administrative purposes, but we did not evaluate their reliability for those purposes. Rather, we discuss the data’s limitations for monitoring, evaluating, and reporting results related to understanding the impact of transit-oriented development on the availability of certain affordable housing. In the report, we provide multiple reasons why we believe the data have gaps and inconsistencies beyond those associated with the Moving to Work program that make their use for geographic analysis limited. 
Reliable geographic information will be important for the department to measure the impact of its programs and make adjustments to those programs to ensure the availability of certain affordable housing in transit-oriented developments. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies of this report to congressional committees with responsibilities for surface transportation and housing programs; DOT officials, including the Secretary of Transportation and the Administrator of FTA; and HUD officials, including the Secretary of Housing and Urban Development. This report will also be available at no charge on our home page at http://www.gao.gov. If you have any questions about this report, please contact us by e-mail at wised@gao.gov or by telephone at (202) 512-2834, or by e-mail at sciremj@gao.gov or by telephone at (202) 512-8678. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To identify how transit-oriented developments affect the availability of affordable housing, we reviewed reports and studies issued by federal, state, and local agencies; transportation research organizations; and academia, as well as our past work in surface transportation and affordable housing. A GAO economist reviewed relevant reports and studies, which were identified by searching the economics, housing, and transportation literature, and found their methodology and economic reasoning to be sound and sufficiently reliable for our purposes. 
To identify how local, state, and federal agencies have worked to ensure that affordable housing, including housing subsidized through Department of Housing and Urban Development (HUD) programs, is available in and near transit-oriented developments, we interviewed officials from the Department of Transportation (DOT), the Federal Transit Administration (FTA), and HUD. We also conducted 11 site visits or interviews with state and local transportation and housing officials in Mesa, Phoenix, and Tempe, Arizona; Sacramento, California; Chicago, Illinois; Cleveland, Ohio; Jersey City and Hoboken, New Jersey; Portland, Oregon; Washington, D.C.; and Arlington, Virginia. We selected this nongeneralizable sample of metropolitan areas based on the areas’ experience with transit-oriented development, their receipt of New Starts federal funding for construction of a local fixed-guideway transit system, and geographical diversity. During these site visits, we interviewed federal, state, and local housing and transportation officials and toured transit-oriented developments. In addition, we reviewed studies and documentation on how these and other metropolitan areas and states have promoted transit-oriented developments. To identify the extent to which HUD, DOT, and FTA work together to ensure that transportation and affordable housing objectives are integrated in transit-oriented development projects, we reviewed documentation describing their collaborative efforts. We examined the mechanisms (e.g., interagency agreements and task force agendas) the agencies used to collaborate. Additionally, we interviewed agency officials on their knowledge of any past or future collaborative efforts. To determine what opportunities exist to enhance collaboration between HUD, DOT, and FTA, we reviewed our prior work on key practices that can help enhance and sustain collaboration and address barriers to more effective collaboration. 
We also obtained the views of agency officials, local housing and transit providers, transportation organizations, and nonprofit housing organizations with experience in developing, implementing, or analyzing these issues. Finally, we compared the agencies’ collaboration efforts with key practices that can help federal agencies enhance and sustain their collaborative efforts. We conducted this performance audit from August 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Urban centers contain a mix of residential, employment, retail, and entertainment uses, usually at slightly lower densities and intensities than in regional centers. Destinations draw residents from surrounding neighborhoods. These centers serve as commuter hubs for the larger region and are served by multiple transit options, often including rail and high-frequency regional bus or bus rapid transit, as well as local-serving bus. Suburban centers contain a mix of residential, employment, retail, and entertainment uses, usually at intensities similar to those found in urban centers but lower than those in regional centers. Suburban centers can serve as both origins and destinations for commuters. They are typically connected to the regional transit network and include a mix of transit types—regional rail and bus, bus rapid transit, and local bus—with high-frequency service. Transit town centers function more as local-serving centers of economic and community activity than either urban or suburban centers, and they attract fewer residents from the rest of the region. 
A variety of transit modes serve transit town centers, and there is a mix of origin and destination trips—primarily commuter service to jobs in the region. There is less secondary transit service than in regional centers, urban centers, or suburban centers. Secondary transit lines feed primary lines, often at intervals timed to facilitate transfers at the primary transit stations. Urban neighborhoods are primarily residential areas that are well connected to regional centers and urban centers. Densities are moderate to high, and housing is usually mixed with local-serving retail. Commercial uses are limited to small businesses or some industry. Development is usually oriented along a well-connected street grid that is served by a secondary transit network. Transit is often less a focal point for activity than in the “center” types of locations, and stations may be located at the edge of two distinct neighborhoods. Transit neighborhoods are primarily residential areas that are served by rail service or high-frequency bus lines that connect at one location. Densities are low to moderate, and economic activity is not concentrated around stations, which may be located at the edge of two distinct neighborhoods. Secondary transit service is less frequent and less well connected. There is often not enough residential density to support much local-serving retail, but there are often retail nodes. Special-use or employment districts are often single use—either they are low- to moderate-density employment centers, or they are focused around a major institution such as a university or an entertainment venue such as a stadium. Transit stations are not a focus of economic activity. Secondary transit service is infrequent and focused on stations; development tends to be more recent, and the street grid may be less connected than in older neighborhoods. Mixed-use corridors are a focus of economic and community activity but have no distinct center. 
These corridors are typically characterized by a mix of moderate-density buildings that house services, retail, employment, and civic or cultural uses. Many were developed along streetcar lines or other transit service. Mixed-use corridors are especially suitable for streetcars, bus rapid transit, or other high-quality bus service with closely spaced stops. Awards 7 points for projects that are part of a transit-oriented development strategy where there is a transit station, rail station, commuter rail station, bus station, or bus stop within one-fourth mile of the site with service at least every 30 minutes during the hours of 7 to 9 a.m. and 4 to 6 p.m., and the project’s density will exceed 25 units per acre. Awards 3 points for projects designated as a transit-oriented development by a rapid transit authority or projects located within one-fourth mile walking distance of a rapid rail transit station along paved roads, sidewalks, established pedestrian walkways, or bike trails. Awards 1 point for projects that are part of a transit-oriented development strategy located within four blocks of a regular bus route, a rapid transit system stop, etc. Awards 5 points for projects that are part of a transit-oriented development—defined as having a density that exceeds 25 units per acre, involving mixed use or being part of a larger mixed-use undertaking, and involving good nonmotorized transport design (walkability)—and are located within one-half mile of a mass or public transit or rail station, or within one-fourth mile of a bus depot or bus stop with scheduled service at intervals of at most 30 minutes between the hours of 6:30 a.m. and 7:00 p.m. Awards 6 points for projects in which the buildings and the project site, including the nearby surroundings, provide opportunities for recreation, education, convenient access to mass transit or rail systems, and community activities. Awards 10 points for projects that are located within a transit village. 
“Transit village” refers to a designation given by the New Jersey Department of Transportation to communities with a bus, train, light rail, or ferry station that have developed a plan to achieve their goals of transit-oriented development. The transit village program is designed to spur economic development, urban revitalization, and private-sector investment around passenger rail stations. Awards 1 point for projects located within one-half mile of public transportation. Awards 1 point for projects within one-fourth mile of a local transit route. Awards 4 points to projects located within one-fourth mile of public transportation that is accessible to all residents, including persons with disabilities, and/or located within a community that has “on demand” transportation, special transit service, or specialized elderly transportation for qualified elderly developments. In addition to the individuals named above, Raymond Sendejas, Assistant Director; Paul Schmidt, Assistant Director; Lauren Calhoun; Melinda Cordero; Delwen Jones; Anar Ladhani; Terence Lam; Matthew LaTour; Sara Ann Moessbauer; Josh Ormond; Andrew Pauline; Linda Rego; and Tim Schindler made key contributions to this report.
The federal government has increasingly focused on linking affordable housing to transit-oriented developments--compact, walkable, mixed-use neighborhoods located near transit--through the Department of Housing and Urban Development's (HUD) housing programs and the Department of Transportation's (DOT) Federal Transit Administration's (FTA) transit programs. GAO was asked to review (1) what is known about how transit-oriented developments affect the availability of affordable housing; (2) how local, state, and federal agencies have worked to ensure that affordable housing is available in transit-oriented developments; and (3) the extent to which HUD and FTA have worked together to ensure that transportation and affordable housing objectives are integrated in transit-oriented developments. To address these issues, GAO reviewed relevant literature, conducted site visits, and interviewed agency officials. Characteristics of transit-oriented developments can increase nearby land and housing values; however, determining transit-oriented development's effects on the availability of affordable housing in these developments is complicated by a lack of direct research and data. Specifically, the presence of transit stations, retail, and other desirable amenities such as schools and parks generally increases land and housing values nearby. However, the extent to which land and housing values increase--or in the rare case, decrease--near a transit station depends on a number of characteristics, some of which are commonly found in transit-oriented developments. According to transit and housing stakeholders GAO spoke with, higher land and housing values have the potential to limit the availability of affordable housing near transit, but other factors--such as transit routing decisions and local commitment to affordable housing--can also affect availability. 
Few local, state, and federal programs are targeted to assisting local housing and transit providers in developing affordable housing in transit-oriented developments. The few targeted programs that exist primarily focus on financial incentives that state and local agencies provide to developers if affordable housing is included in residential developments in transit-oriented developments. However, GAO found that housing developers who develop affordable housing in transit-oriented developments generally rely on local and state programs and policies that have incentives for developing affordable housing in any location. HUD and FTA programs allow local and state agencies to promote affordable housing near transit but rarely provide direct incentives to target affordable housing in transit-oriented developments. Since 2005, HUD and FTA, and more recently DOT, have collaborated on three interagency efforts to promote affordable housing in transit-oriented developments, including (1) an interagency agreement, (2) a HUD-FTA action plan, and (3) a new DOT-HUD partnership. While these interagency efforts have produced numerous strategies, local housing and transit officials told GAO that these strategies had little impact, in part because they have yet to be implemented. Moreover, the agencies have not yet developed a comprehensive, integrated plan to implement all efforts, and without such a plan, the agencies risk losing momentum. GAO has previously identified key practices that could enhance and sustain collaboration among federal agencies; when compared to these practices, GAO found that HUD, FTA, and DOT have taken some actions consistent with some of these practices--such as defining a common outcome. However, weaknesses in agency housing data and analytical transportation planning methods will limit these agencies' ability to effectively monitor, evaluate, and report results--another key collaboration practice. 
GAO found that other collaboration practices, such as establishing compatible policies and procedures, could be taken to strengthen collaboration.
CBP has two priority missions: (1) detecting and preventing terrorists and terrorist weapons from entering the United States, and (2) facilitating the orderly and efficient flow of legitimate trade and travel. CBP’s supporting missions include interdicting illegal drugs and other contraband; apprehending individuals who are attempting to enter the United States illegally; inspecting inbound and outbound people, vehicles, and cargo; enforcing all laws of the United States at the border; protecting U.S. agricultural and economic interests from harmful pests and diseases; regulating and facilitating international trade; collecting import duties; and enforcing U.S. trade laws. There are 317 official ports of entry into the United States. Each port can be composed of one or more individual facilities, such as airports, seaports, or land ports, where CBP officers process arriving passengers. The Port of Buffalo, New York, for example, has airport, seaport, and land port inspection facilities, while the Port of Detroit has only the facility at the Detroit International Airport. CBP headquarters allocates staff to ports. A Director of Field Operations (DFO) is responsible for port activities within a geographic area and serves as a liaison between port management and headquarters. Within ports with multiple port facilities (which may be spread across a wide area), port directors decide whether officers are assigned to airport, seaport, or land port facilities, and individual facility managers are responsible for overseeing day-to-day operations. Port directors are also responsible for ensuring that officers are appropriately cross-trained to support the agency’s mission and to allow for flexibility in assigning officers to various inspection functions and locations within a port. Figure 1 shows the Port of Houston/Galveston’s multiple seaports and one airport. 
At inspection facilities within airports, CBP officers inspect all international passengers wishing to enter the United States mainly to determine their admissibility into the country. Figure 2 shows inspection stations within the inspection facility at Dallas-Fort Worth International Airport in Dallas, Texas. After entering the inspection area, U.S. citizens (or permanent residents) and foreign nationals are directed to two different lines. Foreign national inspections are more complex than U.S. citizen inspections because the inspecting officer has to be familiar with different nations’ passports and visas and be able to identify fraudulent versions of these documents. In addition, foreign nationals must present the I-94 Form. During this process, the officer asks the foreign national passenger questions, such as where he or she resides abroad, where he or she will stay while in the United States, and the intended length of stay. Generally, CBP takes longer to inspect foreign nationals than U.S. citizens or permanent residents. In addition to questioning the passenger and examining documentation, the officer observes the passenger’s behavior as part of his or her assessment of the passenger’s potential involvement in terrorism, criminal activities, or violation of immigration status. The officer also checks records in a variety of databases as well as any relevant and available intelligence information to identify high-risk passengers. If the CBP officer conducting the primary inspection decides that a passenger requires further scrutiny, then that passenger is referred to another CBP officer who conducts a more in-depth secondary inspection. Secondary inspection can involve additional interviews, document reviews, database queries, communication with other law enforcement agencies, observational techniques, and heightened physical inspections. 
After primary or secondary inspection, passengers may be subject to baggage inspection if they have items to declare, such as certain food items or currency or if a CBP officer suspects that they may be involved in illegal activity. Otherwise, if the inspecting officer determines passengers have nothing to declare and do not pose a risk, passengers are allowed to pick up their baggage and leave the inspection facility through the exit control area, where a CBP officer ensures that all passengers have undergone all necessary examinations. In any inspection, if the officer determines that certain passengers pose some risk, are engaged in illegal activity, or are otherwise trying to enter the country unlawfully, they may be returned to their originating country or detained for further legal proceedings. CBP calculates average daily wait times for an airport based on an average of the wait times of all flights that arrive on that day. Because it is an average, this calculation does not represent the wait times for each individual flight. In addition, the wait time recorded for an individual flight does not represent the amount of time that each individual passenger must wait for primary inspection. CBP calculates passenger wait time for individual flights as the time elapsed from the arrival of the first passenger on a flight into the inspection facility to the completion of primary inspection for 98 percent of the passengers on the flight. For example, on a flight that CBP records as having a wait time of 45 minutes, the first passenger to enter the inspection facility may be able to pass through the primary inspection area in less than 10 minutes, while the last 2 percent of passengers may wait more than an hour, because they arrived later to the inspection facility or were mixed in line with other flights. 
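The wait time calculation described above can be illustrated with a short sketch. The passenger times below are hypothetical, and the function names are ours; the report does not describe CBP's actual recording systems.

```python
from math import ceil

def flight_wait_time(arrival_times, completion_times):
    """Wait time for one flight: minutes elapsed from the first
    passenger's arrival in the inspection facility until 98 percent
    of the flight's passengers have completed primary inspection."""
    n = len(completion_times)
    cutoff = sorted(completion_times)[ceil(0.98 * n) - 1]
    return cutoff - min(arrival_times)

def average_daily_wait(flight_waits):
    """CBP's average daily wait time for an airport: a simple mean
    over all flights arriving that day, which therefore masks long
    waits on individual flights."""
    return sum(flight_waits) / len(flight_waits)

# Hypothetical day with three flights (all times in minutes):
waits = [flight_wait_time([0, 5, 12], [8, 20, 35]),  # 35 - 0 = 35
         flight_wait_time([0, 2], [15, 70]),         # 70 - 0 = 70
         flight_wait_time([0, 1, 3], [10, 12, 18])]  # 18 - 0 = 18
print(waits)                      # [35, 70, 18]
print(average_daily_wait(waits))  # 41.0
```

Note how the second flight's recorded wait (70 minutes) is driven entirely by its slowest 2 percent of passengers, while the daily average (41 minutes) obscures that flight altogether, which is the limitation the report describes.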
Figure 3 illustrates the steps arriving passengers take after they deplane until they exit the federal inspection facility and highlights the components of this process that CBP measures as passenger wait times. As illustrated in the figure, the wait time CBP calculates for primary passenger inspection is divided into two components: (1) the time spent waiting in line at the inspection facility and (2) the length of time of the primary inspection. This measurement is focused on primary inspection and does not include the time for passengers to deplane and walk to the inspection area before the primary inspection, nor the time needed for passengers to retrieve baggage and exit the inspection facility after the primary inspection. In addition, this measurement does not take into account time passengers may have to spend in secondary inspection. Prior to September 11, Congress had imposed wait time standards on the INS for processing international passengers. Congress enacted legislation in 1990 requiring INS to process incoming international passengers within 45 minutes. Although the legislation was not specific as to how INS should measure the 45 minutes, INS originally interpreted this requirement to include time spent in line in the inspection facility and the time for primary inspection, the two components measured by CBP. The Enhanced Border Security and Visa Entry Reform Act of 2002 repealed the 45-minute standard as a requirement for processing international passengers. It added a provision requiring that staffing levels estimated by CBP in workforce staffing models be based on a goal of providing immigration services within 45 minutes. The amount of time passengers from international locations have to wait before completing CBP inspections to enter the United States varies within individual airports and across the 20 airports at which CBP records wait times. 
Although wait times vary across airports, on average, CBP processed passengers within 45 minutes during the 2-month period for which data were available. Nonetheless, within a single airport, CBP has recorded wait times for individual flights as long as 5 hours for passengers to complete primary airport inspections, and 15 of the 20 airports had 1 percent or more of their international flights exceed 60 minutes for processing international passengers through primary inspection. Based on our observations and analysis of wait time data, as well as our discussions with airport and airline officials, we concluded that the primary factors affecting wait time are passenger volume, the number of inspection stations available at an airport, and the number of CBP officers available to conduct inspections. However, none of the three factors, in isolation, had a decisive effect on passenger wait times. In January 2005, CBP began using its current methodology for recording average daily wait times for international arriving flights at 20 of the 285 airports that receive international air traffic. As described above, this calculation averages the wait times of all flights arriving that day and therefore does not represent the wait time for each individual flight or the amount of time that each individual passenger must wait for primary inspection. 
Figure 4 illustrates average daily wait times at 20 international airports based on the average time required for the 98th percentile passenger to complete primary inspection at each airport (this applies to figures 4 through 8) and shows that average wait times at 19 of the 20 airports for which CBP maintained data were 40 minutes or less. Airline officials we spoke to cautioned that these data on wait times were not recorded during the peak June through September time periods. The officials stated that wait times recorded during the summer months may be significantly higher than those recorded during off-peak periods. Generally, the longer of the two components of wait time calculated by CBP is the time spent by passengers waiting in line to meet with a CBP officer. According to CBP officials and our own observations, the time spent by the passenger at the primary inspection station communicating directly with the CBP officer is rarely more than 5 minutes, with inspections for U.S. citizens lasting approximately 1 to 2 minutes and for foreign nationals from 3 to 5 minutes. CBP officials told us that if the officer conducting the primary inspection thinks it is taking an unreasonable amount of time given the nature of the inspection and the capacity of the secondary inspection area, he or she will refer the passenger to secondary inspection to allow for a more thorough examination of the passenger without unnecessarily holding up other travelers. While the Enhanced Border Security and Visa Entry Reform Act removed the 45-minute standard as a requirement for processing international passengers through inspection, it added a provision specifying that staffing levels estimated by CBP in workforce models be based upon the goal of providing immigration services within 45 minutes. As shown in the figure above, only Miami International Airport has an average wait time of over 45 minutes. 
However, Miami and other airports do sometimes exceed 60 minutes for processing international passengers through primary inspection, and CBP maintains data on these flights. Figure 5 illustrates the percentage of flights that exceed 60 minutes for processing international passengers at the 20 airports where CBP records wait times. As figure 5 shows, at one airport, Miami, more than 20 percent of flights exceeded 60 minutes to process passengers through primary inspection, while less than 1 percent of flights arriving at other airports, such as Baltimore-Washington International Airport, Minneapolis-St. Paul International Airport, Phoenix Sky Harbor Airport, and Seattle-Tacoma International Airport, exceeded 60 minutes during that time frame. Based on our analysis and observations, along with a general consensus among CBP, airport, and airline officials, we determined that the primary factors affecting wait time are passenger volume, the number of inspection stations available at an airport, and the number of CBP officers available to conduct inspections. Wait times can also be affected by other factors, such as the use of information technology. However, none of these three factors, in isolation, directly affects passenger wait times across airports because of the variability of numerous other factors that influence wait time at airports, such as passengers’ countries of origin and airport configuration. Passenger volume is a primary factor that affects wait time for passengers at airports because large volumes of passengers can lead to more crowded inspection facilities and longer lines. Passenger volume can vary by the time of day, day of the week, or time of year. For example, according to airline officials, international passengers tend to travel early or late in the day to accommodate work schedules. 
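The 60-minute measure underlying figure 5 is a simple share of flagged flights; a minimal sketch, using hypothetical wait times and a function name of our own:

```python
def share_over_threshold(flight_waits, threshold=60):
    """Percentage of an airport's recorded flights whose wait time
    exceeds the threshold (CBP flags flights over 60 minutes)."""
    over = sum(1 for w in flight_waits if w > threshold)
    return 100.0 * over / len(flight_waits)

# Hypothetical set of recorded flight wait times (minutes):
# 2 of 5 flights exceeded 60 minutes.
print(share_over_threshold([35, 70, 18, 62, 40]))  # 40.0
```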
They also said international travel tends to be heavier on Monday and Friday than on other days of the week, which concentrates passenger volume at certain times of day and days of the week. Airline officials also told us that people tend to travel more during the summer and over holidays, which can lead to more crowded inspection facilities and increased wait times during the vacation season. An airport official said flights that exceed 60 minutes for processing generally arrive during these peak passenger volume periods. Figure 6 illustrates average wait times at airports arranged from lowest to highest passenger volume. Although passenger volume is a factor in wait times, it does not directly correlate with wait times. For example, Dallas-Fort Worth and Newark airports had about the same average daily wait times, while Newark had almost twice the passenger volume. Other factors, such as the number of inspection stations or CBP officers on duty, also affect wait times. According to CBP and airline officials, the number of passengers who can be processed within a given time period may be limited by the number of inspection stations available or open at some airports. For example, if an airport has all of its inspection stations in use by CBP officers, adding more officers will have little effect on the number of passengers who can be processed within a given time. Figure 7 lists average wait times at airports arranged from the lowest to greatest number of inspection stations. As shown in the figure, the number of inspection stations also does not necessarily affect wait times directly. For example, although average wait times at Boston’s Logan Airport are about the same as for Atlanta’s Hartsfield-Jackson Airport, Hartsfield-Jackson Airport has about five times the number of inspection stations as Logan Airport. The number of CBP staff available to perform primary inspections is also a primary factor that affects wait times at airports. 
According to CBP officials, the agency strives to place sufficient numbers of officers to fulfill its missions of preventing terrorism and facilitating trade and travel, and part of facilitating trade and travel involves minimizing wait times. Figure 8 illustrates average wait times arranged from lowest to highest CBP staffing levels at the 20 airports where CBP records wait time data. As figures 6, 7, and 8 illustrate, no single factor necessarily has a direct impact on passenger wait times across airports; however, varying combinations of the factors within an individual airport may have an effect. For example, CBP and airline officials in Houston stated that the increase in the number of inspection stations at George Bush Intercontinental Airport, in combination with the addition of new CBP officers, has reduced passenger wait times. Information technology systems used during the inspection process to help CBP officers determine admissibility can potentially affect passenger wait times. These systems can occasionally slow down passenger processing when one or more systems become unavailable for any length of time. Because CBP has procedures in place to continue inspections while the system is brought back online, officials said that this is not a major factor affecting wait times. The officials added that system downtime did not occur frequently or for extended periods. The main system used by CBP officers to process all passengers is the Interagency Border Inspection System, which is designed to facilitate and more effectively control entry of persons into the United States by providing information on passengers’ identities through querying a variety of databases. The Interagency Border Inspection System assists CBP officers in passenger processing and records the results of secondary inspections. The U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program is another system used by CBP to help the officer verify passenger identity. 
Although wait time data kept by CBP do not capture the period prior to the introduction of the US-VISIT program, our analysis of available data and discussions with CBP and airline officials indicate that the program has not significantly increased wait times, since the procedures associated with the system are generally done concurrently with the CBP officers’ other inspection activities. Some airports and airlines took steps to facilitate future increases in passenger volume and minimize passenger wait times. Specifically, three of the five international airports we visited had built new or expanded federal inspection facilities to accommodate future growth in passenger volume and minimize wait times for internationally arriving passengers. Additionally, three of these airports assigned staff to assist passengers in preparing documentation to minimize wait times. Airline officials we spoke to acknowledged that large volumes of arriving passengers may increase wait times, but said that, to accommodate market demand, airlines do not spread flight arrivals evenly throughout the day. According to airport and CBP officials, facility upgrades that increase the number of inspection stations help to minimize passenger wait times by allowing for the more rapid and efficient processing of passengers through inspection facilities. We visited three airports where airport facilities had been upgraded to increase the number of inspection stations and improve the configuration of the inspection facility. For example, in 2004 a total of 12 new CBP inspection stations were constructed at Washington Dulles Airport. Airport and airline officials there said that increasing the number of stations has helped reduce wait times because passengers can now pass through the facility more easily. However, the benefit of adding inspection stations has been limited because, as of June 2005, CBP had not increased staffing levels. 
However, we were not able to verify this because of limited data availability. According to airline officials, to fully maximize the benefit of new or expanded inspection facilities, the number of inspection personnel would need to be increased so that new inspection stations could be staffed. Construction of new terminals and inspection facilities has also taken place at the George Bush Intercontinental Airport in Houston. In Houston, the airport authority financed the construction of a new inspection facility, which opened in January 2005 and increased the number of inspection stations from 34 to 80. Airport, airline, and CBP officials agreed that the new facility, in combination with an increase in officer staffing levels, has reduced wait times at the airport. They stated that this is because the new inspection facility can more easily accommodate the increased passenger volume at the airport and the larger number of CBP officers allows more inspection stations to be used to process international passengers during peak periods. The new inspection facility at Dallas-Fort Worth Airport is scheduled for completion in July 2005 and will increase the number of inspection stations from 30 to 60. Airport officials stated that they expect that the new facility will help to minimize wait times because it will consolidate inspection activities in one area, whereas current facilities divide inspection activities among three separate terminals. Figures 9 and 10 compare the old and new inspection facilities at George Bush Intercontinental Airport in Houston, Texas. Houston’s new facility addresses one of the three factors that could facilitate faster processing of international passengers by increasing the number of inspection stations. The overall construction of the new facility shows a more expansive configuration than the old facility. According to airline and CBP officials, the new facility can accommodate a larger number of passengers. 
According to airport and airline officials, the new inspection facilities at three of the five airports we visited were constructed to increase capacity to accommodate current and projected passenger volume. Planning for these facilities began years in advance, and the federal inspection facilities were approved in advance by CBP or its legacy agencies. CBP is responsible for reviewing and approving design proposals for inspection facilities to ensure that they meet the agency’s security requirements. In each case, the airports or airlines conducted studies estimating future passenger volume to justify the cost of constructing these facilities. For example, the total cost of the new facility in Houston was approximately $440 million, according to airport officials. Airport and airline officials said that these projects were planned, funded, and completed with the expectation that CBP would increase staff for the new facilities as passenger volume increased. However, CBP officials stated that the agency is not legally or contractually required to allocate new staff when inspection facilities are constructed or expanded, and that, in approving new inspection facility design proposals, the agency makes no commitment, implicit or explicit, regarding future staffing levels. Airports and airlines also have taken other steps to minimize passenger wait times. For example, at four of the five airports we visited, airport and airline officials stationed personnel in the inspection facility area to assist passengers in filling out required forms, such as the I-94 Form, as they wait in line for primary inspection. According to airline officials, this assistance helps to reduce delays caused when passengers are turned away from the primary inspection stations because their forms are incomplete or filled out incorrectly. Airport officials at one airport placed Internet terminals in the inspection area to allow passengers to search for address information required for the I-94 Form. 
CBP and airline officials we spoke with said that scheduling large numbers of flights within a short time period, known as “peaking,” could cause longer passenger wait times. According to airport and airline officials, up to half of an airport’s daily volume may arrive within a few hours. For example, as figure 10 shows, over half of the daily international passenger volume at Atlanta Hartsfield Airport arrives between 1:00 p.m. and 5:00 p.m. Airline officials said that market demand and international travel patterns largely determine flight schedules, as follows. Passengers generally leave their city of origin early in the morning or later in the evening in order to work a full day at their destination. To deal with this market demand for flights, airlines schedule their flights in clusters referred to as “banks” that follow these business dynamics. Consequently, they said they have little flexibility to spread out flight schedules and still meet passenger demand for travel times. For example, flights leaving western Europe in the morning generally arrive at eastern U.S. airports between 11 a.m. and 4 p.m. In addition, according to airport officials, passengers prefer arriving during this time frame because it allows them to make connecting flights to other U.S. destinations. CBP has taken steps to increase management flexibility in assigning officers to various inspection functions and to improve the allocation of existing staff in an effort to minimize international passenger wait times and ensure that staff are being used as efficiently as possible. For example, at some airports, facility managers have arranged staff work schedules and used overtime to maximize the number of staff conducting inspections during peak periods. CBP’s One Face at the Border training program is designed to train staff to perform different inspection functions to increase staffing flexibility, but CBP has not established milestones for completing the training. 
CBP is also developing a national staffing model to help in allocating staff across ports and airports nationwide; however, the model does not address weaknesses in Customs and INS models identified in our and the Department of Justice Inspector General’s previous audit work. Agency officials told us that the model was to be completed by April 2005. However, as of June 2005, it had not been finished, and CBP officials had not established milestones for completing and implementing the model. CBP has taken advantage of existing staffing flexibility to help minimize passenger wait times. For example, CBP facility managers told us that they plan officer work shifts so that the largest number of officers is available during peak hours. When the number of officers available to be assigned during peak time shifts is inadequate for passenger processing, the port director or CBP airport manager may use overtime by asking officers to come in early or stay late. Overtime is the most common tool management uses to address increases in passenger volume. CBP has not, however, established targets or milestones—such as having a certain percentage of staff cross-trained by a set date—for port directors to complete its One Face at the Border program to allow for greater flexibility in assigning officers to various functions and locations within airports. In July 2003, CBP began a cross-training effort, One Face at the Border, to integrate the former inspections workforces of Customs, INS, and Agriculture. The intent of this effort was to train legacy Customs inspectors to perform “historical” INS and agricultural inspection activities (such as processing passengers at primary inspection and screening for restricted food items) and for legacy INS inspectors to perform “historical” Customs and agricultural inspection activities (e.g., inspecting passenger baggage) in order to create a unified inspection force and a single primary processing point at ports of entry. 
The officials told us that this effort would allow officers to perform different inspection functions within airports as well as across different facilities. In certain instances where facilities are located geographically close to one another, inspection officers may be transferred to different facilities within a port to accommodate workload changes. For example, CBP officials at the port of Baltimore told us that officers are stationed at the airport during peak volume periods to inspect air passengers and may be moved to the seaport at other times. Managers may also move cross-trained officers among the various inspection functions performed within a specific port facility. For example, two CBP port directors told us that during peak volume periods, they may move officers from baggage or secondary inspection to primary inspection stations, although some airport and airline officials said this may actually increase wait times for passengers picking up baggage or passing through exit control. As of June 2005, CBP had developed and delivered some of the training materials for the One Face at the Border program to all ports and expects to develop and deliver all remaining training materials by the end of 2005. CBP officials said this program is essential for increasing staff flexibility so that staff can conduct different types of inspections within airports. However, CBP officials said it could take a number of years for officers nationwide to complete all required training. While CBP monitors the progress of each port in completing its required training, it has not established milestones for when ports should complete the training program or goals for having some percentage of staff complete the training. Milestones for completing this training program would help CBP assess progress in implementing the program and determine when managers would be able to allocate officers within their ports to areas of greatest need. 
They would also provide a basis for holding responsible officials accountable for implementing the training program. Without milestones for measuring the implementation status of its cross-training program, CBP has no assurance that port directors have the flexibility needed to allocate officers within and among facilities as efficiently as possible. CBP does not systematically assess the number of staff required to accomplish its mission at ports or airports nationwide or ensure that officers are allocated to the airports with the greatest need. CBP’s current approach to allocating officers does not determine the optimal use of CBP inspection staff across all ports. Rather, it assumes the overall allocations are static and relies on port directors to determine the number of staff necessary to accomplish CBP’s mission at airports and other port facilities within their purview. In instances where port directors identify a need for additional staff, for example due to a projected increase in international passenger volume, they are to forward staffing requests to the Director of Field Operations (DFO), who reviews the requests and determines whether they should be forwarded to headquarters for review. CBP human resources officials told us they review these requests and determine whether funds are available to address needs through the allocation of additional staff. CBP headquarters, however, has not provided formal, agencywide guidance to the port directors or DFOs on what factors should be considered to assess staffing needs or where staff should be allocated within a port. Without uniform agency guidance, everyone involved in the process, from port directors to human resource officials, must use their own judgment to determine staffing needs, and CBP cannot be assured that an individual port’s staffing needs are being evaluated consistently or that staff are allocated to the ports with the greatest needs nationwide. 
To provide a more systematic basis for allocating staff, CBP in October 2003 began developing a staffing model based on agencywide criteria to help allocate staff to its ports. The intent of CBP’s staffing model is to reduce the degree of subjectivity in the process of determining staffing needs. It will assist in allocating existing staff across ports by using a uniform set of approximately 30 different criteria, such as passenger and trade volume, that are weighted according to their importance to CBP’s mission. After assessing these criteria, the model is to determine how to allocate the existing officer workforce among ports. CBP officials developing the model said they plan to incorporate elements of two previous staffing models used by Customs and INS. However, as shown in table 1, the new model fails to address three weaknesses identified in our assessments of earlier models used by the legacy agencies upon which CBP’s model is based. Specifically, the model (1) will not take passenger wait times into account as a performance measure to help CBP assess whether staff levels are sufficient to address passenger volume, (2) will not regularly take into consideration field input in determining appropriate staffing levels, and (3) will not be used to assess optimal levels of staff to ensure security while facilitating travel at individual ports and port facilities, including airports. CBP officials told us they have not considered addressing these factors in their staffing model because (1) they do not want to risk security in order to adhere to a time limit, (2) field requests for staffing changes should be assessed by the DFO on an as-needed basis, and (3) it is unlikely that additional inspection personnel will be forthcoming in the current budget climate. Table 1 summarizes these reported weaknesses and CBP’s views regarding the need to address them. 
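The report does not describe the model's computation, only that roughly 30 weighted criteria drive the allocation of the existing workforce. A weighted-criteria allocation of a fixed workforce might work roughly as follows; the criteria names, weights, and port figures below are entirely hypothetical:

```python
# Hypothetical weighted-criteria allocation of a fixed officer workforce.
# Criteria names, weights, and port data are illustrative only; CBP's
# actual model uses roughly 30 criteria that are not listed in the report.
WEIGHTS = {"passenger_volume": 0.5, "trade_volume": 0.3, "seizures": 0.2}

PORTS = {
    "Port A": {"passenger_volume": 80, "trade_volume": 40, "seizures": 10},
    "Port B": {"passenger_volume": 20, "trade_volume": 60, "seizures": 30},
}

def allocate(total_officers):
    """Score each port as a weighted sum of its criteria values, then
    split the existing workforce in proportion to the scores."""
    scores = {port: sum(WEIGHTS[c] * v for c, v in data.items())
              for port, data in PORTS.items()}
    total = sum(scores.values())
    return {port: round(total_officers * s / total)
            for port, s in scores.items()}

print(allocate(100))  # {'Port A': 61, 'Port B': 39}
```

Note that a scheme like this only redistributes a fixed headcount; it cannot, by construction, say whether 100 officers is enough, which mirrors the weakness the report identifies.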
The Enhanced Border Security and Visa Entry Reform Act of 2002 repealed the 45-minute standard for processing international air passengers through inspection that was established for INS. However, it added a provision requiring CBP to base staffing level estimates from its workforce model on the goal of providing immigration services within 45 minutes. CBP officials said that minimizing wait times is not a high priority because officials do not want to risk sacrificing security in order to adhere to a time limit. However, when a flight exceeds 60 minutes for processing passengers through primary inspection, CBP requires that port directors provide an explanation for why this occurred and take corrective actions. Including a goal of providing inspection services within 45 minutes for international air passengers in its staffing model would assist CBP in determining the number of officers required to fulfill its missions of facilitating trade and travel while at the same time ensuring security and help identify airports with the greatest disparity between staffing requirements and current allocation levels. Our prior work has shown that involving staff in all phases of workforce planning can help improve its quality because staff are directly involved with daily operations. Plans for CBP’s model rely on input from the ports and port facilities, including airports, regarding passenger and trade volume; passenger and trade complexity variables, such as number and value of cargo seizures; number of airport terminals; mix of passengers; arrests; and level of on-board staff. However, CBP’s efforts to solicit information from field officials do not occur formally on a regular basis or include guidance to port directors and DFOs on how to assess staff levels, and as a result, CBP does not receive timely and consistent input on critical staffing needs to help them adjust staff levels to ensure that staff are used as efficiently as possible. 
CBP officials said that they do not have definite plans to ask for staff needs assessments on a regular basis. For example, in November 2004, shortly after we initiated our review, CBP headquarters issued its first formal letter since the agency’s creation in March 2003 soliciting DFOs for their input on critical staffing needs. The solicitation did not include guidance or criteria for DFOs or port directors on how to assess their staff levels to help ensure that headquarters’ staffing decisions are based on consistent data from all ports. Furthermore, the request was not consistently communicated to all CBP locations; facilities managers at two of the five airports we visited after the solicitation was sent out said that they were unaware of the request for information. CBP officials told us that it is not headquarters’ responsibility to evaluate staffing requests from individual ports. Rather, it is the responsibility of the DFOs to evaluate staffing needs at ports on an ongoing basis. Nonetheless, regular, formal input from facility and port management would help CBP headquarters ensure that staff are used as efficiently as possible by aligning staffing decisions with the needs and realities of CBP ports nationwide. CBP’s plans for the staffing model indicate that it will be used to allocate existing staff across ports (for example, by reallocating positions made available through attrition), but it will not determine whether current staff levels are appropriate or establish an optimal number of staff needed at individual ports or airports. CBP officials stated they have not assessed overall staffing needs across ports or airports and do not plan to do so with the proposed model because they do not expect to receive any additional resources given the current budget climate. However, according to our primary human capital principles, agencies should identify gaps in their workforce to provide a basis for proper staffing to meet program goals. 
These workforce gap analyses can help justify budget and staffing requests by connecting program goals and strategies with the budget and staff resources needed to accomplish them. According to CBP officials, the model, when completed, will not identify such gaps because, absent additional resources, the only way to address them would be to relocate officers. The officials said this is not a viable solution because of the costs associated with relocating CBP officers. According to CBP, the cost of moving a single CBP officer from one port to another is $60,400 on average. Determining an optimal number of officers for airports would help CBP link its budget requests to mission priorities, allowing the agency to determine which facilities have the greatest disparity between staffing requirements and current allocation levels and helping ensure the most efficient allocation of new staff. CBP officials told us that they set an original deadline of April 2005 for completing the proposed staffing model. As of June 2005, CBP had not finalized its model and did not have revised milestones or a schedule to measure its progress toward completing and implementing the model. Until CBP finalizes its staffing model and establishes a schedule for completing and implementing it, it is uncertain when the model will be available to provide a regular and consistent method for efficiently allocating staff. As it performs its official missions, CBP maintains two overarching and sometimes conflicting goals: increasing security while facilitating legitimate trade and travel. To help achieve these goals, CBP has taken steps to increase staffing flexibility and improve the allocation of staff to help ensure that wait times are minimized and that existing levels of staff are being used as efficiently as possible. 
To that end, CBP initiated its One Face at the Border program to cross-train officers from its legacy agencies with the intention of providing more flexibility in its placement of staff. However, CBP’s lack of milestones for ports to complete this cross-training makes it difficult for the agency to determine when training will be completed within individual ports and to hold port directors accountable for having their staff complete training. Furthermore, the lack of milestones affects port directors’ and facility managers’ ability to allocate officers within airports to different functions. We recognize that ports experience different traffic flow patterns and demands, and that taking staff offline to train them may require overtime or may increase passenger wait times. Nevertheless, with established milestones, CBP would be better able to measure the progress of its cross-training program across ports and maximize port staffing flexibility. CBP is also developing a staffing model to assist in determining officer allocation levels. In doing so, CBP has the opportunity to take a proactive approach to managing its human capital and address historical weaknesses of its legacy agencies’ systems for allocating personnel. Although CBP’s staffing model is a step in the right direction, we identified certain weaknesses that can affect CBP’s ability to place its staff to best advantage in addressing passenger wait times. While most airports were able to process passengers within 45 minutes on average during the period of time we examined, wait times for individual flights still exceeded 60 minutes five percent or more of the time at four of the 20 airports where CBP records wait time data. CBP’s exclusion of wait time standards for inspecting international air passengers from its planned model limits its ability to manage staff to accomplish the second part of its dual mission: fostering international trade and travel. 
Furthermore, CBP’s lack of regular and formal input from airports and other port facilities limits the agency’s ability to ensure that its staffing decisions align with the needs and realities of its ports nationwide. Using the planned model to determine the allocation of existing staff without also determining an optimal number of staff for airports limits the agency’s knowledge of which ports have the greatest gaps between optimal and existing staff levels. Finally, CBP has not fully addressed what factors will be included in the model currently under development or set milestones for completing and implementing it. By not addressing these weaknesses, CBP is bypassing an opportunity to develop information that would further enhance management decision-making concerning staff allocation and staff needs and support budget justifications. To assist CBP in its efforts to develop a staffing model that will help provide a basis for budget justifications and management decision-making and to establish goals and performance measures to assess its progress in completing its staffing model and its cross-training program, we recommend that the Secretary of the Department of Homeland Security direct the Commissioner of U.S. 
Customs and Border Protection to take the following five actions: provide ports with targets and milestones for having staff cross-trained to measure the progress of its One Face at the Border program while being sensitive to work demands in setting training schedules; incorporate wait time performance measures in the staffing model currently under development as required by the Enhanced Border Security and Visa Entry Reform Act of 2002; use the staffing model under development to determine the optimal number of staff at each airport nationwide; systematically solicit input from the field on staffing needs and include uniform, agencywide guidance on how field offices should assess their needs and environment; and set out milestones for completing CBP’s planned staffing model. DHS provided written comments on a draft of this report, and these comments are reprinted in appendix II. DHS concurred with three of our recommendations: to use CBP’s staffing model to determine the optimal number of staff at each airport nationwide, to systematically solicit input from the field on CBP staffing needs, and to set milestones for completing CBP’s planned staffing model. DHS said that CBP had efforts underway and additional plans to implement these recommendations. DHS partially concurred with our remaining two recommendations. With respect to our recommendation to provide ports with targets and milestones for having staff cross-trained, DHS said that CBP believes it is not advantageous to implement across-the-board milestones, citing the need to coordinate training with appropriate work assignments so that the training can be directly applied. CBP officials said that it could take a number of years for officers to complete training nationwide and noted that they plan to begin computing training requirements through fiscal year 2007. We continue to believe it is important to establish milestones for cross-training CBP staff. 
CBP told us that the cross-training program is essential for increasing staff flexibility and enabling staff to properly conduct different types of inspections within airports. Having milestones for individual ports to complete required training would help improve accountability and planning. Given CBP’s concern about workload demands and the timing of training, the milestones could be established in consideration of the training needs and operational environment of each port. The planning process described by CBP could provide a basis for establishing these milestones. With regard to our recommendation that CBP incorporate wait time performance measures in the staffing model currently under development, DHS said that CBP will consider (DHS emphasis) incorporating wait times for future resource allocation. We continue to believe that the wait time standards should be incorporated into CBP’s planned workforce staffing model. We note that such action is required by the Enhanced Border Security and Visa Entry Reform Act of 2002. In addition, incorporating wait time standards would help CBP measure the extent to which it is achieving its mission of facilitating trade and travel while ensuring security. It would also allow CBP to identify airports with the greatest disparity between optimal and existing staff allocation levels. We plan to provide copies of this report to the Secretary of the Department of Homeland Security, the Commissioner of U.S. Customs and Border Protection, and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777. Key contributors to this report are listed in appendix III. 
To assess CBP’s progress in minimizing wait times for international air passengers while ensuring security, we analyzed (1) the wait times at the 20 U.S. international airports that receive most of the international traffic and factors affecting wait times; (2) the steps airports and airlines have taken to minimize passenger wait times; and (3) how CBP has managed staffing to minimize wait times across airports. Specifically, to determine the wait times at U.S. airports and factors affecting wait times, we analyzed CBP wait time data collected between October 1, 2004, and March 31, 2005. CBP’s calculation of wait time changed on January 10, 2005, and we determined the difference in wait times between the time periods of October 1, 2004, through January 9, 2005, and January 10, 2005, through March 31, 2005. We calculated average wait times and the average percentage of flights exceeding 60 minutes for 20 major U.S. airports based on CBP’s data. We assessed the reliability of the passenger volume, wait time, inspection station, and inspection staffing data by (1) reviewing existing information about the data and the systems that produced them, (2) interviewing agency officials knowledgeable about the data, and (3) comparing what we observed at the selected airports we visited with the data. We determined that the data were sufficiently reliable for the purposes of this report. For the purpose of calculating the percentage of flights exceeding 60 minutes for primary passenger inspection, the data are sufficiently reliable to compare airports but not sufficiently reliable to serve as a performance measure. We found that at some airports numerous flights had recorded wait times of exactly 59 or 60 minutes. If the performance standard were changed to 59 or 60 minutes, the percentage of flights exceeding this threshold would differ from that reported in Figure 5. 
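The threshold sensitivity described above can be sketched in a few lines of Python. The per-flight wait times below are synthetic values chosen for illustration, not CBP data:

```python
# Synthetic per-flight primary-inspection wait times (minutes) for one
# airport; invented for illustration only.
waits = [22, 35, 41, 59, 60, 60, 61, 75, 48, 52]

def pct_exceeding(waits, threshold):
    """Percentage of flights whose wait time strictly exceeds the threshold."""
    return 100 * sum(w > threshold for w in waits) / len(waits)

avg_wait = sum(waits) / len(waits)
over_60 = pct_exceeding(waits, 60)  # counts only waits strictly above 60
over_59 = pct_exceeding(waits, 59)  # also sweeps in flights at exactly 60
```

In this toy data, moving the cutoff from 60 to 59 minutes doubles the reported exceedance because two flights sit at exactly 60 minutes, which is the kind of sensitivity that makes the measure unreliable as a performance standard.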
The data should be viewed as limited indicators of overall wait times at airports because the available data spanned only two and one-half months of wait times and did not include the peak travel periods of June through September, when wait times may be higher. To determine the factors affecting wait times, we interviewed CBP officials at both headquarters and at the port level, such as port directors, who are responsible for overall management of the port, including airports. We also interviewed selected airport and airline officials who are involved with international passenger processing and could provide perspective on what factors affected wait times at U.S. airports. In addition, we interviewed officials at airport and airline associations who provided us with international passenger volume statistics and contacts for officials at the locations we visited. To determine the steps airports and airlines have taken to minimize passenger wait times, we visited five international airports based on their unique characteristics and geographic dispersion. The airports selected were George Bush Intercontinental Airport, Dallas-Fort Worth International Airport, Washington Dulles International Airport, Baltimore-Washington International Airport, and Hartsfield Atlanta International Airport. At these five airports, we interviewed airport and airline officials who were involved in international passenger processing issues to learn how they interacted with CBP to help speed passenger processing. We also reviewed documentation provided to us by officials at three airports on assessments they had produced on the number of stations and CBP officers needed at their airports to process passengers within certain time limits. We observed the inspection facilities at each of the five airports visited to compare the capacities of and constraints on passenger processing at each. 
Specifically, we observed facility upgrades where airports had either built an entirely new facility or added inspection stations to existing facilities. To assess how CBP has managed staffing to minimize wait times across airports, we interviewed CBP officials at headquarters and from the five selected airports. For example, we interviewed port directors and other field-level officials to gather perspectives on what options are available to CBP field managers to manage staff to improve wait times at airports. To analyze how CBP’s cross-training program affects the agency’s ability to allocate staff to airports, we spoke with officials responsible for developing and delivering training curriculums to the various ports, and we examined these curriculums and their delivery schedule. To determine how CBP currently allocates staff, we spoke with officials in the budget, human resources, and planning offices in CBP’s Office of Field Operations. We also reviewed and evaluated documentation on CBP’s policies and procedures for allocating staff to ports. To understand and evaluate CBP’s staffing model under development, we spoke with agency officials responsible for planning and implementing the model’s development and analyzed the criteria associated with the model. We also reviewed our and the Department of Justice Inspector General’s prior work on previous models developed for the U.S. Customs Service and the Immigration and Naturalization Service and compared those findings with the new model. We performed our work from October 2004 to June 2005 in accordance with generally accepted government auditing standards. Leo Barbour, Grace Coleman, Deborah Davis, Nancy Finley, Christopher Keisling, Jessica Lundberg, Robert Rivas, and Gregory Wilmoth made significant contributions to this report.
While the Enhanced Border Security and Visa Entry Reform Act of 2002 repealed a 45-minute standard for inspecting international passengers, minimizing wait times at airports remains an area of concern for U.S. Customs and Border Protection (CBP). Shortly after its creation in March 2003, CBP assumed inspection functions from the Immigration and Naturalization Service, the U.S. Customs Service, and the Department of Agriculture. The new agency's priority missions are to prevent terrorism and to facilitate travel and trade. To assess CBP's efforts to minimize wait times for international air passengers while ensuring security, this report answers the following questions: (1) What are the wait times at the 20 U.S. international airports that receive most of the international traffic, and what factors affect wait times? (2) What steps have airports and airlines taken to minimize passenger wait times? (3) How has CBP managed staffing to minimize wait times across airports? The amount of time passengers from international locations have to wait before completing CBP inspections to enter the United States varies within and across airports. On average, CBP processed passengers within 45 minutes during the 2-month period for which data were available, although some flights had significantly longer wait times. Based on our observations and analysis as well as our discussions with airport and CBP officials, we determined that the primary factors affecting wait time are passenger volume, the number of inspection stations available at an airport, and the number of CBP officers available to conduct inspections. These factors, in different combinations at each airport, affect passenger wait times. Three of the five international airports we visited had built new or expanded federal inspection facilities to accommodate future growth in passenger volume and minimize wait times for internationally arriving passengers. 
Additionally, some airports assigned staff to assist passengers in preparing documentation to minimize wait times. Airline officials we spoke to acknowledged that large volumes of arriving passengers may increase wait times, but said that, to accommodate market demand, airlines do not spread flight arrivals throughout the day. CBP, in its efforts to minimize passenger wait times at airports, has taken steps to increase the efficient use of existing staff at airports. For example, CBP is cross-training its officers so that they can conduct different types of inspections. CBP is also developing a staffing model to allocate staff among its ports. However, the new model fails to address weaknesses identified in assessments of staffing models used previously by Customs and INS, such as not including wait times as a performance measure. CBP also has not developed milestones for completing its staffing model and cross-training program at all ports. Until these weaknesses are addressed, CBP will be hampered in forming a basis for management decision-making concerning staff allocation and staff needs and in providing budget justifications.
The H-2A program was preceded by several other temporary worker programs designed to address farm labor shortages in the United States. During World War I, the Congress authorized the issuance of rules providing for the temporary admission of otherwise inadmissible aliens, and this led to the establishment of a temporary farm labor program designed to replace U.S. workers directly involved in the war effort. Similarly, initially through an agreement with Mexico, a guest worker program was authorized during World War II that brought in over 4 million Mexican workers, called “braceros,” from 1942 to 1964 to work on farms on a seasonal basis. Although the Bracero program expanded the farm labor supply, the program also affected domestic farm workers through reduced wages and employment, according to a 2009 Congressional Research Service report. The Bracero program has been criticized by labor groups, which identified issues such as mistreatment of workers and lax enforcement of work contracts. While the Bracero program was still in effect, the Immigration and Nationality Act of 1952 (INA) established the statutory authority for a guest worker program that included workers performing temporary services or labor, known as “H-2” after the specific provision of the law. The Immigration Reform and Control Act of 1986 amended the INA and effectively divided the H-2 program into two programs: the H-2A program expressly for agricultural employers and the H-2B program expressly for nonagricultural employers. The H-2A program was created to help agricultural employers obtain an adequate labor supply while also protecting the jobs, wages, and working conditions of U.S. farm workers. The H-2A law and regulations contain several requirements to protect U.S. workers from adverse effects associated with the hiring of temporary foreign workers and to protect foreign workers from exploitation. 
Under the program, employers must provide H-2A workers a minimum level of wages, benefits, and working conditions. For example, employers must pay a prescribed wage rate, provide the workers housing that meets minimum standards for health and safety, pay for workers’ travel costs to and from their home country, and guarantee that workers will be paid for three-quarters of the work contract even if less work is needed (see table 1 for more information about the conditions of employment that employers are expected to provide workers). In fiscal year 2011, Labor received about 4,900 employer applications requesting permission to hire H-2A workers. State issued about 55,000 H-2A visas in fiscal year 2011, and about 94 percent of these visas were processed by Mexican posts, according to data reported by State. Employers requested H-2A workers to help support the production of various commodities, such as fruit, vegetables, tobacco, and grain. While many of these employers requested help with general farm work, others sought workers with special skills, such as sheepherders or combine operators. Employers in some states rely more heavily on H-2A workers to meet their labor needs. In fiscal year 2011, Labor reported that over half of the H-2A positions it certified were located in five southeastern states—North Carolina, Florida, Georgia, Louisiana, and Kentucky. Although California is the largest producer of agricultural products in the country, the state is ranked thirteenth in its employment of H-2A workers, according to a recent Labor report. H-2A workers are expected to work temporarily and must leave the country once the temporary work contract is complete, but may return in future years to meet employers’ seasonal needs under specific circumstances. In fiscal year 2011, about 27 percent of H-2A employers requested H-2A workers for 6 months or less and about 73 percent of employers requested workers for 7 to 12 months. 
H-2A workers represent a small proportion of the approximately 1 million hired agricultural workers that the U.S. Department of Agriculture estimates are in the United States, many of whom are not legally authorized to work in the country (referred to as undocumented workers). Research suggests that about half of all U.S. agricultural workers are undocumented. An employer may inadvertently hire undocumented workers if the workers give the employer fraudulent documents. Employers may also choose to violate the law and knowingly hire undocumented workers rather than employing U.S. workers or participating in the H-2A program and meeting its associated requirements. However, employers knowingly hiring undocumented workers rather than using the legal H-2A process risk penalties or workforce disruption through DHS’s enforcement of immigration law or from state actions that may affect the availability of undocumented workers. To request H-2A workers, employers apply consecutively to their state workforce agency, Labor, and DHS; and prospective workers apply to State for H-2A visas. Under the law and Labor’s H-2A regulations, state workforce agencies, Labor, and employers are subject to specific deadlines for processing H-2A applications (see fig. 1). DHS and State are not subject to processing deadlines under relevant statutes and regulations, according to agency officials. In fiscal year 2011, most employers’ applications for H-2A workers were approved, but some employers experienced delays in having their applications for H-2A workers processed. 
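The deadline structure discussed in this report (for instance, Labor certification at least 30 days before the employer's date of need) reduces, computationally, to a simple date comparison. This sketch is illustrative only; the function name and example dates are hypothetical and not part of any actual Labor or CBP system:

```python
from datetime import date, timedelta

def met_deadline(decision_date: date, date_of_need: date, days: int = 30) -> bool:
    """True if the certification decision was issued at least `days` days
    before the employer's date of need. Hypothetical helper for illustration."""
    return decision_date <= date_of_need - timedelta(days=days)

# Example: a decision 45 days ahead of the date of need meets a 30-day
# deadline; a decision only 10 days ahead does not.
on_time = met_deadline(date(2011, 3, 1), date(2011, 4, 15))
too_late = met_deadline(date(2011, 4, 5), date(2011, 4, 15))
```

A late decision compresses the time remaining for the employer to petition DHS and for workers to obtain visas from State, which is why the timeliness figures below matter downstream.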
Labor approved 94 percent of the H-2A applications for foreign agricultural workers and processed 63 percent of approved applications by the statutory deadline of at least 30 days prior to the date workers were to begin work. However, Labor did not process 37 percent of applications by the deadline, including 7 percent of applications approved less than 15 days before workers were needed, leaving little time for employers to petition DHS and for workers to obtain visas from State. According to Labor officials, employers’ failure to provide required documentation, such as an approved housing inspection, contributes to processing delays. DHS approved 98 percent of the employer petitions for H-2A workers in fiscal year 2011, and about 72 percent of these petitions were processed within 7 days. However, 28 percent took longer, and DHS took a month or longer to process 6 percent of the petitions (see fig. 2). An official at DHS told us that employers have up to 84 days plus the applicable mailing time to provide additional documentation requested by the agency, which can significantly affect how long it takes the agency to process a petition. To process applications more efficiently and provide better customer service, Labor and DHS have taken steps to create new electronic applications that will allow employers to file for H-2A workers online, but development and implementation of both applications have been delayed (see table 2). Federal law and executive orders provide that federal agencies are to be customer service-focused and use technology to improve the customer experience. Accordingly, in fiscal year 2009, Labor implemented a web-based system for two of its other labor certification programs that allows employers to file applications online and the agency to process them electronically. Labor is currently in the process of developing an online H-2A application to add to its existing web-based filing system, but it has been delayed. 
Specifically, in October 2010, Labor began designing an online H-2A application for employers that it planned to deploy in August 2011. However, Labor officials told us the online application was delayed because the agency could not award the contract to develop it while operating under a provisional budget based on a continuing resolution. Since then, Labor completed the design of the online H-2A application and in June 2012 awarded the contract to develop, test, and implement it. Labor officials told us they anticipate the online H-2A application will be available for use by employers by the end of 2012 and, according to the development contract, the online application should be available to employers on November 15, 2012. According to Labor officials, the online application will allow employers to create account profiles and check the status of their H-2A applications. In addition, Labor officials said the online H-2A application would also result in faster application processing, reduced costs, better customer service, and improved data quality. DHS also plans to implement an online petition for H-2A workers, but the agency has experienced delays and is in the process of developing a schedule for completing this work. The agency planned to deploy an online H-2A petition in October 2012 as part of its Transformation Program, which aims to replace the paper-based systems currently used to process petitions with an electronic system. However, the Transformation Program itself has been delayed several times since its inception in 2005, as we have previously reported, and officials told us they have not started work on the online H-2A petition and do not know when it will be completed. In prior work on the Transformation Program, we found DHS was managing the program without specific acquisition management controls, such as reliable schedules, which contributed to missed milestones. 
DHS officials said they were addressing this report’s recommendations and are in the process of developing an integrated master schedule for all Transformation activities, including the online H-2A petition, in accordance with GAO best practices outlined in the report. Once the online petition for H-2A workers is available, employers will be able to file all required documents electronically to petition for H-2A workers, create account profiles, and check the status of their applications. In addition, the agency could streamline benefits processing by eliminating redundant data entry and reducing the number of required forms. Recently and over the course of our review, in addition to taking steps to modernize the H-2A application process, federal agencies have taken a number of other steps to improve employers’ experience with the application process. Specifically, Labor made changes to its review process to informally resolve issues with employers and reduce unnecessary delays and appeals. Labor officials told us that, in 2011, they piloted using e-mail to communicate with employers in 10 states about their H-2A applications. In March 2012, Labor began using e-mail to communicate with employers in all states about their applications. Labor also changed its procedures so that it can make corrections to minor errors on an employer’s H-2A application—such as adding a missing phone number—after obtaining the employer’s permission via e-mail to correct the error. In February 2011, Labor instituted a policy that gives employers up to 5 additional days to submit required documentation on their H-2A applications rather than automatically denying them because all of the required documentation was not submitted by the deadline. In addition to the changes outlined above, since implementing its new regulations in March 2010, Labor provided employers with more guidance about the requirements of the H-2A program in a variety of formats (see table 3). 
Labor officials said these efforts resulted in improved timeliness and fewer appeals in recent months. Our analysis of Labor’s data showed that the agency’s timeliness remained relatively unchanged, although the percentage of applications for which deficiency notices were issued and the number of appealed decisions declined substantially over that period. For the first half of fiscal year 2012, Labor processed 61 percent of certified applications at least 30 days prior to the employer’s date of need and issued deficiency notices for 38 percent of employer applications. Sixty employer appeals were filed during the first half of fiscal year 2012. Several employers we interviewed reported that they did not understand the H-2A program requirements because Labor’s decisions seemed inconsistent. A number of the inconsistencies employers cited concerned job order terms and conditions, the acceptability of which varies by state. Labor officials told us they strive for consistency and have many checks in place to ensure consistent decisions. Specifically, they said analysts in Labor’s processing center follow detailed standard operating procedures, and the center has multiple quality assurance methods to ensure consistency, including supervisory review, peer review, and a quarterly quality assurance process. In addition, according to Labor officials, processing center analysts are given an overview of the H-2A program, study the regulations and standard operating procedures, and shadow a more seasoned employee before receiving their own cases to adjudicate. There are also periodic training classes that address adjudication issues that have arisen during the last calendar year. Our internal control standards state that agency managers should identify the knowledge and skills needed for various jobs and provide necessary training, among other things. GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1999). 
state workforce agencies are directed to apply a prevailing practice standard to determine whether the frequency with which an employer intends to pay H-2A workers is acceptable, while states can use a more subjective normal and common practice standard to determine whether job qualifications, such as how much experience is required, are acceptable (see table 4). In 1988, Labor provided states with an H-2A Program Handbook that included guidance on how to make these decisions and encouraged states to administer formal surveys to determine acceptable practices. If the state workforce agency cannot use a formal survey, Labor’s guidance suggests states make these determinations using other information sources, such as staff knowledge and experience, informal surveys, reviews of job orders used by non-H-2A employers, or consultation with experts in agriculture or farm worker advocates. In 2011, Labor began posting results from states’ prevailing practice surveys online to help employers write job orders that are consistent with prevailing, normal, and accepted practices. Labor’s guidance to states for determining acceptable practices, however, is broad and not prescriptive, leading states to apply varied methods, some of which may be insufficient. For example, the Administrative Law Judge who ruled on the Massachusetts apple and vegetable growers’ appeal of Labor’s initial decision to prohibit experience requirements did not consider the Massachusetts state workforce agency’s prevailing practice survey in his ruling because of its design flaws. Further, two employer representatives told us they considered state prevailing practice surveys to be unreliable and inconsistent in their coverage. 
In addition, officials in the three states we visited said they did not include questions about certain terms and conditions in formal surveys and used different methods to determine whether a particular practice was acceptable: two states reviewed job orders filed by non-H-2A employers; the other state informally surveyed non-H-2A employers in person. One employer representative expressed frustration that neighboring states used different methods to determine acceptable practices for the same crop and that the results differed. DHS also has taken several steps to improve employers' experience with the H-2A application process. Specifically, the agency took steps to expedite petitions for H-2A workers and provide more guidance to employers. In October 2007, DHS directed its employees to expedite the handling and adjudication of H-2A petitions. According to our analysis, the agency's processing times have improved in recent years. From fiscal year 2006 to fiscal year 2011, the percentage of petitions approved within 1 week increased from about 34 percent to 72 percent. At the same time, the percentage of petitions that took 1 month or longer to approve declined from about 11 percent to about 6 percent. In July 2010 and June 2011, DHS invited employers to participate in teleconferences to discuss employers' difficulties with some of its new systems and procedures. In addition, the agency posted summaries of the teleconferences and answers to employers' frequently asked questions on its Web site. State has addressed employer concerns with the H-2A visa application process by hosting face-to-face meetings with employers and other key stakeholders, making improvements to its worker processing procedures, and taking steps to increase the capacity of its Monterrey consulate to process H-2A visas. In 2012, State officials said they reached out to Labor to discuss H-2A related issues.
They also said the two agencies are formalizing working groups in part to improve information sharing. State also meets with employers and other stakeholders at annual meetings that bring together representatives from Labor, DHS, and State. Officials from Labor and DHS, and State's contractor attended the most recent of these meetings, held in Texas in January 2012. A representative of an employer association who attended this meeting told us it was helpful to have representatives from all three agencies there to answer questions. After hearing at the January 2012 meeting that some employers had difficulties getting their Mexican H-2A workers processed by their date of need, State directed employers with approaching dates of need to request emergency appointments for their workers to be processed at posts other than the Monterrey consulate. State officials noted that all Mexican posts have the capacity to process H-2A visa applications and suggested that applicants can visit other posts if it is difficult to get appointments at the Monterrey consulate. State also developed new procedures to better enable it to handle large groups of workers. In addition, State is expanding its Monterrey consulate, which currently handles most of the H-2A visas processed. Officials said the new facility is scheduled to open in 2014, although they were uncertain whether future staffing levels at the facility would increase. The H-2A program is a means through which agricultural employers can legally hire temporary foreign workers when there is a shortage of U.S. workers. The H-2A application process consists of a series of sequential steps conducted by various agencies, none of which bears responsibility for monitoring or assessing the performance of the process as a whole. Negotiating this largely paper-based process can be time consuming, complex, and challenging for employers.
The associated difficulties can impose a burden on H-2A employers that is not borne by employers who break the law and hire undocumented workers. Although Labor and DHS have taken some steps to incorporate new technologies, delays in the development of electronic application filing systems continue burdening employers with paperwork and may be consuming more resources from federal agencies than necessary. In addition, the absence of systems to collect data on the reasons for processing delays makes it difficult for these agencies to identify why employer applications are initially rejected, to target their efforts to address the most important issues that challenge employers, and to improve performance. Meanwhile, employers who require workers at different points of the season must bear the additional costs of submitting paperwork to multiple agencies for each set of workers. In addition, employers continue to express confusion about how state workforce agencies and Labor are applying Labor’s new regulations. Without additional clarification and transparency, employers may continue to submit unacceptable paperwork that requires extra resources from all parties to process. As immigration rules are tightened and the economy improves for U.S. workers, more employers may need to use the H-2A program to obtain foreign workers. This potential influx of new users could exacerbate existing problems if changes are not made to improve the application process. To improve the timeliness of application processing, as part of creating new online applications, we recommend that the Secretaries of Labor and Homeland Security: develop a method of automatically collecting data on the reasons for deficiency notices, requests for additional evidence, and denials, and use this information to develop strategies to improve the timeliness of H-2A application processing. 
Such information could help the agencies determine whether, for example, employers may need more guidance or staff may need more training. To reduce the burden on agricultural employers and improve customer service, we recommend that the Secretary of Labor: permit the use of a single application with staggered dates of need for employers who need workers to arrive at different points of a harvest season. Employers could still be required to submit evidence of their recruitment efforts, but would not be required to resubmit a full application for each set of workers needed during the season. To promote consistency and transparency of decisions made about the acceptability of employer applications and clarify program rules, we recommend that the Secretary of Labor: review and revise, as appropriate, guidance provided to state workforce agencies on methods to determine the acceptability of employment practices. This guidance should be made available to employers and published on Labor’s Web site. We provided a draft of this report to Labor, DHS, and State for review and comment. State had no comments. Labor and DHS provided written comments which are reproduced in appendices I and II. Labor and DHS also provided technical comments, which we incorporated as appropriate. DHS concurred with our recommendation that the agency develop a method to automatically collect additional data through its forthcoming electronic application system to improve the timeliness of application processing. Similarly, Labor agreed with our recommendation that the agency develop a method of automatically collecting data on the reasons for deficiency notices and use this information to develop strategies to improve the timeliness of H-2A application processing and noted that it would explore the resources required to collect such information as part of its online application system. 
Labor also agreed with our recommendation that it update the guidance it provides to state workforce agencies on methods to determine the acceptability of employment practices. Labor did not agree with our recommendation that it allow employers to file a single application per season for workers arriving on different start dates, stating that the department’s regulations define the date of need as the first date the employer requires the services of all H-2A workers that are the subject of the application, not an indication of the first date of need for only some of the workers. Labor stated that having each employer file a single application with staggered dates of need would result in one recruitment for job opportunities that could begin many weeks or months after the original date of need, which could nullify the validity of the required labor market test. We are not recommending that employers conduct a single labor market test corresponding with their earliest date of need. Employers should still be required to submit evidence of their recruitment efforts for every start date listed on each application, but we believe they should not be required to resubmit a full application package for each set of workers needed during a season. Labor also expressed concern that our report points to the experiences of some employers or those of a single employer to support our conclusions. As noted earlier in this report, information obtained from our interviews cannot be generalized to all states or all agricultural employers. In addition, the illustrations used in this report highlight challenges expressed by numerous employers with whom we spoke, even when we used one employer’s experience as an example. Further, as we noted previously, agency data are not available to document the extent of some employer challenges, such as whether workers arrive by the date they are needed by employers. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Homeland Security, Labor, State, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-7215 or moranr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the individual named above, Betty Ward-Zukerman, Assistant Director; Hedieh Rahmanou Fusfield, Jeffrey G. Miller, and Cathy Roark made key contributions to this report. Also contributing were Hiwotte Amare, James Bennett, Kathy Leslie, Jonathan McMurray, Jean McSween, Kathleen van Gelder, and Craig Winslow.
The H-2A visa program allows U.S. employers anticipating a shortage of domestic agricultural workers to hire foreign workers on a temporary basis. State workforce agencies and three federal agencies—the Departments of Labor, Homeland Security, and State—review applications for such workers. GAO was asked to examine (1) any aspects of the application process that present challenges to agricultural employers, and (2) how federal agencies have addressed any employer challenges with the application process. GAO analyzed Labor and DHS data; interviewed agency officials and employer representatives; and conducted site visits in New York, North Carolina, and Washington. Over 90 percent of employer applications for H-2A workers were approved in fiscal year (FY) 2011, but some employers experienced processing delays. For example, the Department of Labor (Labor) processed 63 percent of applications in a timely manner in FY 2011, but 37 percent were processed after the deadline, including 7 percent that were approved less than 15 days before workers were needed. This left some employers little time for the second phase of the application process, which is managed by the Department of Homeland Security (DHS), and for workers to obtain visas from the Department of State (State). Although workers can apply for visas online, most of the H-2A process involves paper handling, which contributes to processing delays. In addition, employers who need workers at different times of the season must repeat the entire process for each group of workers. Although the agencies lack data on the reasons for processing delays, employers reported delays due to increased scrutiny by Labor and DHS when these agencies implemented new rules and procedures intended to improve program integrity and protect workers.
For example, in FY 2011, Labor notified 63 percent of employers that their applications required changes or additional documentation to comply with its new rules, up sharply from previous years. Federal agencies are taking steps to improve the H-2A application process. Labor and DHS are developing new electronic application systems, but both agencies' systems have been delayed. Labor also recently began using e-mail to resolve issues with employers, and all three agencies provided more information to employers to clarify program requirements. Even with these efforts, some employers view Labor's decisions as inconsistent. For example, some employers received different decisions about issues such as whether they can require workers to have experience in farm work and questioned the methods states used to decide whether the job qualifications in their applications were acceptable. GAO found that states used different methods to determine acceptable qualifications, which is allowed under Labor's guidance. GAO recommends that (1) Labor and DHS use their new electronic application systems to collect data on reasons applications are delayed and use this information to improve the timeliness of application processing; (2) Labor allow employers to submit one application for groups of similar workers needed in a single season; and (3) Labor review and revise, as appropriate, its guidance to states regarding methods for determining the acceptability of employment practices in employers' applications. DHS and Labor agreed with the recommendation to collect additional data and Labor agreed with the recommendation to update its guidance. Labor disagreed with the recommendation that it allow employers to apply once per season. GAO believes the recommendation is still valid and that a single application does not preclude timely testing of the labor market as workers are needed.
Wind energy is generated when wind turbines convert the kinetic energy in the wind into mechanical power, which can be used to generate electricity or for specific tasks such as grinding grain or pumping water. Electricity generated from wind can be used for other stand-alone purposes, such as charging a battery, or can flow to consumers from the facilities where it is generated through the networks that carry electricity, including wires, substations, and transformers (i.e., the grid). For utility-scale sources of wind energy, a large number of turbines are usually built close together to form a wind farm that provides grid power. Stand-alone or distributed turbines are generally smaller scale turbines used for purposes such as powering communications equipment or generating electricity for local use by farmers. Areas with plentiful wind resources are often distant from consumer markets, and access to transmission is essential to bring electricity generated from wind to market. In addition, because of the intermittent availability of wind energy, integrating increasing amounts of wind energy into the electric grid while maintaining its reliable operation requires added efforts by federal agencies, electric grid operators, utilities, and regulators. Another important issue in the development of wind projects is the siting and permitting process, by which locations are chosen and permits are issued for wind turbines, while considering projects' potential effects on the environment and the competing uses for the land, airspace, or waterways the projects may require. Innovation in wind energy technology takes place across a spectrum of activities, which we refer to as technology advancement activities, and which include basic research, applied research, demonstration, commercialization, and deployment.
For purposes of this report:

- basic research includes efforts to explore and define scientific or engineering concepts or to investigate the nature of a subject without targeting any specific technology;
- applied research includes efforts to develop new scientific or engineering knowledge to create new and improved technologies;
- demonstration includes efforts to operate new or improved technologies to collect information on their performance and assess readiness for widespread use;
- commercialization includes efforts to transition technologies to commercial applications by bridging the gap between research and demonstration activities and venture capital funding and marketing activities; and
- deployment includes efforts that facilitate or achieve widespread use of technologies in the commercial market.

Wind energy technology advancement activities are financed through both public and private investment. According to a Congressional Budget Office report, without public investment, the private sector's investment in technology advancement activities is likely to be inefficiently low from society's perspective because firms cannot easily capture the "spillover benefits" that result, particularly at the early stages of developing a technology. In these stages, technology advancement activities can create fundamental knowledge leading to numerous benefits for society as a whole but not necessarily for the firms that invested in the activities. For example, basic research can create general scientific knowledge that is not itself subject to commercialization but that can lead to multiple applications that private companies can produce and sell. As activities get closer to the commercialization and deployment stages, the private sector may increase its support because its return on investment is likely to increase. Federal investment and policies can have a significant impact on wind development.
For example, a key tax incentive for the construction of wind projects—the Energy Production Credit (also known as the Production Tax Credit, or PTC)—has periodically expired and then been extended. In years following its expiration, new additions of wind energy capacity fell dramatically, as shown in figure 1 below. Many states have also enacted policies affecting the development of wind energy, in part to attract investment within their borders. These policies include tax credits, grants, loans, and mandates such as RPSs requiring a portion of the electricity consumed or generated in a state to be from renewable sources. Improvements in technology and external market factors also affect the development of wind energy. According to a report from DOE’s Office of Energy Efficiency and Renewable Energy (EERE), recent improvements in the cost and performance of wind energy technologies contributed to the growth of wind energy in 2011. For example, improvements such as taller towers and larger rotor diameters in wind turbines have improved their efficiency. However, according to the EERE report, continued low natural gas prices and modest growth in electricity demand, among other factors, may dramatically slow new installations of wind turbines in 2013. In addition, wind energy must compete in the market with other energy sources—renewable and nonrenewable—that are also receiving subsidies. For example, in a 2011 report, EIA estimated that the federal government provided nearly $6.7 billion in subsidies for coal, natural gas, petroleum liquids, and nuclear energy in fiscal year 2010. We identified 82 federal wind-related initiatives, with a variety of key characteristics, implemented by nine agencies in fiscal year 2011. Five agencies—DOE, the Department of the Interior (Interior), the Department of Agriculture (USDA), the Department of Commerce (Commerce), and Treasury—collectively implemented 73 of these initiatives. 
In fiscal year 2011, wind-related initiatives incurred about $2.9 billion in obligations for activities specifically related to wind. In addition to initiatives that obligated funds, Treasury's wind-related tax expenditure initiatives provided estimated tax subsidies of at least $1.1 billion for activities specifically related to wind, although complete data on wind-related tax subsidies were not available. The initiatives supported a range of wind issues, including siting and permitting, offshore wind, and, most commonly, utility-scale and distributed land-based wind. They also supported a range of technology advancement activities, from basic and applied research to, most commonly, deployment. The majority of initiatives provided funding or other direct support for energy providers, developers, or manufacturers, and less than half of the initiatives supported other types of recipients such as public and private researchers or individuals. Initiatives supporting deployment accounted for all tax subsidies and nearly all obligations related to wind in fiscal year 2011. In particular, a tax expenditure and a grant initiative, both at Treasury, accounted for nearly all federal financial support related to wind. Of the nine agencies that implemented the 82 federal wind-related initiatives we identified in fiscal year 2011, five lead agencies—DOE, Interior, USDA, Commerce, and Treasury—were collectively responsible for 73 (89 percent) of the initiatives. (See app. II for a full list of the initiatives.) The remaining four agencies—the Environmental Protection Agency (EPA), the Federal Energy Regulatory Commission (FERC), the National Science Foundation (NSF), and the Small Business Administration (SBA)—each had three or fewer initiatives. Figure 2 shows the number of initiatives by agency. Around half of the initiatives—43 of 82—began supporting wind energy in fiscal year 2008 or before.
For instance, the PTC, which provides an income tax credit based on the amount of energy produced at wind and other qualified facilities, was first enacted under the Energy Policy Act of 1992. However, several key initiatives began supporting wind energy more recently as part of the Recovery Act or other recent legislation. For instance, Section 1603 of the Recovery Act created Treasury's Payments for Specified Energy Property in Lieu of Tax Credits initiative (Section 1603 program), which provides cash payments of up to 30 percent of the total eligible costs of wind and certain other renewable energy facilities, in lieu of tax credits for energy investment or production. Key initiatives such as these and the PTC for wind facilities have recently expired or are scheduled to expire at the end of 2013. (See app. II for a full list of initiatives, including information on their expiration dates.) The majority of the wind-related initiatives we identified supported a range of renewable energy sources in addition to wind, as well as other activities such as energy efficiency projects or rural development projects. Specifically, 16 initiatives (20 percent) supported wind energy either exclusively or primarily, and 51 initiatives (62 percent) supported other renewable energy sources or other activities either primarily or equally with wind energy. For instance, initiatives that exclusively or primarily supported wind energy included several under EERE's Wind Energy Program that focused on research, development, and testing to improve the performance, lower the costs, and accelerate the deployment of wind technologies, and several at Interior's Bureau of Ocean Energy Management, Regulation, and Enforcement (BOEMRE) that focused on facilitating the development of offshore wind through resource assessments, environmental impact studies, and granting of leases and rights-of-way for projects.
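The two subsidy mechanisms described above differ in structure: the Section 1603 program makes a one-time payment tied to a facility's cost, while the PTC is tied to ongoing electricity production (2.2 cents per kilowatt hour in 2011, as noted later in this report, and generally claimable over a facility's first 10 years of output). The sketch below illustrates the arithmetic for a hypothetical 100 MW wind farm; the project's cost and capacity factor are illustrative assumptions, not figures from this report.

```python
# Illustrative comparison of the two subsidy mechanisms.
# Rates (30% for Section 1603; 2.2 cents/kWh for the 2011 PTC) are from
# this report; the example project itself is hypothetical.

def section_1603_payment(eligible_cost, rate=0.30):
    """One-time cash payment in lieu of tax credits (up to 30% of eligible cost)."""
    return eligible_cost * rate

def ptc_credit(annual_kwh, years=10, rate_per_kwh=0.022):
    """Per-kWh income tax credit on energy produced; the PTC is generally
    claimed over a facility's first 10 years of production."""
    return annual_kwh * rate_per_kwh * years

# Hypothetical 100 MW wind farm: $200 million eligible cost and a
# 35% capacity factor, i.e., about 306.6 million kWh per year.
cost = 200_000_000
annual_kwh = 100_000 * 8760 * 0.35  # kW * hours/year * capacity factor

print(f"Section 1603 payment: ${section_1603_payment(cost):,.0f}")
print(f"PTC over 10 years:    ${ptc_credit(annual_kwh):,.0f}")
```

Under these assumptions the one-time Section 1603 payment ($60 million) and the 10-year stream of PTC credits (about $67 million) are of similar magnitude, which is consistent with developers weighing the two options against each other.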
In contrast, wind-related tax expenditures such as the PTC, as well as several initiatives enacted or expanded under the Recovery Act such as Treasury’s Section 1603 program, supported a range of renewable energy sources and, in some cases, other sources such as nuclear energy; energy efficiency projects related to buildings or vehicles; or carbon capture and storage projects involving coal or other fossil fuel sources. In responding to our questionnaire, agency officials reported that they obligated around $2.9 billion through their initiatives in fiscal year 2011 for activities specifically related to wind. These obligations data represent a mix of actual obligations and estimates. Officials for 39 initiatives (about 48 percent) reported actual obligations, officials for 13 initiatives (about 16 percent) reported estimated obligations, and officials for 21 initiatives (about 26 percent) were not able to provide estimated or actual obligations. Officials who provided estimates or were unable to provide obligations data noted that the accuracy or the availability of the obligations data was limited because, for example, isolating the obligations for activities specifically related to wind can be difficult. In addition, among the 21 initiatives for which no wind-specific obligations data were reported, agency officials for several of them reported, for example, that they recovered their costs from power customers, or they provided loan guarantees whose costs were offset by fees paid by lenders. As shown in table 1, Treasury was responsible for nearly 94 percent of the total reported obligations (about $2.7 billion of $2.9 billion), all of which were due to its Section 1603 program. In addition, Treasury’s nine other wind-related initiatives were tax expenditures that provided estimated tax subsidies totaling at least $1.1 billion for activities specifically related to wind, according to available estimates. 
This amount is based almost solely on subsidies provided through the PTC, which was the only one of these nine initiatives for which complete estimates of wind-specific tax subsidies were available for 2011. Both Treasury and the Joint Committee on Taxation develop estimates of tax subsidies provided through these tax expenditure initiatives; however, the initiatives support a range of renewable and other energy sources, and wind-specific estimates are not available for most of them. (See app. III for a list of these tax expenditures and the available estimates of revenue losses related to wind energy.) The 82 initiatives we identified supported a range of issues related to wind, and most initiatives supported more than one wind issue. The wind issues most commonly supported were utility-scale land-based wind (49 initiatives) and distributed land-based wind (45 initiatives). See table 2 below for the number and percentage of initiatives supporting each wind issue. Individual initiatives tended to support a range of wind issues. Specifically, 63 initiatives (77 percent) supported more than one wind issue, and 46 of these (56 percent of all initiatives) supported three or more wind issues. For instance, USDA's High Energy Cost Grant Program provides grants for energy facilities and infrastructure serving rural communities with average home energy costs exceeding 275 percent of the national average. Officials responding to our questionnaire reported that these grants can support utility-scale and distributed land-based wind, transmission, and grid integration. Similarly, DOE's State Energy Program provides financial and technical assistance to state governments for a variety of renewable energy-related activities across all six wind issues, according to agency officials.
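The High Energy Cost Grant Program's eligibility test described above is a simple threshold comparison against the national average. The sketch below illustrates that arithmetic; the dollar amounts are hypothetical, since the report does not cite actual energy cost figures.

```python
# Sketch of the eligibility threshold for USDA's High Energy Cost Grant
# Program: communities qualify when average home energy costs exceed
# 275 percent of the national average. Dollar figures are hypothetical.

NATIONAL_AVG_HOME_ENERGY_COST = 2000.0  # hypothetical annual cost, in dollars
THRESHOLD_MULTIPLIER = 2.75             # 275 percent of the national average

def is_eligible(community_avg_cost, national_avg=NATIONAL_AVG_HOME_ENERGY_COST):
    """True if the community's average home energy cost exceeds
    275 percent of the national average."""
    return community_avg_cost > national_avg * THRESHOLD_MULTIPLIER

# With a $2,000 national average, the cutoff is $5,500:
print(is_eligible(6000.0))  # above the cutoff
print(is_eligible(5000.0))  # below the cutoff
```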
The 82 initiatives we identified also supported technology advancement activities ranging from basic and applied research through deployment, which was the most commonly supported technology advancement activity; 58 initiatives (71 percent) included support for deployment. See table 3 below for the number and percentage of initiatives supporting each technology advancement activity. Our analysis showed that 39 (48 percent) of the 82 initiatives supported only one type of technology advancement activity. Another 39 initiatives (48 percent) supported more than one type of technology advancement activity, and of these, 5 initiatives supported all five. For example, Commerce’s Joint Wind Energy Program: Atmospheric Velocity Gradients initiative supports a single technology advancement activity—applied research—through studies focused on improving predictions of wind energy production from winds at various heights. In contrast, Commerce’s Green Technology Pilot Program supported all types of technology advancement activities except basic research. The program supported investment in wind and other “green” technologies through expedited reviews of patent applications, allowing for earlier intellectual property protection. The majority of initiatives supported recipients, generally in the private sector, that provide electricity generated from wind to consumers, develop wind energy generation projects, or manufacture wind-related equipment, according to agency officials responding to our questionnaire. Specifically, 57 initiatives (70 percent) provided funding or other direct support to energy providers, developers, or manufacturers. Fewer than half of the initiatives provided funding or other direct support to recipients such as public and private researchers (35 initiatives), states and other governmental organizations (34 initiatives), and individuals (12 initiatives). 
Around half of the initiatives—44 initiatives (54 percent)—supported one type of recipient, while the remaining 38 initiatives supported multiple types of recipients. In terms of federal financial support, deployment was the primary focus of federal efforts to promote wind energy. Of the reported $2.9 billion in actual and estimated obligations for wind-related activities in fiscal year 2011, $2.86 billion (99 percent) was obligated by the 58 initiatives that included support for deployment. As previously noted, approximately 94 percent of total wind-related obligations—just over $2.7 billion—was obligated by Treasury’s Section 1603 program for the deployment of projects. Other initiatives that supported deployment activities obligated $147 million. In addition to obligations, all nine of Treasury’s wind-related tax expenditures—with estimated tax subsidies of at least $1.1 billion for activities specifically related to wind—included support for deployment. These tax subsidies were primarily provided through the PTC, which, in 2011, provided an income tax credit of 2.2 cents per kilowatt hour for energy produced from wind and certain other renewable energy sources. In addition, deployment was the most commonly supported technology advancement activity at the five lead agencies. See figure 3 below for the number of initiatives supporting each technology advancement activity at these agencies. In addition to initiatives that support deployment of wind energy technologies by directly funding or providing tax subsidies for the construction or operation of wind facilities, some initiatives that we identified supported the deployment of these technologies indirectly. This indirect support includes facilitating the buying and selling of wind technologies or wind energy, or encouraging deployment through policies and regulations. Examples include the following: Commerce’s International Trade Administration implements an International Buyer Program, which supports U.S. 
companies at trade shows—including several major shows focused on renewable energy—by recruiting and escorting foreign buyer delegations to meet with U.S. companies. EPA’s Green Power Partnership supports deployment of wind and other renewable energy technologies by encouraging organizations and individuals to purchase renewable energy through outreach, education, and technical support. DOE’s Division of Permitting, Siting, and Analysis implements an initiative providing technical and financial assistance to state and regional entities, such as public utility commissions and state legislatures, to help develop renewable energy policies and portfolio standards, among other things. The 82 wind-related initiatives we identified were fragmented across agencies, and most had overlapping characteristics; although officials from about half reported formally coordinating, several initiatives financing deployment of wind facilities have provided some duplicative financial support. The initiatives were fragmented because they were implemented across nine agencies and were involved in the same broad area of national need. Most initiatives overlapped to some degree with at least one other initiative because they had at least one wind issue, technology advancement activity, type of recipient, and type of goal in common, but such overlap did not necessarily lead to duplication of efforts because initiatives sometimes differed in meaningful ways. In addition, officials from about half of all initiatives reported formally coordinating with other wind-related initiatives. Such coordination can, in principle, reduce the risk of unnecessary duplication and improve the effectiveness of federal efforts. However, we identified seven initiatives that have provided duplicative support—financial support from multiple initiatives to the same recipient for deployment of a single project.
Specifically, wind project developers have, in many cases, combined the support of more than one Treasury initiative and, in some cases, have received additional support from smaller grant or loan guarantee programs at DOE or USDA. We also identified three other initiatives that did not fund any wind projects in fiscal year 2011 but that could, on the basis of the initiatives’ eligibility criteria, be combined with one or more initiatives to provide duplicative support. Of the 10 initiatives, those at Treasury accounted for over 95 percent of the federal financial support for wind in fiscal year 2011. The 82 wind-related initiatives we identified were fragmented because they were implemented across nine agencies and were involved in the same broad area of national need: promoting or enabling the development of wind energy. We found that initiatives supporting deployment in particular were spread about evenly across the five lead agencies—each had between 10 and 13 initiatives that supported deployment. In March 2011, we reported that fragmentation has the potential to result in duplication of resources. However, such fragmentation is, by itself, not an indication that unnecessary duplication of efforts or activities exists. For example, in our March 2011 report, we stated that there can be advantages to having multiple federal agencies involved in a broad area of national need—agencies can tailor initiatives to suit their specific missions and needs, among other things. Across all initiatives, we found that 68 (83 percent) overlapped to some degree with at least one other initiative because they supported similar wind issues, technology advancement activities, and recipients, and had similar goals. The following are several examples of overlapping initiatives: Deployment of utility-scale land-based wind facilities by the energy industry. 
Seventeen initiatives provided financial support for the construction or use of utility-scale land-based wind facilities to energy companies. For instance, USDA’s Direct and Guaranteed Electric Loan Program provides loans and loan guarantees to establish and improve electric service in rural areas, including through utility-scale on-grid wind and other renewable energy systems. Similarly, the SBA’s Certified Development Company/Section 504 Loans initiative guarantees SBA loans to businesses for, among other things, energy efficiency and renewable energy projects, including utility-scale land-based wind projects. Applied research to facilitate the integration of wind energy into the electric power grid. Five initiatives provided funding or support to public or private researchers to conduct applied research related to the integration of wind energy into the electric grid. For instance, DOE’s Grid-Scale Rampable Intermittent Dispatchable Storage program funds efforts to develop new technologies that enable widespread use of cost-effective grid-scale energy storage, particularly technologies that mitigate variability in energy generated from renewable sources such as wind and solar. Similarly, the National Science Foundation’s Emerging Frontiers in Research and Innovation initiative provides grants for interdisciplinary engineering research with the potential to create a significant impact in meeting national needs. Specifically, in fiscal year 2011, the initiative funded research on compressed air technology for storing excess energy from offshore wind turbines to alleviate power supply and demand imbalances on the electric grid during the day. Deployment of offshore wind technologies by state, local, and other governmental organizations. Five initiatives helped address policy and regulatory barriers to deployment of offshore wind through support for state, local, and other governmental organizations.
For instance, Interior’s Renewable Energy Program includes a Development and Implementation initiative through which BOEMRE worked to authorize orderly, safe, and environmentally responsible renewable energy development on the outer continental shelf, while complementing ongoing state and local renewable energy efforts. BOEMRE’s efforts include assisting states in meeting goals established in RPSs through studies of potential development of particular states’ offshore areas, as well as coordination and information exchange with states and regional organizations. Similarly, Commerce’s MarineCadastre.gov initiative supports wind energy development and other uses of the outer continental shelf by providing mapping information for project planning and siting, which is intended to help developers identify and avoid potentially conflicting uses before creating development plans. Commerce’s efforts under this initiative in fiscal year 2011 included developing maps for a federal-state task force to facilitate decisions on wind energy development in federal waters. In addition, we identified several types of state initiatives that encourage development of wind and other renewable energy sources and share key characteristics with federal wind-related initiatives. Along with federal agencies, state governments implement initiatives that help energy companies finance deployment of utility-scale land-based wind facilities. These initiatives include state tax incentives such as production and investment tax credits. They also include state grant and loan programs, some of which were federally funded, according to DSIRE data. See figure 4 below for examples of these state and federal initiatives available in fiscal year 2011. States have also enacted a number of rules, regulations, or policies that encourage deployment of wind and other renewable energy sources, most notably RPSs.
Such standards do not provide direct financial support to particular wind projects; however, by requiring or encouraging that a percentage of the electricity consumed in a state be generated from renewable sources, they are designed to create market demand for electricity from sources such as wind. Recent economic studies we reviewed suggest that certain RPSs have increased development of renewable energy. Several financial professionals and agency officials with whom we spoke cited RPSs as strongly influencing wind energy development. They said that by creating demand for wind and other renewable energy, RPSs complement federal initiatives such as the PTC, which reduce the price of this energy. Currently, 37 states have RPSs that include wind. For instance, in California—which led the nation in new wind capacity added in 2011—state law requires electric utilities in the state to have 33 percent of their retail sales derived from eligible renewable energy resources, including wind, by 2020. In Texas—which led the nation in total installed wind capacity in 2011—state law requires 5,880 megawatts of total installed renewable energy capacity by 2015, including up to 5,380 megawatts of wind energy capacity. Overlap among initiatives does not necessarily lead to duplication because initiatives sometimes differ in meaningful ways. For instance, certain of Treasury’s tax expenditures that support deployment of utility-scale land-based wind facilities by the energy industry differ in the type of organization eligible for their support. Treasury’s Credit for Holding New Clean Renewable Energy Bonds, for example, helps tax-exempt entities such as not-for-profit electric utilities or cooperative electric companies finance capital expenditures for wind and other renewable energy facilities by providing tax credits to holders of bonds issued by those entities.
In contrast, the majority of Treasury’s other wind-related tax expenditures that support the deployment of utility-scale land-based wind facilities do so by providing taxable organizations with tax credits or other tax incentives for such projects. In addition, certain overlapping initiatives may provide a cumulative benefit for the deployment of wind projects but do not meet our definition of duplication because they do not provide financial support to the same recipient for a single wind project. For instance, Treasury officials noted that the Qualifying Advanced Energy Project Credit and the Section 1603 program could provide support across multiple stages of wind energy deployment if, for example, a manufacturing plant that produces wind turbines receives the credit and a wind facility that uses those turbines receives a Section 1603 grant. However, for purposes of our report, these initiatives do not provide duplicative support because they have different direct recipients. Officials from 43 (52 percent) of the 82 initiatives reported coordinating formally with other federal wind-related initiatives. Coordination was most prevalent among initiatives that reported having wind-specific goals—such as reducing the cost of wind technologies or facilitating the siting, leasing, and construction of new offshore wind projects. Specifically, of the 25 initiatives that reported having such goals, officials from 20 initiatives (80 percent) reported coordinating with other wind-related initiatives. In contrast, of the 57 initiatives that did not report having wind-specific goals, officials from 23 initiatives (40 percent) reported such coordination. Most of the initiatives for which officials reported coordinating—36 of 43—included coordination efforts with wind-related initiatives in other federal departments and independent agencies.
For example, officials from several agencies reported coordinating through the Interagency Rapid Response Team for Transmission, which was based on a 2009 joint memorandum of understanding between nine federal agencies. This team aims to improve coordination of federal permitting and reviews of transmission infrastructure projects that will help integrate wind and other renewable energy sources into the electric grid. Officials from 22 initiatives also reported coordinating their wind-related initiatives with state governments. As we have previously reported, coordination may reduce the risk of unnecessary duplication, and a lack of coordination can waste scarce funds and limit the overall effectiveness of the federal effort. We have also previously reported that while agencies face a range of barriers when they attempt to collaborate with other agencies—such as differing missions and incompatible processes—certain key practices can help agencies enhance and sustain federal collaboration. Several lead agencies implementing wind-related initiatives have formally coordinated overlapping initiatives, in some cases in a manner consistent with key practices, such as in the following examples: Identifying and addressing needs by leveraging resources: Agency officials reported leveraging resources such as the knowledge and expertise of other agencies in developing and implementing their own wind energy initiatives. For example, DOE has drawn on Treasury’s expertise through required consultations regarding the terms and conditions of loan guarantees DOE provides to applicants. In addition, USDA officials consulted with Treasury regarding the potential impact of tax laws on new provisions USDA was drafting for awarding certain grants. For some initiatives, agency officials reported leveraging the financial resources of other agencies to help ensure prudent use of funds, particularly in the case of loan guarantee programs. 
For example, according to an official for DOE’s loan guarantee programs, if a project qualifies for a Treasury Section 1603 grant, it is typical for DOE to require that a portion of the grant proceeds be used to repay the DOE-guaranteed debt. In addition, a USDA official for the Business and Industry Guaranteed Loan Program—which provides guaranteed loans to rural borrowers for projects that improve the economic and environmental climate in rural communities—said that the program provides information to applicants regarding other sources of funding available for wind projects, such as SBA loans or grants, or state-level sources of funding. Developing mechanisms to monitor, evaluate, and report on results: Agencies also engaged in collaborative efforts to create the means to monitor and evaluate their wind-related initiatives and report on their activities. For example, USDA reported coordinating programs that provide loans, grants, and loan guarantees for projects in rural communities both internally and with other agencies such as Commerce, DOE, and EPA through USDA’s Energy Council Coordinating Committee. Agency officials participating in this council share general information on energy-related programs, which helps support common performance reporting and budgeting processes. In addition, USDA data on its grants, loans, and other awards are available to other agencies and the general public through a web-based mapping tool that shows agencies and potential recipients where USDA is supporting renewable energy projects. We identified examples of utility-scale land-based wind projects that received duplicative support—financial support from multiple initiatives to the same recipient for a single project—from initiatives supporting deployment of wind facilities.
Specifically, we identified 10 initiatives that have provided or could provide duplicative support, as follows: Seven initiatives provided some duplicative support for wind projects, including three tax expenditures and a grant program implemented by Treasury, a loan guarantee program implemented by DOE, and two programs that provide grants, loans, and loan guarantees implemented by USDA. Although not all seven of these initiatives can be combined to support the same project, each of them has been combined with one or more of the others, with some limitations, to support a single project. Three other initiatives did not actually fund any wind projects in fiscal year 2011 but could provide duplicative support for wind projects going forward, based on the types of projects eligible for their support. These three initiatives were DOE’s Section 1703 Loan Guarantee Program (Section 1703 program) and USDA’s Business and Industry Guaranteed Loan Program and High Energy Cost Grant Program. Some of these 10 initiatives have recently expired, such as Treasury’s Section 1603 program and DOE’s Section 1705 Loan Guarantee Program (Section 1705 program), and several others are scheduled to expire for wind projects at the end of 2013, such as Treasury’s tax credits. However, the types of mechanisms these initiatives employ—tax expenditures, grants, and loan guarantees—are employed by other initiatives that are not expiring and may be considered by policymakers as a means for supporting wind energy through future initiatives. In addition, duplication of financial support among these initiatives may not be limited to wind projects because all of these initiatives supported a range of renewable energy projects. Of the 10 initiatives, Treasury’s 4 initiatives accounted for over 95 percent of the total federal financial support for wind in fiscal year 2011. See table 4 for brief descriptions of these initiatives.
(For more detailed descriptions of these initiatives, including information on their expiration dates, see app. II.) According to interviews with agency officials and financial professionals and information from agency websites, wind project developers have used various combinations of these initiatives to help finance specific wind projects. For instance, in many cases, developers combined the support of more than one Treasury initiative and, in some cases, they received additional support from smaller DOE or USDA grant or loan guarantee programs. Among Treasury initiatives, although the PTC, Energy Investment Credit (also known as the Energy Investment Tax Credit, or ITC), and Section 1603 program cannot be combined for a specific project, they all support wind projects for which developers also typically claim Accelerated Depreciation Recovery Periods for Specific Energy Property (accelerated depreciation), according to financial professionals and a Treasury official. In addition, DOE’s Section 1705 program has provided loan guarantees for four utility-scale wind generation projects, all of which have received grants under Treasury’s Section 1603 program, and all of which are eligible to claim accelerated depreciation. Similarly, USDA’s grant and loan programs have supported projects that also received support under other initiatives. For example, USDA’s Direct and Guaranteed Electric Loan Program provided a loan guarantee for a $204 million loan for a wind project that was also awarded an $88 million grant under the Section 1603 program. In addition, USDA’s Rural Energy for America Program (REAP) provides grants and loan guarantees for renewable energy projects. A 2006 report by DOE’s Lawrence Berkeley National Laboratory (LBNL) found that nearly all wind projects with a capacity of over 100 kilowatts that received REAP grants from 2003 through 2005 also intended to claim tax credits under the PTC. 
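The PTC claimed by the projects discussed above scales with a project's electricity output; as noted earlier, the credit was 2.2 cents per kilowatt-hour in 2011. The arithmetic can be sketched as follows, where the nameplate capacity and capacity factor are hypothetical assumptions for illustration, not figures for any project in this report:

```python
# Rough sketch of one year's PTC value for a hypothetical wind project,
# using the 2.2 cents/kWh rate in effect for wind in 2011.
# Capacity and capacity factor below are illustrative assumptions.

PTC_RATE = 0.022  # dollars per kilowatt-hour (2011 rate for wind)

def annual_ptc_value(capacity_mw, capacity_factor, hours_per_year=8760):
    """Estimate one year's PTC value from nameplate capacity and capacity factor."""
    kwh_generated = capacity_mw * 1000 * capacity_factor * hours_per_year
    return kwh_generated * PTC_RATE

# A hypothetical 100 MW wind farm operating at a 35% capacity factor
# generates about 306.6 million kWh per year, worth roughly $6.7 million
# in credits at the 2011 rate.
print(annual_ptc_value(100, 0.35))
```

Because the credit is earned per kilowatt-hour produced, its value depends on actual generation, which is one reason the LBNL analysis above found the PTC's worth varying with a project's capacity factor.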
Although these initiatives have, in some cases, provided duplicative support, their support may address different needs of wind project developers or the communities they serve, according to agency officials and financial professionals with whom we spoke, and analyses by DOE’s national laboratories. For example, unlike the PTC and ITC, the Section 1603 program allows wind project developers to directly claim a cash grant regardless of their tax liability, thus avoiding the potential need to partner with financial institutions or other investors who provide tax equity. The Section 1603 program was created in part to address a perceived lack of tax equity following the recent financial crisis, according to Treasury guidance and financial professionals. Furthermore, by providing a cash grant, the program allows developers to receive the full amount of the government subsidy rather than sharing this subsidy with tax equity investors. DOE’s Section 1705 program, meanwhile, provided financing in many cases for innovative projects that were seen as too risky to obtain affordable financing from private lenders, according to DOE officials who administered the program. In addition, as with the Section 1703 program, Section 1705 loan guarantees can address projects’ needs for construction and long-term debt financing, while grants under the Section 1603 program and support from Treasury’s tax expenditures are available only when the related project has been constructed and is operational. Therefore, the loan guarantees helped support many projects that might not otherwise have reached the development stage—such as being placed in service or beginning to generate electricity—required to receive tax credits or Section 1603 grants. 
In addition, USDA’s grant, loan, and loan guarantee initiatives are designed to encourage projects that serve the needs of rural communities, including by providing reliable, affordable electricity and, more generally, stimulating rural economic development. Moreover, although these initiatives can be used together in various combinations to help finance the same wind project, several include provisions—often referred to as antidouble-dipping provisions—that limit the amount of financial support provided to a wind project when combined with another initiative. For instance, the PTC includes a provision requiring that the amount of the credit be reduced for federal or state grants, tax-exempt bond financing, subsidized energy financing, or other federal tax credits received for use in connection with the project. LBNL’s financial modeling of wind projects for its 2006 report suggests that large wind projects receiving REAP grants and claiming the PTC would have seen the value of the PTC reduced by 11 to 46 percent of the grant’s face value, depending on the project’s capital cost and capacity factor. Similarly, antidouble-dipping provisions reduce the value of the ITC and Section 1603 grants—through reductions to the portion of project costs on which they are calculated—for projects that also receive government grants that are not taxed as income, including, in some cases, REAP or other federal and state grants for wind projects. Grants not taxed as income also reduce a project’s depreciable basis, or the dollar amount that can be depreciated for tax purposes. In addition to limitations on combining Treasury’s tax expenditures with other sources of financial support, officials from USDA and DOE told us that their agencies consider some other sources of federal support a wind project has received or will receive in determining whether or how much to award under their grant, loan, and loan guarantee programs.
The officials for some of these programs said that they limit the value of support they provide, while officials from other programs, by law, must deny support altogether when applicants are receiving funding from other federal sources. For instance, the appropriations laws applicable to the Section 1703 program prohibit the issuance of loan guarantees for projects that are expected to receive certain other sources of federal support. Such sources of support include grants from certain USDA initiatives and federal “off-take arrangements,” whereby federal agencies agree to purchase power from the projects. Similarly, USDA officials said that, under REAP, the total amount of grants provided for projects from REAP and other federal sources generally cannot exceed 25 percent of project costs. However, this limit does not apply to grants provided under Treasury’s Section 1603 program, according to USDA officials. In addition to these 10 federal initiatives, the state tax credits, grant programs, and loan programs previously discussed can be used, in some cases, to provide financial support for deployment of a wind project in combination with one or more federal initiatives. For instance, it is possible for a single wind project to receive federal support from a Section 1603 grant, accelerated depreciation, and a DOE loan guarantee, along with state support from tax incentives and indirect subsidies due to a state RPS. Furthermore, DSIRE staff and a financial professional with whom we spoke said that states may often structure their initiatives so that recipients can fully leverage sources of federal support, such as by designing the initiatives to avoid triggering federal antidouble-dipping provisions. In addition, under Treasury’s published guidance on the PTC provision reducing the amount of the credit for certain other sources of federal or state support, state or local tax credits do not trigger a reduction in the value of the PTC. 
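The basis reductions behind these antidouble-dipping provisions can be sketched with a simple calculation. In the sketch below, the 30 percent credit rate and all dollar amounts are illustrative assumptions, not figures from this report or the tax code:

```python
# Sketch of how a government grant not taxed as income reduces the basis
# on which an investment credit (such as the ITC or a Section 1603 payment)
# is computed, and reduces a project's depreciable basis. The 30% credit
# rate and all dollar figures are assumptions for illustration only.

def credit_on_reduced_basis(project_cost, untaxed_grants, credit_rate_pct=30):
    """Credit computed on project costs net of grants not taxed as income."""
    return (project_cost - untaxed_grants) * credit_rate_pct // 100

def depreciable_basis(project_cost, untaxed_grants):
    """Untaxed grants also reduce the amount that can be depreciated."""
    return project_cost - untaxed_grants

cost = 10_000_000   # hypothetical capital cost
grant = 500_000     # hypothetical untaxed federal or state grant

print(credit_on_reduced_basis(cost, 0))      # credit with no other grant: 3000000
print(credit_on_reduced_basis(cost, grant))  # credit on reduced basis: 2850000
print(depreciable_basis(cost, grant))        # reduced depreciable basis: 9500000
```

In this hypothetical, the $500,000 grant costs the project $150,000 of credit (30 percent of the grant), illustrating how the provisions offset, but do not eliminate, the benefit of stacking support from multiple initiatives.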
Even with antidouble-dipping provisions and other limitations on combining financial support from multiple initiatives for the same project, federal initiatives have provided cumulative financial support worth about half of project costs for many wind projects, according to financial professionals. For instance, financial professionals we spoke with estimated that the PTC and accelerated depreciation together provide nearly half of the capital costs required for a typical wind farm. Of this amount, 30 percent or more of the total capital costs is due to the PTC, according to financial professionals’ estimates. For projects receiving support from other federal grant or loan initiatives in addition to the PTC and accelerated depreciation, the value of federal financial support would comprise a larger portion of project costs. Also, as noted earlier, wind projects may receive financial support from state initiatives. For instance, according to a briefing memorandum from White House staff, the total estimated federal and state financial support for a large wind project in Oregon—including a Section 1705 loan guarantee, a Section 1603 grant, accelerated depreciation at the federal and state level, state tax credits, and an estimated premium paid for power due to a state RPS—is worth 65 percent or more of the project’s capital costs. In another example, estimates developed by management consultants for the energy industry and other clients suggest that federal financial support for a New Hampshire wind farm—including a Section 1603 grant, a Section 1705 loan guarantee, and accelerated depreciation—in combination with financial support from state initiatives is worth over half of the project’s capital costs.
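The cumulative-support estimates above amount to summing the value of each support component and expressing the total as a share of capital cost. The tally below is a back-of-the-envelope sketch; every component value is a hypothetical placeholder, not a figure for the Oregon or New Hampshire projects:

```python
# Back-of-the-envelope tally of federal and state support as a share of a
# wind project's capital cost, in the spirit of the estimates cited above.
# Every value here is a hypothetical placeholder.

def support_share(capital_cost, components):
    """Total support from all components as a fraction of capital cost."""
    return sum(components.values()) / capital_cost

capital_cost = 100.0  # hypothetical capital cost, in millions of dollars
components = {
    "section_1603_grant": 30.0,        # assumed 30% cash grant
    "accelerated_depreciation": 15.0,  # assumed present value of the tax benefit
    "state_support": 8.0,              # assumed state credits and RPS premium
}

print(f"{support_share(capital_cost, components):.0%}")  # prints 53%
```

Estimates of this kind depend heavily on assumptions such as the discount rate used to value depreciation benefits and the premium attributed to an RPS, which is why the figures cited by financial professionals and the White House memorandum are presented as approximations.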
Agencies implementing the 10 initiatives that have provided or could provide duplicative support allocate support to projects on the basis of the initiatives’ goals or eligibility criteria, but the extent to which agencies assess applicant need for the support is unclear because we found they do not document assessments. DOE and USDA—which have discretion, to the extent allowed by their statutory authority, over the projects they support through 6 of the 10 initiatives—allocate support to projects based on the projects’ ability to meet initiative goals such as reducing emissions or benefitting rural communities, as well as other criteria such as financial and technological feasibility. Treasury, meanwhile, provides support to projects through the remaining 4 initiatives based on the eligibility criteria in the tax code. DOE and USDA consider applicant need for the financial support of some initiatives, according to officials. However, we found that neither agency documents assessments of applicant need for any of their initiatives; therefore, the extent to which they use such assessments to determine how much support to provide is unclear. Treasury does not generally have discretion in allocating support to projects and, as such, does not assess need for the support of its initiatives. While the support of these initiatives may be necessary, in many cases, for wind projects to be built, because the agencies do not document assessments of need, it is unclear, in some cases, whether the entire amount of federal support provided was necessary to build wind projects. In the event that some wind projects receive more federal funding than is required to induce them to be built, this additional funding could potentially be used to induce additional projects to be built or simply withheld, thereby reducing federal expenditures. 
Through their six initiatives, DOE and USDA allocate support to projects based on projects’ ability to meet initiative goals, along with other criteria such as financial and technological feasibility. For instance, DOE’s loan guarantee solicitation for its Section 1703 program set forth initial screening criteria for projects, including that they employ new or significantly improved technology compared to commercially available technologies, and that they be ready to proceed through the loan approval process (i.e., equity has been committed to the project, construction and other contracts have been negotiated, and permits have been secured). For evaluating and scoring projects that meet the initial screening criteria, DOE’s solicitation also set forth two equally weighted criteria related to the program’s goals: a project’s expected reduction or avoidance of greenhouse gas emissions relative to its cost, and a project’s support for clean energy jobs and manufacturing. In line with program goals, USDA allocates the support of its four initiatives to projects based on their expected benefits for rural and other eligible communities, along with other factors such as technological feasibility and expected performance. For instance, under its High Energy Cost Grant Program, USDA evaluates which projects to support based on their abilities to address community needs such as those related to economic hardships, their technological design and feasibility, their expected performance measures including the amount of renewable energy they will produce, and other factors. Similarly, USDA evaluates applications for REAP grants or loan guarantees based on factors such as projects’ support for small agricultural producers or businesses, expected energy production, and technical merit. As with DOE’s Section 1703 program, USDA’s loan guarantee programs also allocate support to projects based on their ability to repay their debt.
Unlike DOE and USDA, Treasury generally does not have any discretion regarding which projects receive the support of its initiatives. Taxpayers who are eligible for support under the Internal Revenue Code are generally entitled to that support. According to agency officials and program guidance, DOE and USDA consider applicant need for the financial support of some of their initiatives. For instance, the solicitation for loan guarantee applications under DOE’s Section 1703 program states that DOE will view unfavorably applications for projects that could be fully financed on a long-term basis by commercial banks or others without a federal loan guarantee. DOE officials told us they require applicants to provide a letter stating whether their projects can be financed without a federal loan guarantee, although this self-certification by applicants does not require they document any efforts to obtain private financing. In addition to these letters, DOE officials said their conversations with wind project developers, along with their broader understanding of the lending community and project risks, allow them to determine whether projects would likely be able to obtain private financing without a loan guarantee. USDA also considers applicants’ need for support from some of its initiatives according to agency officials. For example, application guidance for USDA’s High Energy Cost Grant Program states that the program assesses project information including other sources of funding expected for the project to determine its financial viability, the level of community support for the project, and the community’s need for funds. Similarly, officials from the Direct and Guaranteed Electric Loan Program said that prior to loan approval they assess projects’ financial information to determine their financial feasibility and to avoid lending more than is necessary for project completion. 
However, unlike DOE’s Section 1703 program, we did not identify program documents for USDA’s initiatives—such as guidance for applicants or criteria for evaluating projects—stating that applicant need is a factor in allocating support. Treasury generally does not have any discretion in allocating support to projects and, as such, does not assess applicant need for the support of its initiatives. Even with these DOE and USDA efforts, it is unclear to what extent DOE and USDA assess applicant need for the financial support of their initiatives because we found they do not document such assessments. The federal standards for internal control include control activities—such as documentation of significant transactions—which are essential for proper stewardship and accountability for government resources. Because, as we found, the agencies do not document these assessments, it is unclear to outside parties how they considered the financial need of applicants when determining what amount of support to provide. Moreover, it is unclear if the incremental support some initiatives provided was always necessary to build projects. In the event that some wind projects receive more federal funding than is required to induce them to be built, this additional funding could potentially be used to induce additional projects to be built or simply withheld, thereby reducing federal expenditures. The following are examples where it was unclear whether initiatives’ incremental support was needed for projects to be built: According to the White House briefing memorandum noted above, because of the tax subsidies and other federal and state support for the Oregon wind project, the return on the private equity invested in the project was estimated to be relatively high—around 30 percent. 
The memorandum further stated that this estimated return suggests the project would “likely move without the loan guarantee,” and “the alternative of private financing would not make the project non-viable.” It is unclear from our review whether the loan guarantee was needed for the project to be built because we found DOE made no documented assessment of need. In addition, a separate analysis of the same wind project by DOE suggested that officials concluded, given the amount of the project’s debt, it would have sufficient cash flow to repay its guaranteed loan without the incremental support of a Section 1603 grant, which it later received. Specifically, DOE approved a loan guarantee for the project in part based on its credit analysis, which was made under the assumption that the project would not need to make use of a Section 1603 grant to repay debt, and that neither DOE nor lenders for the project would have any claim on the grant. However, it is unclear whether the Section 1603 grant was needed for the project to be built because we found no documented assessment of need was made. Though the analyses from the White House memorandum and DOE question the project’s need for the combined support of the Section 1705 loan guarantee and Section 1603 grant, neither analysis questioned whether the project would have been built without either source of support. In another example, a developer of a wind project in Maine provided documentation in 2011 that it had sufficient funds to complete construction of the project without any additional source of capital, though it subsequently received a Section 1603 grant and a Section 1705 loan guarantee and was eligible for accelerated depreciation. Specifically, the developer provided this information to document its financial capacity in support of its permit application to the Maine Department of Environmental Protection, which later approved the permit for the project. 
However, because we found no documented assessments of need were made for the federal support this project received, it is unclear whether it could have been built with less support. Nonetheless, the incremental support agencies’ initiatives provide may be necessary for wind projects to be built, according to agency officials and financial professionals. For instance, concerning DOE’s initiatives, its loan guarantees allow developers to leverage federal resources to attract sources of equity and debt that would otherwise not be invested, according to DOE. Officials from the loan guarantee programs said that, without loan guarantees, wind project developers can have difficulty obtaining private loans due to the relatively long term of the fixed rate loans they use to finance their projects. Title XVII of the Energy Policy Act of 2005 allows DOE to provide guarantees for loans with terms of up to 30 years. According to DOE officials, there are constraints on the supply of private financing for large projects, and private lenders may consider such long-term loans to have greater risks and may be less likely to lend to such projects in the amounts required to fully finance the transactions. In addition, in commenting on a draft of this report, DOE officials said that long-term financing is necessary in order for debt payments to align with projects’ proceeds from agreements to sell their power over the long term, and is also necessary to avoid risks associated with changing interest rates and other risks that can arise from using shorter-term financing. For instance, they said that the Maine wind project developer’s filings with the state did not address any long-term financing needs for the project beyond its construction phase. With respect to USDA’s initiatives, USDA officials for some initiatives told us that their incremental support may be necessary for wind projects to be built, and for the projects to fully benefit rural communities. 
For instance, officials noted that, although well-qualified projects can generally find the financing they need in the private market, the cost of private financing would be higher than the cost of financing available through USDA’s loan and loan guarantee programs, which would likely impact electric utility rates for rural ratepayers. They also said that projects receiving support from the High Energy Cost Grant Program may not be built without its support, as it tends to serve isolated communities where available funding for such projects may be limited. Regarding the PTC and other Treasury initiatives, several financial professionals with whom we spoke said that the initiatives provide financial support for many projects that would likely not be built otherwise. For instance, they said that the PTC is necessary in order for many wind projects to be financially viable. Furthermore, although it was extended in 2013, prior to this extension, the financial professionals said the PTC’s scheduled expiration at the end of 2012 had caused developers and investors to suspend plans for future construction of or investment in wind projects. This expected slowdown in deployment of new wind projects is in line with historical evidence of prior PTC expirations being followed by decreases in new wind energy capacity additions. Treasury’s Section 1603 program has also been shown to support wind projects that would otherwise likely not have been built. According to an LBNL analysis, the program supported projects that would likely not have been built using the PTC if the grant were not available—projects that added as much as 2,400 megawatts of wind energy capacity in 2009. More broadly, according to financial professionals, wind project developers and investors evaluate the returns they could make on a range of potential projects. 
If the expected returns for wind projects are lower due to a decrease in federal support, developers and investors are more likely to pursue other types of projects—including solar or other renewable energy projects, as well as nonenergy projects—that benefit from federal subsidies and could provide higher returns. Faced with concerns about the nation’s reliance on imported oil, as well as fossil fuels’ contribution to global climate change, among other things, federal policymakers have increased the federal focus on and support for development of renewable energy sources, especially wind energy. At the same time, states have created demand for energy from renewable sources through initiatives such as RPSs, supplementing support provided by federal agencies. In fiscal year 2011, wind-related initiatives implemented by federal agencies were fragmented and, in many cases, overlapping. Further, we identified 10 initiatives that have provided or could provide duplicative support to deploy wind facilities. Though some of the 10 initiatives have recently expired or are scheduled to expire, other initiatives employing similar mechanisms such as tax expenditures, grants, and loan guarantees remain in place, and similar initiatives may be considered in the future as a means for supporting wind and other renewable energy sources. In the current fiscally constrained environment, it is especially important to allocate scarce resources where they can be most effective. In this context, it is important that agencies with discretion in implementing initiatives that have provided or could provide duplicative support—DOE and USDA—ensure that they allocate support through these initiatives to projects that would not be built otherwise. 
However, these agencies do not make documented assessments of whether or how much of their initiatives’ financial support is needed for projects to be built and, as a result, it is unclear to what extent they assess need in order to determine what amount of support to provide. Moreover, it is unclear whether the incremental support some initiatives provided was always necessary for wind projects to be built. To support federal agencies’ efforts to effectively allocate resources among wind projects, we recommend that the Secretaries of Energy and Agriculture, to the extent possible within their statutory authority, formally assess and document whether the incremental financial support of their initiatives is needed in order for applicants’ projects to be built and take this information into account in determining whether, or how much, support to provide. Such assessments could include, for example, information on the investors’ and developers’ projected rates of return on these projects, or documentation of applicants’ inability to secure private financing for projects. In addition, such assessments should consider the financial support available or provided to projects from other federal sources including tax expenditures and, to the extent practical, from state sources. In the event agencies lack discretion to consider this information in determining what financial support to provide, they may want to report this limitation to Congress. We provided a draft of this report to the Secretaries of Energy, Agriculture, and the Treasury for review and comment. DOE provided written comments, in which it agreed with our recommendation; these comments are summarized below and reprinted in appendix V. USDA’s Rural Development provided comments by e-mail on February 11, 2013, stating that USDA generally concurred with the information in our report related to its initiatives. 
In addition, DOE, USDA, and Treasury provided technical and clarifying comments, which we incorporated as appropriate. DOE stated in its written comments that it will now formally document its evaluation of applicants’ assertions regarding their inability to finance their projects without a federal loan guarantee, and it will clarify how it considers the financial need of applicants when determining what amount of support to provide. With regard to financing wind projects, DOE noted that Section 1603 grants do not provide capital for developers to use to construct projects, but rather the proceeds from the grants are only available when the related project construction is complete and the project is operational. In contrast, DOE noted that its loan guarantees provide construction and long-term debt financing. As we note in the report, these initiatives may address different needs of wind project developers, including the need for project financing prior to reaching the development stage required to receive tax credits or grants under the Section 1603 program. To emphasize DOE’s point, however, we added language to the report to make it clear that grants do not provide project sponsors with capital to construct their projects. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Energy, Agriculture, and the Treasury; the appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. 
Our objectives were to (1) identify wind-related initiatives implemented by federal agencies in fiscal year 2011 and their key characteristics; (2) assess the extent of fragmentation, overlap, and duplication, if any, among these initiatives, and the extent to which they were coordinated; and (3) examine how agencies allocate support to projects through their initiatives and the extent to which they assess applicant need for support. To inform our objectives, we reviewed our February 2012 report that identified federal agencies’ renewable energy initiatives and examined the federal roles the agencies’ initiatives supported in fiscal year 2010. That report identified nearly 700 initiatives that were implemented across the federal government, of which 296 initiatives supported wind energy. For purposes of this report, we generally only included those wind-related initiatives categorized under the research and development or commercialization and deployment federal roles in our February 2012 report. However, we included some initiatives categorized under the regulation, permitting, and compliance federal role if they had a clear focus on deployment of wind energy, such as through streamlining regulatory processes or fast-tracking permitting or other processes for wind projects. From this list of initiatives, we excluded those at certain agencies—such as the Departments of Defense, Homeland Security, and State—whose initiatives generally focused on development of wind energy and other technologies for use in a military, border security, or international aid setting, rather than for use in the domestic commercial energy market. For the remaining agencies and initiatives, we developed an initial questionnaire to collect information from officials regarding whether the fiscal year 2010 initiatives were still active and whether wind energy still received or was eligible for support under the initiatives in fiscal year 2011. 
We also asked officials to identify any additional initiatives that were active and for which wind energy was eligible for support in fiscal year 2011. If officials wanted to remove an initiative from our list, we asked for additional information to support the removal. Using the responses from this questionnaire, we identified 82 wind-related initiatives at nine agencies. To identify and describe the key characteristics of wind-related initiatives implemented by federal agencies, we developed a second questionnaire to collect information from officials responsible for the 82 initiatives. The questionnaire was prepopulated with information that was obtained from the agencies for GAO’s report on fiscal year 2010 renewable energy initiatives, including the initiative name, description, recipient type, and expiration or sunset date. We asked officials to confirm or modify this information as appropriate for fiscal year 2011. We requested additional information on the initiatives including their obligations, or revenue losses from tax expenditures for activities specifically related to wind; year in which they first supported wind energy; type of wind issues and technology advancement activities supported; initiative-wide and wind-specific goals; and efforts to coordinate with other wind-related initiatives. For a copy of our questionnaire, see appendix IV. We conducted pretests with officials from 12 initiatives across three agencies to ensure that respondents interpreted our questions in the way we intended (e.g., the questions were clear and unambiguous and terminology was used correctly), that the questionnaire was comprehensive and unbiased, and that respondents had the necessary information and ability to respond to the questions. An independent GAO reviewer also reviewed a draft of the questionnaire prior to its administration. On the basis of feedback from these pretests and independent review, we revised the questionnaire in order to improve its clarity. 
After completing the pretests, we sent the finalized questionnaires to the appropriate agency liaisons, who in turn sent the questionnaires to the appropriate officials. We received questionnaire responses for each of the 82 initiatives, resulting in a response rate of 100 percent. After reviewing the responses, we conducted follow-up e-mail exchanges or telephone discussions with agency officials when responses were unclear or conflicting. When necessary, we used the clarifying information provided by agency officials to update answers to questions to improve the accuracy and completeness of the data. To assess the reliability of obligations data, our questionnaire included questions on the data systems used to generate those data and any methodologies agencies used to develop estimates of obligations for their initiatives. In addition, to assess the reliability of data on tax subsidies provided by wind-related tax expenditures, we interviewed officials from the Department of the Treasury regarding how the data were developed, and compared the data between the two publicly available sources from the Joint Committee on Taxation and the Office of Management and Budget. We determined that the obligations and tax subsidy data used in this report were of sufficient quality for our purposes. Because this effort was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or errors in entering data into a database or analyzing them can introduce unwanted variability into the survey results. However, we took steps to minimize such nonsampling errors in developing the questionnaire—including using a social science survey specialist to help design and pretest the questionnaire. 
We also minimized the nonsampling errors when analyzing the data, including using a computer program for analysis, and using an independent analyst to review the computer program. Finally, we verified the accuracy of a small sample of keypunched records by comparing them with their corresponding questionnaires, and we corrected the errors we found. Less than 0.5 percent of the data items we checked had random keypunch errors that would not have been corrected during data processing. To assess the extent of fragmentation, overlap, and duplication of wind-related initiatives, we first defined these terms based on definitions established in our prior reports. Specifically, for purposes of this report, fragmentation, overlap, and duplication were defined as follows: Fragmentation occurs when more than one federal agency, or more than one organization within an agency, is involved in the same broad area of national need. Overlap occurs when multiple initiatives support similar wind issues, similar technology advancement activities, and similar recipients, as well as having similar goals. Duplication occurs when multiple initiatives provide financial support to the same recipient for a single wind project. Duplication as we have defined it may be necessary in some cases for specific wind projects to be built. However, in other cases, duplication may result in ineffective use of federal financial support—that is, it may result in some amount of support being provided for specific wind projects that is not needed for them to be built. To determine the extent of fragmentation, we used agencies’ questionnaire responses to confirm the number of federal agencies that supported wind-related initiatives. To determine the extent of overlap, we first analyzed the questionnaire responses to categorize initiatives’ recipient types into four categories, as follows: Energy providers, developers, and manufacturers. 
This category includes organizations in the energy industry that provide electricity produced by wind energy, develop wind energy generation projects, or manufacture equipment associated with wind turbines or other wind-related technologies. Public and private researchers. This category includes researchers employed by or associated with federal, state, or other governmental entities (such as national laboratories), academic institutions, nonprofit organizations, or private companies. State, local, tribal, and other governmental organizations. This category includes nonfederal governmental organizations, such as state and local governments and quasi-governmental entities, and federally recognized American Indian tribes. Individuals. This category includes members of the general public who produce, develop, or use wind energy, and who receive support independently of their affiliation with a private, governmental, or other organization. We then analyzed information on the initiatives’ descriptions and goals provided in the questionnaire responses and categorized initiatives into all applicable categories that we developed for types of goals. These categories included initiatives that facilitated the assessment of wind resources; initiatives that fostered technological improvements or cost reduction in wind technologies; initiatives that financed the construction or use of wind facilities; and initiatives that addressed policy and regulatory barriers to wind energy development. Once these categories were defined, two staff independently read through each initiative’s description and goals and identified all categories that likely applied to the initiative. They then discussed the categorizations about which they disagreed and came to agreement about whether or not the category applied to the initiative. 
Using agency-provided data on wind issues and technology advancement activities supported, and our categorizations of the initiatives’ recipients and types of goals, we identified overlapping initiatives as those sharing at least one common wind issue, technology advancement activity, recipient type, and type of goal. To identify duplication of federal support, we focused our review on those initiatives with the largest estimated obligations or revenue losses in 2011 for activities specifically related to wind. Specifically, we first focused our analysis on the five lead agencies, which implemented 89 percent of initiatives comprising 99.9 percent of estimated obligations and all estimated revenue losses in 2011—the Departments of Energy (DOE), the Interior, Agriculture (USDA), Commerce, and the Treasury. Second, we focused our analysis on initiatives that included support for deployment, which were responsible for 99 percent of obligations and all estimated revenue losses in 2011. Third, because of the relatively large number of and variety in the initiatives that supported deployment, we further focused our analysis on those deployment initiatives that provided financial support for construction or operation of wind facilities. Fourth, we narrowed our focus to initiatives that included a focus on utility-scale land-based wind—the most commonly supported wind issue—and fifth, we narrowed our focus to initiatives with recipients that included energy providers, developers, or manufacturers—the most commonly supported recipient type. Applying all of these criteria resulted in a list of 15 initiatives, which represented 96 percent of estimated obligations and all revenue losses, according to best available estimates. From this list of 15 initiatives, we reviewed agencies’ questionnaire responses, agency documents, and laws and regulations related to the initiatives, and spoke with agency officials and outside experts about them. 
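The overlap test described above—two initiatives overlap when they share at least one common wind issue, technology advancement activity, recipient type, and type of goal—can be sketched as set intersections over the questionnaire attributes. The following is an illustrative sketch only, not the analysis program GAO used; the initiative records and attribute values are invented for the example.

```python
# Hypothetical sketch of the overlap rule: two initiatives overlap if they
# share at least one value in each of four dimensions (wind issues,
# technology advancement activities, recipient types, and types of goals).

def overlaps(a, b):
    """Return True if initiatives a and b intersect in all four dimensions."""
    dimensions = ("wind_issues", "activities", "recipients", "goals")
    # all() treats a nonempty set intersection as True, an empty one as False
    return all(a[d] & b[d] for d in dimensions)

# Invented example records (attribute values are illustrative only).
tax_credit = {
    "wind_issues": {"utility-scale land-based"},
    "activities": {"deployment"},
    "recipients": {"energy providers, developers, and manufacturers"},
    "goals": {"finance construction or use of wind facilities"},
}
rural_grant = {
    "wind_issues": {"utility-scale land-based", "distributed"},
    "activities": {"deployment"},
    "recipients": {"energy providers, developers, and manufacturers"},
    "goals": {"finance construction or use of wind facilities"},
}
offshore_rd = {
    "wind_issues": {"offshore"},
    "activities": {"research and development"},
    "recipients": {"public and private researchers"},
    "goals": {"foster technological improvements or cost reduction"},
}

print(overlaps(tax_credit, rural_grant))  # True: all four dimensions intersect
print(overlaps(tax_credit, offshore_rd))  # False: no dimension intersects
```

Under this rule, sharing values in only some dimensions is not enough; a deployment grant and a research program with no common recipients or goals, as in the second comparison, would not be counted as overlapping.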
Based on this review, we determined that there was only a small potential that duplicative support was provided by the four Treasury initiatives because eligibility for their support was explicitly limited to tax-exempt entities, which were generally not supported by other initiatives such as Treasury’s other tax expenditures. In addition, on the basis of our review of documents and discussions with agency officials and others, we determined that there was only a small potential for duplication of another initiative on our list—Treasury’s Qualifying Advanced Energy Project Credit—because its eligibility criteria limit its support to manufacturing facilities, rather than the energy generation facilities that are generally supported by the other initiatives we identified that have provided or could provide duplicative support. In addition, all available credits under the initiative were allocated by the end of 2010. For initiatives we identified that have provided or could provide duplicative support, we collected information from agency websites on financial support provided for projects, and we interviewed agency officials and reviewed program guidance and regulations for information on how agencies allocate support to projects through the initiatives, and efforts by the agencies to assess applicant need for the support of their initiatives. We also reviewed studies of the initiatives by DOE’s national laboratories, the Congressional Research Service, and other experts. In addition, we interviewed six financial professionals from several of the major financial institutions and legal firms active in wind energy project financing in recent years regarding the support for wind projects provided by the initiatives. We identified these individuals based on their presentations at the annual national wind industry conference held by the American Wind Energy Association and through reviews of industry reports, newsletters, and other publications. 
To obtain additional information about the types of support available to wind project developers from state governments, we collected and analyzed data from the Database of State Incentives for Renewables and Efficiency (DSIRE), a comprehensive source of information on state incentives and policies that promote renewable energy and energy efficiency, which is funded by DOE. We interviewed researchers who developed and maintain DSIRE regarding their methodology for collecting and summarizing information on state incentives and policies and their processes for ensuring the data are accurate and up-to-date, and we determined the data were sufficiently reliable for our purposes. We also interviewed agency officials and financial professionals for additional information on state initiatives. We conducted this performance audit from February 2012 to March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 5 through 13 below provide descriptions, by agency, of the 82 federal wind-related initiatives we identified. The tables also provide information reported by agencies on initiatives that will or have expired, in full or in part, due to an expiration of legislative authority, depletion of available appropriations, or some other expiration under the law as of fall of 2012. Table 14 below reflects the fiscal year 2011 revenue loss estimates for Treasury’s wind-related tax expenditures—both their total estimated revenue losses and their estimated revenue losses for activities specifically related to wind. 
In addition to the individual named above, Dan Haas, Assistant Director; Krista Anderson; Keya Chateauneuf; Cindy Gilbert; Miles Ingram; Cynthia Norris; Jerome Sandau; MaryLynn Sergent; Maria Stattel; Anne Stevens; Barbara Timmerman; and Jack Wang made key contributions to this report.
Wind energy has been the fastest growing source of U.S. electric power generation in recent years. The increase in federal funding for wind technologies and involvement of multiple agencies has raised questions about fragmented, overlapping, or duplicative federal support. In this report, GAO examines federal wind-related initiatives--programs or groups of agency activities that promoted wind energy through a specific emphasis or focus. GAO (1) identifies wind-related initiatives implemented by federal agencies in fiscal year 2011 and their key characteristics; (2) assesses the extent of fragmentation, overlap, and duplication, if any, among these initiatives, and the extent to which they were coordinated; and (3) examines how agencies allocate support to projects through their initiatives and the extent to which they assess applicant need for support. GAO sent a questionnaire to agencies to identify wind-related initiatives and to obtain data on their characteristics; potential for fragmentation, overlap, or duplication; and related coordination. GAO also reviewed studies of the initiatives and interviewed agency officials and financial professionals. GAO identified 82 federal wind-related initiatives, with a variety of key characteristics, implemented by nine agencies in fiscal year 2011. Five agencies--the Departments of Energy (DOE), the Interior, Agriculture (USDA), Commerce, and the Treasury--collectively implemented 73 of the initiatives. The 82 initiatives incurred about $2.9 billion in wind-related obligations and provided estimated wind-related tax subsidies totaling at least $1.1 billion in fiscal year 2011, although complete data on wind-related tax subsidies were not available. Initiatives supporting deployment of wind facilities, such as those financing their construction or use, constituted the majority of initiatives and accounted for nearly all obligations and estimated tax subsidies related to wind in fiscal year 2011. 
In particular, a tax expenditure and a grant initiative, both administered by Treasury, accounted for nearly all federal financial support for wind energy. The 82 wind-related initiatives GAO identified were fragmented across agencies, most had overlapping characteristics, and several that financed deployment of wind facilities provided some duplicative financial support. The 82 initiatives were fragmented because they were implemented across nine agencies, and 68 overlapped with at least one other initiative because of shared characteristics. About half of all initiatives reported formal coordination. Such coordination can, in principle, reduce the risk of unnecessary duplication and improve the effectiveness of federal efforts. However, GAO identified 7 initiatives that have provided duplicative support--financial support from multiple initiatives to the same recipient for deployment of a single project. Specifically, wind project developers have in many cases combined the support of more than 1 Treasury initiative and, in some cases, have received additional support from smaller grant or loan guarantee programs at DOE or USDA. GAO also identified 3 other initiatives that did not fund any wind projects in fiscal year 2011 but that could, based on their eligibility criteria, be combined with 1 or more initiatives to provide duplicative support. Of the 10 initiatives, those at Treasury accounted for over 95 percent of the federal financial support for wind in fiscal year 2011. Agencies implementing the 10 initiatives allocate support to projects on the basis of the initiatives' goals or eligibility criteria, but the extent to which applicant financial need is considered is unclear. DOE and USDA--which have some discretion over the projects they support through their initiatives--allocate support based on projects' ability to meet initiative goals such as reducing emissions or benefitting rural communities, as well as other criteria. 
Both agencies also consider applicant need for the support of some initiatives, according to officials. However, GAO found that neither agency documents assessments of applicant need; therefore, the extent to which they use such assessments to determine how much support to provide is unclear. Unlike DOE and USDA, Treasury generally supports projects based on the tax code’s eligibility criteria and does not have discretion to allocate support to projects based on need. The support of these initiatives may be necessary in many cases for wind projects to be built; however, because agencies do not document assessments of need, it is unclear in some cases whether the entire amount of federal support provided was necessary. Federal support in excess of what is needed to induce projects to be built could instead be used to induce other projects to be built or simply withheld, thereby reducing federal expenditures. GAO recommends that, to the extent possible within their statutory authority, DOE and USDA formally assess and document whether the federal financial support of their initiatives is needed for applicants’ wind projects to be built. DOE agreed with the recommendation, and USDA generally concurred with the findings related to its initiatives.
An improper payment is any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. This definition includes any payment to an ineligible recipient, any payment for an ineligible good or service, any duplicate payment, any payment for a good or service not received (except where authorized by law), and any payment that does not account for credit for applicable discounts. Improper Payments Elimination and Recovery Act of 2010, Pub. L. No. 111-204, § 2(e), 124 Stat. 2224, 2227 (2010) (codified at 31 U.S.C. § 3321 note). Office of Management and Budget guidance also instructs agencies to report as improper payments any payments for which insufficient or no documentation was found. CMS uses claim review contractors to review the claims that pose the greatest financial risk to Medicare (see Table 1). However, the contractors have varying roles and levels of CMS direction and oversight in identifying claims for review. MACs process and pay claims and conduct prepayment and postpayment reviews for their established geographic regions. As of January 2016, 12 MACs—referred to as A/B MACs—processed and reviewed Medicare Part A and Part B claims, and 4 MACs—referred to as DME MACs—processed and reviewed DME claims. MACs are responsible for identifying both high-risk providers and services for claim reviews, and CMS has generally given the MACs broad discretion to identify claims for review. Each individual MAC is responsible for developing a claim review strategy to target high-risk claims. In their role of processing and paying claims, the MACs also take action based on claim review findings. The MACs deny payment on claims when they or other contractors identify payment errors during prepayment claim reviews.
When MACs or other claim review contractors identify overpayments using postpayment reviews, the MACs seek to recover the overpayment by sending providers what is referred to as a demand letter. In the event of underpayments, the MACs return the balance to the provider in a future reimbursement. For additional information on the MAC roles and responsibilities, see GAO, Medicare Administrative Contractors: CMS Should Consider Whether Alternative Approaches Could Enhance Contractor Performance, GAO-15-372 (Washington, D.C.: Apr. 2015). Congress established per beneficiary Medicare limits for therapy services, which took effect in 1999. However, Congress imposed temporary moratoria on the limits several times until 2006, when it required CMS to implement an exceptions process in which exceptions to the limits are allowed for reasonable and necessary therapy services. Since 2012, the exceptions process has applied a claim review requirement to claims after a beneficiary’s annual incurred expenses reach certain thresholds. For additional information on the therapy service limits, see GAO, Medicare Outpatient Therapy: Implementation of the 2012 Manual Medical Review Process, GAO-13-613 (Washington, D.C.: July 2013). As required by law, the RAs are paid on a contingent basis from recovered overpayments. The contingency fees generally range from 9.0 percent to 17.5 percent and vary by RA region, the type of service reviewed, and the way in which the provider remits the overpayment. Because the RAs are paid from recovered funds rather than appropriated funds, the use of RAs expands CMS’s capacity for claim reviews without placing additional demands on the agency’s budget. The RAs are allowed to target high-dollar claims that they believe have a high risk of improper payments, though they are not allowed to identify claims for review solely because they are high-dollar claims.
The RAs are also subject to limits that only allow them to review a certain percentage or number of a given provider’s claims. The RAs initially identified high rates of error for short inpatient hospital stays and targeted those claims for review. Certain hospital services, particularly services that require short hospital stays, can be provided in both an inpatient and outpatient setting, though inpatient services generally have higher Medicare reimbursement amounts. The RAs found that many inpatient services should have been provided on an outpatient basis and denied many claims for having been rendered in a medically unnecessary setting. Medicare has a process that allows for the appeal of claim denials, and hospitals appealed many of the short inpatient stay claims denied by RAs. Hospital appeals of RA claim denials helped contribute to a significant backlog in the Medicare appeals system. CMS obtained waiver authority aimed at determining whether RA prepayment reviews could prevent fraud and the resulting improper payments and, in turn, lower the FFS improper payment rate. From 2012 through 2014, operating under this waiver authority, CMS conducted the RA Prepayment Review Demonstration in 11 states. In these states, CMS directed the RAs to conduct prepayment claim reviews for specific inpatient hospital services. Additionally, the RAs conducted prepayment reviews of therapy claims that exceeded the annual per beneficiary limit in the 11 demonstration states. Under the demonstration, instead of being paid a contingency fee based on recovered overpayments, the RAs were paid contingency fees based on claim denial amounts. In anticipation of awarding new RA contracts, CMS began limiting the number of RA claim reviews and discontinued the RA Prepayment Review Demonstration in 2014. CMS required the RAs to stop sending requests for medical documentation to providers in February 2014, so that the RAs could complete all outstanding claim reviews by the end of their contracts.
However, in June 2015, CMS cancelled the procurement for the next round of RA contracts, which had been delayed because of bid protests. Instead, CMS modified the existing RA contracts to allow the RAs to continue claim review activities through July 31, 2016. In November 2015, CMS issued new requests for proposals for the next round of RA contracts and, according to CMS officials, plans to award them in 2016. The SMRC conducts nationwide postpayment claim reviews as part of CMS-directed studies aimed at lowering improper payment rates. The SMRC studies often focus on issues related to specific services at high risk for improper payments, and provide CMS with information on the prevalence of the issues and recommendations on how to address them. Although CMS directs the types of services and improper payment issues that the SMRC examines, the SMRC identifies the specific claims that are reviewed as part of the studies. CMS’s CERT program annually estimates the amount and rate of improper payments in the Medicare FFS program, and CMS uses the CERT results, in part, to direct and oversee the work of claim review contractors, including the MACs, RAs, and SMRC. CMS’s CERT program develops its estimates by using a contractor to conduct postpayment claim reviews on a statistically valid random sample of claims. The CERT program develops the estimates as part of CMS’s efforts to comply with the Improper Payments Information Act, which requires agencies to annually identify programs susceptible to significant improper payments, estimate amounts improperly paid, and report these estimates and actions taken to reduce them. In addition, the CERT program estimates improper payment rates specific to Medicare service and provider types and identifies services that may be particularly at risk for improper payments. See Improper Payments Information Act of 2002 (IPIA), Pub. L. No. 107-300, 116 Stat. 2350 (2002) (codified, as amended, at 31 U.S.C. § 3321 note).
The IPIA was subsequently amended by the Improper Payments Elimination and Recovery Act of 2010, Pub. L. No. 111-204, 124 Stat. 2224 (2010), and the Improper Payments Elimination and Recovery Improvement Act of 2012, Pub. L. No. 112-248, 126 Stat. 2390 (2013). We have also reported that prepayment controls are generally more cost-effective than postpayment controls and help avoid costs associated with the “pay and chase” process. See GAO, A Framework for Managing Fraud Risks in Federal Programs, GAO-15-593SP (Washington, D.C.: July 28, 2015). CMS is not always able to collect overpayments identified through postpayment reviews. A 2013 HHS OIG study found that each year over the period from fiscal year 2007 to fiscal year 2010, approximately 6 to 9 percent of all overpayments identified by claim review contractors were deemed not collectible. Postpayment reviews also require more administrative resources than prepayment reviews. Once overpayments are identified on a postpayment basis, CMS requires contractors to take timely efforts to collect the overpayments. HHS OIG reported that the process for recovering overpayments can involve creating and managing accounts receivable for the overpayments, tracking provider invoices and payments, and managing extended repayment plans for certain providers. In contrast, contractors do not need to take these steps, and expend the associated resources, for prepayment reviews, which deny claims before overpayments are made. Key stakeholders we interviewed identified few significant differences in conducting and responding to prepayment and postpayment reviews. Specifically, CMS, MAC, and RA officials stated that prepayment and postpayment review activities are generally conducted by claim review contractors in similar ways.
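The "pay and chase" tradeoff above can be made concrete with a small sketch. This is an illustrative calculation, not CMS's model: the 6 to 9 percent not-collectible range comes from the HHS OIG finding cited above, while the overpayment total is a hypothetical figure.

```python
# Illustrative sketch (not CMS's model): the recovery shortfall implied by the
# 2013 HHS OIG finding that roughly 6 to 9 percent of identified overpayments
# are deemed not collectible. The $100M identified-overpayment figure below is
# a hypothetical assumption for illustration.

def unrecovered_range(identified_overpayments: float,
                      low_rate: float = 0.06,
                      high_rate: float = 0.09) -> tuple[float, float]:
    """Return low and high estimates of overpayments that are never recovered."""
    return (identified_overpayments * low_rate,
            identified_overpayments * high_rate)

low, high = unrecovered_range(100_000_000.0)  # hypothetical $100M identified
print(low, high)  # 6000000.0 9000000.0
```

A prepayment denial of the same claims would avoid this shortfall entirely, which is the arithmetic behind the report's point that prepayment reviews better protect Medicare funds.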
Officials we interviewed from health care provider organizations told us that providers generally respond to prepayment and postpayment reviews similarly, as both types of review occur after a service has been rendered, and involve similar medical documentation requirements and appeal rights. These statistics are based on CMS summary financial data, and the currently not collectible classification for overpayments can vary based on when overpayments are identified and demanded, and if overpayments are under appeal. See Department of Health and Human Services, Office of Inspector General, Medicare’s Currently Not Collectible Overpayments, OEI-03-11-00670 (Washington, D.C.: June 2013). First, providers can hold discussions with the RAs for postpayment review findings, and CMS recently implemented the option for SMRC findings as well. The discussions offer providers the opportunity to give additional information before payment determinations are made and before providers potentially enter the Medicare claims appeals process. Several of the provider organizations we interviewed found the RA discussions helpful, stating that some providers have been able to get RA overpayment determinations reversed. Such discussions are not available for RA prepayment claim reviews or for MAC reviews. CMS officials stated that the discussions are not feasible for prepayment claim reviews due to timing difficulties, as the MACs and RAs are required to make payment determinations within 30 days after receiving providers’ medical records. Second, providers stated that they may face certain cash flow burdens with prepayment claim reviews that they do not face with postpayment reviews due to how the claims are treated in the Medicare appeals process. When appealing postpayment review overpayment determinations, providers keep their Medicare payment through the first two levels of appeal before CMS recovers the identified overpayment.
If the overpayment determinations are overturned at a higher appeal level, CMS must pay back the recovered amount with interest accrued for the period in which the amount was recouped. In contrast, providers do not receive payment for claims denied on a prepayment basis and, if prepayment denials are overturned on appeal, providers do not receive interest on the payments for the duration the payments were held by CMS. The Medicare FFS appeals process consists of five levels of review that include CMS contractors, staff divisions within HHS, and ultimately, the federal judicial system, allowing appellants who are dissatisfied with the decision at one level to appeal to the next level. Each MAC’s claim review strategy identifies the types of claims deemed most critical for that MAC to address and describes plans to address them. In 2013 and 2014, the MACs conducted approximately 76,000 postpayment claim reviews, though some MACs did not conduct any postpayment claim reviews. Prior to the establishment of the national RA program, the MACs conducted a greater proportion of postpayment reviews. However, the MACs have shifted nearly all of their focus to conducting prepayment reviews, as responsibility for conducting postpayment reviews has generally shifted to the RAs. According to CMS officials, the MACs currently use postpayment reviews to analyze billing patterns to inform other review activities, including future prepayment reviews, and to help determine where to conduct educational outreach for specific providers. CMS has also encouraged the MACs to use postpayment reviews to perform extrapolation, a process in which the MACs estimate an overpayment amount for a large number of claims based on a sample of claim reviews. According to CMS officials, extrapolation is not used often but is an effective strategy for providers that submit large volumes of low-dollar claims with high improper payment rates.
The SMRC is focused on examining Medicare billing and payment issues at the direction of CMS, and all of its approximately 178,000 reviews in 2013 and 2014 were postpayment reviews. The SMRC uses postpayment reviews because its studies involve developing sampling methodologies to examine issues with specific services or specific providers. For example, in 2013, CMS directed the SMRC to complete a national review of home health agencies, which involved reviewing five claims from every home health agency in the country. CMS had the SMRC conduct this study to examine issues arising from a new coverage requirement that raised the improper payment rate for home health services. Additionally, a number of SMRC studies used postpayment sampling to perform extrapolation to determine overpayment amounts for certain providers. The RAs generally conducted postpayment reviews, though they conducted prepayment reviews under the Prepayment Review Demonstration. The RAs conducted approximately 85 percent of their claim reviews on a postpayment basis in 2013 and 2014—accounting for approximately 1.7 million postpayment claim reviews—with the other 15 percent being prepayment reviews conducted under the demonstration. CMS is no longer using the RAs to conduct prepayment reviews because the demonstration ended. Outside of a demonstration, CMS must pay the RAs from recovered overpayments, which effectively limits the RAs to postpayment reviews. CMS and RA officials whom we interviewed generally considered the demonstration a success, and CMS officials told us that they included prepayment reviews as a potential work activity in the requests for proposals for the next round of RA contracts, in the event that the agency is given the authority to pay RAs on a different basis. However, the President’s budget proposals for fiscal years 2015 through 2017 did not contain any legislative proposals that CMS be provided such authority.
Obtaining the authority to allow the RAs to conduct prepayment reviews would align with CMS’s strategy to pay claims properly the first time. In not seeking the authority, CMS may be missing an opportunity to reduce the amount of uncollectible overpayments from RA reviews and save administrative resources associated with recovering overpayments. The rate of improper payments for home health services rose from 6.1 percent in fiscal year 2012 to 17.3 percent in fiscal year 2013 and to 51.4 percent in fiscal year 2014. According to CMS, the increase in improper payments occurred primarily because of CMS’s implementation of a requirement that home health agencies have documentation showing that referring providers conducted a face-to-face examination of beneficiaries before certifying them as eligible for home health services. Our analysis of RA claim review data shows that the RAs focused on reviewing inpatient claims in 2013 and 2014, though this focus was not consistent with the degree to which inpatient services constituted improper payments, or with CMS’s expectation that the RAs review all claim types. In 2013, a significant majority—78 percent—of all RA claim reviews were for inpatient claims, and in 2014, nearly half—47 percent—of all RA claim reviews were for inpatient claims (see Table 3). For RA postpayment reviews specifically, which exclude reviews conducted as part of the RA Prepayment Review Demonstration, 87 percent of RA reviews were for inpatient claims in 2013, and 64 percent were for inpatient claims in 2014. Inpatient services had high amounts of improper payments relative to other types of services—with over $8 billion in improper payments in fiscal year 2012 and over $10 billion in fiscal year 2013—which reflect the costs of providing these services. However, inpatient services did not have a high improper payment rate relative to other services and constituted about 30 percent of overall Medicare FFS improper payments in both years.
As will be discussed, the proportion of inpatient reviews in 2014 would likely have been higher if CMS—first under its own authority and then as required by law—had not prohibited the RAs from conducting reviews of claims for short inpatient hospital stays at the beginning of fiscal year 2014. The RAs conducted about 1 million fewer claim reviews in 2014 compared to 2013, and nearly all of the decrease can be attributed to fewer reviews of inpatient claims. In general, the RAs have discretion to select the claims they review, and their focus on reviewing inpatient claims is consistent with the financial incentives associated with the contingency fees they receive, as inpatient claims generally have higher payment amounts compared to other claim types. By law, RAs receive a portion of the recovered overpayments they identify, and RA officials told us that they generally focus their claim reviews on audit issues that have the greatest potential returns. Our analysis found that RA claim reviews for inpatient services had higher average identified improper payment amounts per postpayment claim review relative to other claim types in 2013 and 2014 (see Table 4). For example, in 2013, the RAs identified about 10 times the amount per postpayment claim review for inpatient claims compared to claim reviews for physicians. Although CMS expects the RAs to review all claim types, CMS’s oversight of the RAs did not ensure that the RAs distributed their reviews across claim types in 2013 and 2014. According to CMS officials, the agency’s approval of RA audit issues is the primary way in which CMS controls the type of claims that the RAs review. However, the officials said they generally focus on the appropriateness of the review methodology when determining whether to approve the audit issues, instead of on whether the RA’s claim review strategy encompasses all claim types. 
The RAs generally determine the types of audit issues that they present to CMS for approval, and based on our analysis of RA audit issues data, we found that from the inception of the RA program to May 2015, 80 percent of the audit issues approved by CMS were for inpatient claims. Additionally, CMS generally gives RAs discretion regarding the claims that they select for review among approved audit issues. Effective October 1, 2013, CMS changed the coverage requirements for short inpatient hospital stays. As a result, CMS prohibited RA claim reviews related to the appropriateness of inpatient admissions for claims with dates of admission between October 1, 2013, and September 30, 2014. In April 2014 and April 2015, Congress enacted legislation directing CMS to continue the prohibition of RA claim reviews related to the appropriateness of inpatient admissions for claims with dates of admission through September 30, 2015, unless there was evidence of fraud and abuse. Protecting Access to Medicare Act of 2014, Pub. L. No. 113-93, § 111, 128 Stat. 1040, 1044 (2014); Medicare Access and CHIP Reauthorization Act of 2015, Pub. L. No. 114-10, § 521, 129 Stat. 87, 176 (2015). In July 2015, CMS announced that it would not allow such RA claim reviews for claims with dates of admission from October 1, 2015, through December 31, 2015. The RAs were allowed to continue reviews of short stay inpatient claims for reasons other than reviewing inpatient status, such as reviews related to coding requirements. Beginning on October 1, 2015, Quality Improvement Organizations assumed responsibility for conducting initial claim reviews related to the appropriateness of inpatient hospital admissions. Starting January 1, 2016, the Quality Improvement Organizations will refer providers exhibiting persistent noncompliance with Medicare policies to the RAs for potential further review.
CMS stated that it will monitor the extent to which the RAs are reviewing all claim types, may impose a minimum percentage of reviews by claim type, and may take corrective action against RAs that do not review all claim types. CMS has also taken steps to provide incentives for the RAs to review other types of claims. To encourage the RAs to review DME claims— which had the highest rates of improper payments in fiscal years 2012 and 2013—CMS officials stated that they increased the contingency fee percentage paid to the RAs for DME claims. Further, in the requests for proposals for the next round of RA contracts, CMS included a request for a national RA that will specifically review DME, home health agency, and hospice claims. CMS officials told us that they are procuring this new RA because the existing four regional RAs reviewed a relatively small number of these types of claims. Although DME, home health agency, and hospice claims combined represented more than 25 percent of improper payments in both 2013 and 2014, they constituted 5 percent of RA reviews in 2013 and 6 percent of reviews in 2014. In 2013 and 2014, the MACs focused their claim reviews on physician and DME claims. Physician claims accounted for 49 percent of MAC claim reviews in 2013 and 55 percent of reviews in 2014, while representing 30 percent of improper payments in fiscal year 2012 and 26 percent in fiscal year 2013 (see Table 5). DME claims accounted for 29 percent of their reviews in 2013 and 26 percent in 2014, while representing 22 percent of total improper payments in fiscal year 2013 and 16 percent of improper payments in fiscal year 2014. DME claims also had the highest rates of improper payments in both years. 
According to CMS officials, the MACs focused their claim reviews on physician claims—a category that encompasses a large variety of provider types, including labs, ambulances, and individual physician offices—because they constitute a significant majority of all Medicare claims. CMS officials also told us that they direct MAC claim review resources to DME claims in particular because of their high improper payment rate. Further, CMS officials told us that the MACs’ focus on reviewing physician and DME claims was in part due to how CMS structures the MAC claim review workload. CMS officials noted that each A/B MAC is responsible for addressing improper payments for both Medicare Part A and Part B, and MAC Part B claim reviews largely focus on physician claims. Additionally, 4 of the 16 MACs are DME MACs that focus their reviews solely on DME claims. CMS officials also noted that MAC reviews of inpatient claims were likely lower during this period because of CMS’s implementation of new coverage policies for inpatient admissions. Similar to the RAs, the MACs were limited in conducting reviews for short inpatient hospital stays after October 1, 2013. The focus of the SMRC’s claim reviews depended on the studies that CMS directed the contractor to conduct in 2013 and 2014. In 2013, the SMRC focused its claim reviews on outpatient and physician claims, with physician claims accounting for half of all SMRC reviews (see Table 6). Physician claims accounted for 30 percent—the largest percentage—of the total amount of estimated improper payments in fiscal year 2012. In 2014, the SMRC focused 46 percent of its reviews on home health agency claims and 44 percent of its claim reviews on DME claims, which had the two highest improper payment rates in fiscal year 2013. CMS generally directs the SMRC to conduct studies examining specific services, and the number of claims reviewed by claim type is highly dependent on the methodologies of the studies.
For example, one SMRC study involved reviewing nearly 50,000 DME claims for suppliers deemed high risk for having improperly billed for diabetic test strips. In 2014, the claim reviews for this study accounted for all of the SMRC’s DME claim reviews and nearly half of all the SMRC claim reviews. Additionally, in 2014, the SMRC reviewed more than 50,000 claims as part of its study that examined five claims from every home health agency. The study followed a significant increase in the improper payment rate for home health agencies from 2012 to 2013, from 6 percent to 17 percent. In some cases, SMRC studies focused on specific providers. For example, a 2013 SMRC study reviewed claims for a single hospital to follow up on billing issues previously identified by the HHS OIG. The RAs were paid an average of $158 per claim review conducted in 2013 and 2014 and identified $14 in improper payments, on average, per dollar paid by CMS in contingency fees (see Table 7). The cost to CMS in RA contingency fees per review decreased from $178 in 2013 to $101 in 2014 because the average identified improper payment amount per review decreased from $2,549 to $1,509. The decrease in the average identified improper payment amount per review likely resulted from the RAs conducting proportionately fewer reviews of inpatient claims in 2014 compared to 2013. The SMRC was paid an average of $256 per claim review conducted in studies initiated in fiscal years 2013 and 2014, though the amount paid per claim review varied by study and varied between years (see Table 8). In particular, the amount paid to the SMRC is significantly higher for studies that involve extrapolation for providers who had their claims reviewed as part of the studies and were found to have a high error rate. Based on our analysis, the higher average amount paid per review in 2014—$346 compared to $110 in 2013—can in part be attributed to the SMRC conducting proportionally more studies involving extrapolation in 2014. 
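The RA contingency-fee averages reported above can be tied together with a short sketch. This is a minimal illustration of the arithmetic, assuming only the report's per-review figures; the function and variable names are ours, not CMS terminology.

```python
# A minimal sketch of the RA contingency-fee arithmetic reported above. The
# per-review averages ($178 fee / $2,549 identified in 2013; $101 / $1,509 in
# 2014) are the report's figures; the names here are illustrative.

def fee_economics(identified_per_review: float, fee_per_review: float) -> dict:
    """Derive two ratios from the per-review averages: the effective
    contingency-fee rate CMS paid, and the improper payments identified
    per dollar paid in fees."""
    return {
        "effective_fee_rate": fee_per_review / identified_per_review,
        "identified_per_fee_dollar": identified_per_review / fee_per_review,
    }

ra_2013 = fee_economics(identified_per_review=2549.0, fee_per_review=178.0)
ra_2014 = fee_economics(identified_per_review=1509.0, fee_per_review=101.0)

# The effective fee rate implied by the averages stays near 7 percent in both
# years, so the drop in fee cost per review ($178 to $101) simply tracks the
# drop in identified improper payments per review ($2,549 to $1,509).
print(round(ra_2013["effective_fee_rate"], 3))  # 0.07
print(round(ra_2014["identified_per_fee_dollar"], 1))  # 14.9
```

The roughly constant implied rate is consistent with the report's explanation that the 2014 decline in fee cost per review resulted from proportionately fewer reviews of high-dollar inpatient claims, not from a change in fee terms.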
As well as increasing study costs, the use of extrapolation can significantly increase the associated amounts of identified improper payments per study. For example, the SMRC study on diabetic test strips involved extrapolation and included reviews of nearly 50,000 claims from 500 providers. It cost CMS more than $23 million to complete, but the SMRC identified more than $63 million in extrapolated improper payments. According to CMS officials, the agency has the SMRC perform extrapolation as part of its studies when it is cost effective—that is, when anticipated extrapolated overpayment amounts are greater than the costs associated with having the SMRC conduct the extrapolations. The amount the SMRC was paid per review also varied based on the type of service being reviewed and the number of reviews conducted. CMS pays the SMRC more for claim reviews for Part A services, such as inpatient and home health claims, than for claim reviews for Part B services, such as physician and DME claims, because CMS officials said that claim reviews of Part A services are generally more resource- intensive. Additionally, CMS gets a volume discount on SMRC claim reviews, with the cost per review decreasing once the SMRC reaches certain thresholds for the number of claim reviews in a given year. The SMRC identified $7 in improper payments per dollar paid by the agency, on average, in 2013 and 2014, though the average amount varied considerably by study and varied for 2013 and 2014. In 2013, the SMRC averaged $25 in improper payments per dollar paid, while in 2014, it averaged $4. The larger figure for 2013 is primarily attributed to two SMRC studies that involved claim reviews of inpatient claims that identified more than $160 million in improper payments but cost CMS less than $1 million in total to conduct. 
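The cost-effectiveness test CMS officials described for SMRC extrapolation can be sketched directly. This is a hedged illustration using the diabetic test strip study's figures from the report; the function names are ours.

```python
# Hedged sketch of the cost-effectiveness rule CMS officials described above:
# CMS has the SMRC extrapolate when anticipated extrapolated overpayments
# exceed the cost of conducting the study. Dollar figures come from the
# diabetic test strip study in the report; the function names are illustrative.

def extrapolation_is_cost_effective(anticipated_overpayments: float,
                                    study_cost: float) -> bool:
    """Extrapolate only when the expected recovery exceeds the study's cost."""
    return anticipated_overpayments > study_cost

def return_per_dollar(identified: float, paid: float) -> float:
    """Improper payments identified per dollar CMS paid the SMRC."""
    return identified / paid

# Diabetic test strip study: > $23 million in costs to complete, but
# > $63 million in extrapolated improper payments identified.
assert extrapolation_is_cost_effective(63_000_000, 23_000_000)
print(round(return_per_dollar(63_000_000, 23_000_000), 1))  # 2.7
```

Even this relatively expensive extrapolation study identified well over two dollars in improper payments per dollar paid, though below the $7 average the report gives for SMRC reviews overall in 2013 and 2014.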
We were unable to determine the cost per review and the amount of improper payments identified by the MACs per dollar paid by CMS because the agency does not have reliable data on funding of MAC claim reviews for 2013 and 2014, and the agency collects inconsistent data on the savings from prepayment claim denials. For an agency to achieve its objectives, federal internal control standards provide that an agency must obtain relevant data to evaluate performance towards achieving agency goals. See GAO/AIMD-00-21.3.1. By not collecting reliable data on claim review funding and by not having consistent data on identified improper payments, CMS does not have the information it needs to evaluate MAC cost effectiveness and performance in protecting Medicare funds. According to CMS officials, the MACs have reported claim review costs only as part of higher-level, broader contractual work activities. CMS officials told us that they have not required the MACs to report data on specific funds spent to conduct prepayment and postpayment claim reviews. However, as of February 2016, CMS officials told us that all MACs are either currently reporting specific data on prepayment and postpayment claim review costs or planning to do so soon. We also found that data on savings from MAC prepayment reviews were not consistent across the MACs. In particular, the MACs use different methods to calculate and report savings associated with prepayment claim denials, which represented about 98 percent of MAC claim review activity in 2013 and 2014. According to CMS and MAC officials, claims that are denied on a prepayment basis are never fully processed, and the Medicare payment amounts associated with the claims are never calculated. In the absence of processed payment amounts, the MACs use different methods for calculating prepayment savings. According to the MACs: Two MACs use the amount that providers bill to Medicare to calculate savings from prepayment claim denials.
However, the amount that providers bill to Medicare is often significantly higher than and not necessarily related to how much Medicare pays for particular services. One MAC estimated that billed amounts can be, on average, three to four times higher than allowable amounts. Accordingly, calculated savings based on provider billed amounts can greatly inflate the estimated amount that Medicare saves from claim denials. Nine MACs calculate prepayment savings by using the Medicare “allowed amount.” The allowed amount is the total amount that providers are paid for claims for particular services, though it is generally marginally higher than the amount that Medicare pays, as it includes the amount Medicare pays, cost sharing that beneficiaries are responsible for paying, and amounts that third parties are responsible for paying. Additionally, the allowed amounts may not account for Medicare payment policies that may reduce provider payments, such as bundled payments. Five MACs compare denied claims with similar claims that were paid to estimate what Medicare would have paid. CMS has not provided the MACs with documented guidance or other instructions for how to calculate savings from prepayment reviews. Federal internal control standards provide that an agency must document guidance that has a significant impact on the agency’s ability to achieve its goals. In reviewing MAC claim review program documentation, including the Medicare Program Integrity Manual and MAC contract statements of work, we were unable to identify any instructions on how the MACs should calculate savings from prepayment claim denials. Further, several MACs we interviewed indicated that they have not been provided guidance for calculating savings from prepayment denials.
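The three calculation methods described above can be contrasted with a small sketch. The dollar figures are hypothetical assumptions chosen for illustration (with a 4x billed-to-allowed markup, consistent with one MAC's three-to-four-times estimate); this is not CMS data.

```python
# Illustrative comparison (not CMS data) of the three MAC approaches to
# estimating savings from a single prepayment claim denial, as described
# above. The dollar figures and the 4x billed-to-allowed markup are assumed
# for illustration only.

def savings_billed_amount(billed: float) -> float:
    # Two MACs: count the full provider-billed amount as savings.
    return billed

def savings_allowed_amount(allowed: float) -> float:
    # Nine MACs: count the Medicare allowed amount, which also includes
    # beneficiary cost sharing and third-party liability amounts.
    return allowed

def savings_comparable_claims(estimated_paid: float) -> float:
    # Five MACs: estimate what Medicare would actually have paid by
    # comparing the denied claim with similar claims that were paid.
    return estimated_paid

# Hypothetical denied claim: $400 billed, $110 allowed, ~$100 Medicare share.
estimates = {
    "billed": savings_billed_amount(400.0),
    "allowed": savings_allowed_amount(110.0),
    "comparable": savings_comparable_claims(100.0),
}
# The same denial yields a 4x spread in reported "savings" depending on the
# method, which is why the data are not comparable across MACs.
assert estimates["billed"] == 4 * estimates["comparable"]
```

The spread in these illustrative estimates is the reason documented guidance matters: without a single prescribed method, identical denials produce incomparable savings figures across MACs.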
CMS officials told us that they were under the impression that all of the MACs were reporting prepayment savings data based on the amount that providers bill to Medicare, which can significantly overestimate the amount that Medicare saves from prepayment claim denials. Because CMS has not provided documented guidance on how to calculate savings from prepayment claim review, the agency lacks consistent and reliable information on the performance of MAC claim reviews. In particular, CMS does not have reliable information on the extent to which MAC claim reviews protect Medicare funds or on how the MACs’ performance compares to other contractors conducting similar activities. CMS contracts with claim review contractors that use varying degrees of prepayment and postpayment reviews to identify improper payments and protect the integrity of the Medicare program. Though we found few differences in how contractors conduct and how providers respond to the two review types, prepayment reviews are generally more cost-effective because they prevent improper payments and limit the need to recover overpayments through the “pay and chase” process, which requires administrative resources and is not always successful. Although CMS considered the Prepayment Review Demonstration a success, and having the RAs conduct prepayment reviews would align with CMS’s strategy to pay claims properly the first time, the agency has not requested legislative authority to allow the RAs to do so. Accordingly, CMS may be missing an opportunity to better protect Medicare funds and agency resources. Inconsistent with federal internal control standards, CMS has not provided the MACs with documented guidance or other instructions for how to calculate savings from prepayment reviews. As a result, CMS does not have reliable data on the amount of improper payments identified by the MACs, which limits CMS’s ability to evaluate MAC performance in preventing improper payments. 
CMS uses claim review contractors that have different roles and take different approaches to preventing improper payments. However, the essential task of reviewing claims is similar across the different contractors and, without better data, CMS is not in a position to evaluate the performance and cost effectiveness of these different approaches. We recommend that the Secretary of HHS direct the Acting Administrator of CMS to take the following two actions: In order to better ensure proper Medicare payments and protect Medicare funds, CMS should seek legislative authority to allow the RAs to conduct prepayment claim reviews. In order to ensure that CMS has the information it needs to evaluate MAC effectiveness in preventing improper payments and to evaluate and compare contractor performance across its Medicare claim review program, CMS should provide the MACs with written guidance on how to accurately calculate and report savings from prepayment claim reviews. We provided a copy of a draft of this report to HHS for review and comment. HHS provided written comments, which are reprinted in appendix I. In its comments, HHS disagreed with our first recommendation, but it concurred with our second recommendation. HHS also provided us with technical comments, which we incorporated in the report as appropriate. HHS disagreed with our first recommendation that CMS seek legislative authority to allow the RAs to conduct prepayment claim reviews. HHS noted that other claim review contractors conduct prepayment reviews and CMS has implemented other programs as part of its strategy to move away from the “pay and chase” process of recovering overpayments, such as prior authorization initiatives and enhanced provider enrollment screening. However, we found that prepayment reviews better protect agency funds compared with postpayment reviews, and believe that seeking the authority to allow the RAs to conduct prepayment reviews is consistent with CMS’s strategy. 
HHS concurred with our second recommendation that CMS provide the MACs with written guidance on how to accurately calculate and report savings from prepayment claim reviews. HHS stated that it will develop a uniform method to calculate savings from prepayment claim reviews and issue guidance to the MACs. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, appropriate congressional requesters, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix II. Kathleen M. King, (202) 512-7114, kingk@gao.gov. In addition to the contact named above, Lori Achman, Assistant Director; Michael Erhardt; Krister Friday; Richard Lipinski; Kate Tussey; and Jennifer Whitworth made key contributions to this report.
CMS uses several types of claim review contractors to help reduce improper payments and protect the integrity of the Medicare program. CMS pays its contractors differently—the agency is required by law to pay RAs contingency fees from recovered overpayments, while other contractors are paid based on cost. Questions have been raised about the focus of RA reviews because of the incentives associated with the contingency fees. GAO was asked to examine the review activities of the different Medicare claim review contractors. This report examines (1) differences between prepayment and postpayment reviews and the extent to which contractors use them; (2) the extent to which the claim review contractors focus their reviews on different types of claims; and (3) CMS's cost per review and amount of improper payments identified by the claim review contractors per dollar paid by CMS. GAO reviewed CMS documents; analyzed CMS and contractor claim review and funding data for 2013 and 2014; interviewed CMS officials, claim review contractors, and health care provider organizations; and assessed CMS's oversight against federal internal control standards. The Centers for Medicare & Medicaid Services (CMS) uses different types of contractors to conduct prepayment and postpayment reviews of Medicare fee-for-service claims at high risk for improper payments. Medicare Administrative Contractors (MAC) conduct prepayment and postpayment reviews; Recovery Auditors (RA) generally conduct postpayment reviews; and the Supplemental Medical Review Contractor (SMRC) conducts postpayment reviews as part of studies directed by CMS. CMS, its contractors, and provider organizations identified few significant differences between conducting and responding to prepayment and postpayment reviews. 
Using prepayment reviews to deny improper claims and prevent overpayments is consistent with CMS's goal to pay claims correctly the first time and can better protect Medicare funds because not all overpayments can be collected. In 2013 and 2014, 98 percent of MAC claim reviews were prepayment, and 85 percent of RA claim reviews and 100 percent of SMRC reviews were postpayment. Because CMS is required by law to pay RAs contingency fees from recovered overpayments, the RAs can only conduct prepayment reviews under a demonstration. From 2012 through 2014, CMS conducted a demonstration in which the RAs conducted prepayment reviews and were paid contingency fees based on claim denial amounts. CMS officials considered the demonstration a success. However, CMS has not requested legislation that would allow for RA prepayment reviews by amending existing payment requirements and thus may be missing an opportunity to better protect Medicare funds. The contractors focused their reviews on different types of claims. In 2013 and 2014, the RAs focused their reviews on inpatient claims, which represented about 30 percent of Medicare improper payments. In 2013 and 2014, inpatient claim reviews accounted for 78 and 47 percent, respectively, of all RA claim reviews. Inpatient claims had high average identified improper payment amounts, reflecting the costs of the services. The RAs' focus on inpatient claims was consistent with the financial incentives from their contingency fees, which are based on the amount of identified overpayments, but the focus was not consistent with CMS's expectations that RAs review all claim types. CMS has since taken steps to limit the RAs' focus on inpatient claims and broaden the types of claims being reviewed. The MACs focused their reviews on physician and durable medical equipment claims, the latter of which had the highest rate of improper payments. The focus of the SMRC's claim reviews varied. 
In 2013 and 2014, the RAs had an average cost per review to CMS of $158 and identified $14 in improper payments per dollar paid by CMS to the RAs. The SMRC had an average cost per review of $256 and identified $7 in improper payments per dollar paid by CMS. GAO was unable to determine the cost per review and amount of improper payments identified by the MACs per dollar paid by CMS because of unreliable data on costs and claim review savings. Inconsistent with federal internal control standards, CMS has not provided written guidance on how the MACs should calculate savings from prepayment reviews. Without reliable savings data, CMS does not have the information it needs to evaluate the MACs' performance and cost effectiveness in preventing improper payments, and CMS cannot compare performance across contractors. GAO recommends that CMS (1) request legislation to allow the RAs to conduct prepayment claim reviews, and (2) provide written guidance on calculating savings from prepayment reviews. The Department of Health and Human Services disagreed with the first recommendation, but concurred with the second. GAO continues to believe the first recommendation is valid as discussed in the report.
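The two cost-effectiveness measures cited for the RAs and SMRC reduce to simple ratios. A minimal sketch follows; the total dollar and review volumes are hypothetical, chosen only so the ratios reproduce the RA figures quoted in the text ($158 per review, $14 identified per dollar paid).

```python
# Cost-effectiveness ratios as used in the report's comparison of
# claim review contractors. Totals below are illustrative.

def cost_per_review(total_paid_to_contractor, reviews_conducted):
    # Dollars CMS paid to the contractor per claim review performed.
    return total_paid_to_contractor / reviews_conducted

def identified_per_dollar(improper_payments_identified, total_paid_to_contractor):
    # Improper payments identified per dollar CMS paid the contractor.
    return improper_payments_identified / total_paid_to_contractor

paid = 158_000.0          # hypothetical total paid to the contractor
reviews = 1_000           # hypothetical number of reviews
identified = 14 * paid    # hypothetical improper payments identified

print(round(cost_per_review(paid, reviews)))           # 158
print(round(identified_per_dollar(identified, paid)))  # 14
```

Without reliable MAC funding and savings data, neither ratio can be computed for the MACs, which is the comparability gap the report identifies.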
Driver licenses have become widely accepted identity documents because they generally include features that make them difficult to counterfeit or alter and may contain identifying information such as the licensees’ legal name, photograph, physical description, and signature. Currently, about 188 million drivers are licensed in the United States, and states issue an additional 73 million licenses and identification cards each year. Individuals can apply to obtain licenses at about 3,800 locations across the United States. Authority for designing and administering driver licensing programs, as well as for verifying the identity information of licensees, lies with individual states. Accordingly, driver licensing agencies face the challenge of determining whether the identity documents individuals provide (1) are authentic and contain information that agrees with the issuing agency’s records and (2) actually belong to the person presenting them. To promote uniformity among driver licensing programs, AAMVA provides states with guidance on documents it recommends as acceptable proof of identity, as well as best practices for verifying the documents. Not surprisingly, the SSN is key to any verification process because each SSN is unique to its owner. In February 2002, we reported that 45 states collect SSN information from driver license applicants. Individuals obtain SSNs by applying to SSA and providing evidence of their age, identity, and U.S. citizenship or lawful alien status. As the agency responsible for assigning SSNs and issuing social security cards, SSA provides a service to the states to verify those numbers. SSA provides two methods for driver licensing agencies to verify SSNs: batch and on-line. States use the batch method to submit an aggregate group of SSN requests directly to SSA, and SSA generally responds within 24 to 48 hours. Those states using the on-line method submit individual SSN requests and receive immediate “real time” responses from SSA. 
On-line users transmit and receive information to and from SSA through a network maintained by AAMVA. SSA charges states a fee to cover its costs (basically system processing and personnel) for providing this service. Batch users pay $0.0015 per transaction while on-line users are charged $0.03 per transaction. For fiscal year 2002, the total billings for batch and on-line users were about $39,000 and $193,000, respectively. SSA collects payments directly from the batch users, while it bills and collects payments from the on-line users through AAMVA. SSA followed Privacy Act requirements in deciding what information it would disclose to driver licensing agencies. Under its current disclosure policy, if the SSN, name, and date of birth submitted to SSA by a driver licensing agency match SSA’s records, SSA will verify the match to the state driver licensing agency. If one or more elements do not match, SSA will inform the agency of the nonmatch but will not disclose further information. However, a match only establishes that the information agrees with SSA’s records and is not proof that the individual using the SSN is the person to whom SSA assigned the number. Beyond SSA’s verification service, the federal government plays a role in several other key areas of states’ driver licensing programs. For example, within the Department of Transportation (DOT), the National Highway Traffic Safety Administration (NHTSA) operates the National Driver Register (NDR), a national database containing identity information on 39 million problem drivers that states are required to use when making licensing decisions. Also, to remove unsafe commercial drivers from the highways, the federal government established the Commercial Drivers License Information System (CDLIS), a nationwide database of 11 million records that states must use to exchange information on applicants who may hold commercial licenses in other states or have driving infractions that make them ineligible for licensing.
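Given the per-transaction fees above, a state's annual verification bill is straightforward to estimate. A back-of-envelope sketch follows; the one-million-transaction volume is illustrative, not drawn from SSA billing data.

```python
# Per-transaction fees from the report (fiscal year 2002 fee schedule).
BATCH_FEE = 0.0015   # dollars per batch transaction
ONLINE_FEE = 0.03    # dollars per on-line transaction

def annual_fee(transactions, per_transaction_fee):
    # Estimated annual billing for a given verification volume.
    return transactions * per_transaction_fee

# A state submitting 1 million verifications a year would pay roughly:
print(f"${annual_fee(1_000_000, BATCH_FEE):,.2f}")   # about $1,500 (batch)
print(f"${annual_fee(1_000_000, ONLINE_FEE):,.2f}")  # about $30,000 (on-line)
```

The 20-to-1 fee difference helps explain why some states ran large one-time batch "clean-ups" while reserving the costlier on-line service for real-time needs.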
DOT, the federal agency charged with establishing the CDLIS database, contracts with AAMVA to operate it. The federal government also provides grants to help states improve their highway safety programs. Furthermore, states’ receipt of federal funds for their state child support enforcement programs is contingent on the collection of individuals’ SSNs during the driver licensing process. This provision enables licensing agencies to assist states in locating and obtaining child support payments from noncustodial parents. Twenty-five states have used either the batch or on-line verification method, and the extent to which they regularly use the on-line service varies. States that use the batch method generally do so for a short period and then switch to the on-line process exclusively. Although states’ use of SSA’s on-line service has increased steadily over the last several years, 5 states submitted over 70 percent of all on-line verification requests. Factors such as cost, system performance, and individual state priorities play a role in determining whether states opt to use SSA’s verification service and the frequency with which it is used. As of March 2003, driver licensing agencies in 25 states have used the batch or on-line method to verify SSNs with SSA. States generally use the batch method for a short period, but states are more likely to use the on-line service on a continuous basis. About three-fourths of the states that rely on SSA’s verification service used the on-line method or a combination of the on-line and batch method, while the remaining states used the batch method exclusively. (See fig. 1.) Over the last several years, states estimated submitting over 84 million requests to SSA using the batch method. Similarly, states submitted a total of 13 million requests using the on-line method. Two-thirds of these on-line requests were submitted in the last 2 fiscal years.
SSA officials told us that the batch method offers advantages in circumstances where a real-time verification response is unnecessary. For example, some states have used the batch method to “clean up” SSNs in their existing records and address any discrepancies prior to the license coming due for renewal at a later date. A number of states that have used the batch method in this manner subsequently used the on-line method exclusively. For example, one state that used the batch method in 2001 to verify over 8.3 million existing records has since used the on-line method exclusively. SSA officials noted that only one state currently uses the batch method on a continuous basis to verify SSNs for all of its customers. For states that issue permanent licenses on the spot, the on-line service also offers an advantage, namely, the ability to instantly verify the SSN and other key information submitted by individuals seeking initial licenses, as well as those converting out-of-state licenses. Between fiscal years 1998 and 2002, the number of states participating in SSA’s on-line service grew by about 3 states each year. As shown in figure 2, the volume of on-line verification requests processed by SSA has also increased significantly, from 300,000 in fiscal year 1998 to 5.5 million in fiscal year 2002. Although the volume of on-line requests grew between 1998 and 2002, usage varied significantly among states and within individual states from year to year. As shown in figure 3, 5 states accounted for over 70 percent of the total transactions over a 5-year period, and a single state was responsible for submitting about one-third of the total transactions. In addition, in some states, the use of the on-line service varied from year to year. For example, one state sent in about 250,000 requests in 1 year and about half that number the following year.
Various factors—such as costs, performance problems, and state priorities—may affect states’ decisions about whether or not to use SSA’s verification service. The nonverifying states we contacted frequently cited cost as a reason why they did not use SSA’s verification service. In addition to the per-transaction fees that SSA charges, states may incur additional costs to set up and use SSA’s service, including the cost for computer programming, equipment, staffing, training, and so forth. State estimates associated with establishing an on-line SSN verification process with SSA varied considerably based on factors such as the system modifications they planned to make. For example, one state we contacted estimated that it would cost approximately $770,000 to implement the on-line service. Another state estimated that using the on-line service would have a start-up cost of about $230,000. Many nonverifying states we contacted expressed a reluctance to use SSA’s verification service based on performance problems they had heard were encountered by other states. Some states cited concerns about frequent outages and slowness of the on-line system. Other states mentioned that the extra time to verify and resolve SSN problems could increase customer waiting times because a driver license would not be issued until verification was complete. States’ decisions about whether to use SSA’s service, or the extent to which to use it, are also driven by internal policies, priorities, and other concerns. For example, some of the states we visited have policies requiring their driving licensing agencies to verify all customers’ SSNs. Officials in one of these states acknowledged that the growing prevalence of identity theft and the events of September 11, 2001, directly affected their decision to begin using SSA’s service.
Conversely, another state we visited that had submitted only 51 transactions over a 3-year period told us that it was delaying full use of SSA’s service until spring 2003 to coincide with the roll-out of its new driver-license issuance system. Finally, we found that states may limit their use of the on-line method to certain targeted populations. For example, one state reported that its policy was to use the on-line method only if fraud was suspected, while another used the service only for initial licenses and out-of-state conversions, but not for renewals of in-state licenses. Weaknesses in the design and management of SSA’s on-line verification service have contributed to capacity and performance problems and ultimately limited its usefulness. SSA recently took steps to increase system capacity and to give more management attention to the service; however, problems remain. In designing the system, SSA used an available infrastructure and encountered capacity problems early on. Although the problems worsened after the pilot phase, SSA did not monitor or modify the system to improve its performance. Beyond system design problems, SSA’s day-to-day management of the service has also been problematic. This lack of management attention to the service is evidenced by the fact that SSA has failed to bill and collect more than $370,000 from AAMVA in a timely fashion over the last several years. SSA officials have taken some steps to address system capacity problems, but the agency still lacks key performance goals for the on-line service. Despite an increased focus on daily management and oversight of the service, SSA still has not addressed other problem areas such as a high nonmatch rate or states’ vulnerability to fraud associated with individuals who use the SSNs of deceased individuals to obtain licenses. These issues may affect states’ willingness to use the service and expose them to a higher risk of fraud.
Weaknesses in the design and management of SSA’s on-line system have contributed to capacity and performance problems. In designing the system, SSA connected its server to AAMVA’s network, to which driver licensing agencies across the country were linked. SSA connected the two systems using a low-speed data communication line. In 1997, SSA piloted the on-line service with three states participating. A joint SSA and AAMVA evaluation of the pilot estimated that the on-line service could verify 43,200 requests in a 12-hour period or 12.5 million per year. It was also estimated that states would submit 7.7 million requests in 1998. While the system experienced some problems during the pilot—such as slow response times and outages—SSA expressed confidence that its system would be sufficient to handle all requests. SSA acknowledged that only limited capacity testing was done. However, SSA planned to monitor the system’s performance as needed to ensure it could meet states’ needs. Following the pilot phase, problems worsened as more states began using SSA’s service. AAMVA’s data show that in 1999 the system experienced an average of three major outages per month, increasing to an average of five per month in 2000. More recent AAMVA data showed that from August 2002 through March 2003, outages continued to occur frequently and lasted from about 30 minutes to as long as 1 day. Such outages can affect customer service because employees in one state told us that when the service is down, they cannot process customers’ transactions. However, because SSA did not collect or monitor performance data on response times and outages, SSA did not know the magnitude or specifics of the problem. The capacity problems inherent in the design of the on-line system have affected states’ use of SSA’s verification service. 
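The pilot's capacity estimate works out to a sustained rate of one verification per second, and annualizing it to 12.5 million requests implies roughly 289 operating days per year. The arithmetic can be checked directly; this quick sketch is ours, not part of the SSA/AAMVA evaluation.

```python
# Checking the arithmetic behind the pilot capacity estimate:
# 43,200 requests in a 12-hour processing day.
requests_per_day = 43_200
seconds_per_day = 12 * 60 * 60            # 43,200 seconds in 12 hours

rate = requests_per_day / seconds_per_day
print(rate)                               # 1.0 -- one request per second

# Annualizing to the 12.5 million estimate implies ~289 operating days.
operating_days = 12_500_000 / requests_per_day
print(round(operating_days))              # 289
```

With states projected to submit 7.7 million requests in 1998 against a 12.5 million ceiling, the system had little headroom for growth, consistent with the capacity problems that followed the pilot.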
Officials in one state told us that they have been forced to scale back their use of the system because they were told by SSA that the volume of transactions was overloading the system. In addition, AAMVA representatives told us that because of concerns about performance and reliability, they have not allowed new states to use the service since the summer of 2002. At the time of our review, 10 states had signed agreements with SSA and were waiting to use the on-line system, and 17 states had received funds from DOT for the purpose of verifying SSNs with SSA. It is uncertain how many of the 17 states will ultimately opt to use SSA’s on-line service. However, even if they signed agreements with SSA today, they may not be able to use the service until the backlog of waiting states is addressed. In addition to design weaknesses, SSA did not sufficiently focus on the management of its service. In particular, SSA previously lacked a designated person to oversee the day-to-day operations of the service and to coordinate with AAMVA on various management issues. As a result, AAMVA lacked a focal point within SSA to resolve persistent performance problems that arose with the system. AAMVA officials told us they would start by calling SSA’s general help desk, as directed by SSA, but would end up calling several different components within the agency. This situation impeded the timely and effective resolution of problems necessary to meet states’ verification needs. SSA’s lack of management attention to the service is also evidenced by the fact that the agency failed to bill and collect fees from AAMVA in a timely manner over the last several years. Each year SSA is required to reach agreement with AAMVA on the per-transaction cost of its service. However, for several years SSA and AAMVA have not done this. Under the agreement, SSA is also required to send AAMVA a final billing each year based on the number of transactions processed.
SSA billed and collected payments from AAMVA for the first 2 fiscal years—1997 and 1998. However, between fiscal years 1999 and 2002, SSA failed to bill and collect more than $370,000 it calculated as being due from AAMVA. SSA and AAMVA officials have acknowledged problems stemming from the design and management of the on-line service and have made some necessary improvements. For example, according to SSA, in April 2003 the service began using software that AAMVA recently revised to increase the volume of transactions states could submit and receive through AAMVA’s network. About the same time, SSA completed an upgrade of its data communication line and server to enhance its system capacity and response time. SSA officials told us these upgrades should reduce outages and enhance performance. SSA provided us with information showing that in May 2003, 2 states had increased their volume of transmissions and an additional 3 states had begun using the service. SSA plans to add 4 new states that are currently testing the on-line system. AAMVA estimates that 2003 verification requests may increase to 28 million, more than five times the number received in 2002. Despite this projection, however, at the time of our review, SSA still had not established key goals for the level of service it will provide driver licensing agencies. SSA officials told us they are currently monitoring the volume of transactions and response times as new states are added. However, until SSA establishes key goals, the quality and effectiveness of SSA’s on-line service cannot be fully assessed. More recently, SSA also designated a project manager responsible for overseeing the day-to-day operation of its service, as well as an individual responsible for the billing and collection of AAMVA payments. At the time of our review, SSA had collected $330,000 from AAMVA for fiscal years 1999-2002. 
SSA officials told us that they are in the process of updating the cost estimates and payments for fiscal year 2003. Despite SSA’s recent efforts to focus more management attention on its verification service, problems regarding the high nonmatch rate and states’ continued vulnerability to fraud associated with the use of SSNs of deceased individuals by driver license applicants remain. These problems pose a concern for states because of the additional workloads associated with resolving discrepancies between SSA and states’ driver records as well as the potential for identity theft. SSA’s data over the last 5 years show that an average of 11 percent of all transactions submitted by states failed to verify with SSA’s records. Some states have experienced nonmatch rates as high as 30 percent. In fiscal year 2002, about 800,000 records failed verification. Generally, about one-half of these failed because the name submitted with the SSN did not match the name in SSA’s records. Such mismatches may occur, for example, if a person’s SSN record lists a maiden name, but the person is applying for a license under a married name. The states and AAMVA have voiced their concerns to SSA about the need for additional disclosure of information. In a May 2001 letter to one state, SSA’s Acting Deputy Commissioner specified the agency’s disclosure policy for driver licensing agencies and stated that SSA closely scrutinizes requests involving SSN use for purposes not related to the Social Security program. In doing so, SSA has decided to provide its verification service in a limited manner by informing driver licensing agencies which data elements match or do not match. State concerns about the potential workloads associated with resolving nonmatch issues may affect their willingness to fully use SSA’s service. 
Officials in one state told us that a planned start up of the on-line service may be delayed due to concerns about the high nonmatch rate they have experienced using SSA’s batch service. Officials in another state indicated that they have not done a batch clean up of their existing databases because they are unable to devote the additional funding and staff resources to address nonverification issues. SSA officials told us that they are aware of states’ concerns and have recently begun discussions to address disclosure issues with the states. In reviewing SSA’s verification service, we also identified a key weakness in the batch method that exposes states to a higher risk of fraud by allowing them to inadvertently issue licenses to individuals using the SSNs of deceased individuals. Unlike the on-line service, SSA does not match batch requests against its death records. As a result, the batch method will not identify and prevent the issuance of a license in cases where an SSN of a deceased individual is being used. SSA officials told us that they initially developed the batch method several years ago, and they did not design the system to match SSNs against its death files. However, a death match was built into the on-line system. At the time of our review, SSA acknowledged that it had not explicitly informed states about the limitation of the batch service. Our own analysis of 1 month of SSN transactions submitted to SSA by one state using the batch method identified at least 44 cases in which individuals used the SSN, name, and date of birth of persons listed as deceased in SSA’s records to obtain a license or an identification card. We forwarded this information to state investigators who quickly confirmed that licenses or identification cards had been issued in 41 cases and were continuing to investigate the others. 
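The gap between the two verification paths described above can be sketched as follows: both match the SSN, name, and date of birth against SSA's master records, but only the on-line path also screens against the death records. The record structures, function names, and sample data here are illustrative only and do not reflect SSA's actual systems.

```python
# Illustrative sketch of the batch vs. on-line verification gap described
# in the report. Data structures are hypothetical, not SSA's design.

MASTER_RECORDS = {"123-45-6789": ("DOE, JOHN", "1950-01-01")}
DEATH_RECORDS = {"123-45-6789"}   # this SSN appears in the death file

def verify(ssn, name, dob, check_death_file):
    # Step 1 (both methods): SSN, name, and date of birth must all match.
    if MASTER_RECORDS.get(ssn) != (name, dob):
        return "NONMATCH"
    # Step 2 (on-line only): screen against death records.
    if check_death_file and ssn in DEATH_RECORDS:
        return "DECEASED"
    return "VERIFIED"

# On-line path catches the deceased person's SSN; batch path does not.
print(verify("123-45-6789", "DOE, JOHN", "1950-01-01", check_death_file=True))   # DECEASED
print(verify("123-45-6789", "DOE, JOHN", "1950-01-01", check_death_file=False))  # VERIFIED
```

The second result is the vulnerability at issue: a batch request using a deceased person's identity "verifies" successfully, so the state has no signal to stop the license issuance.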
To further assess states’ vulnerability in this area, our own investigators, working in an undercover capacity, were able to obtain licenses in two batch states using a counterfeit out-of-state license and other fraudulent documents and the SSNs of deceased persons. In both states, driver licensing employees accepted the documents we submitted as valid. Our investigators completed the transactions in one state and left with the new valid license. In the second state, the new permanent license arrived by mail within weeks. The ease with which they were able to obtain these licenses confirmed states’ vulnerability to accepting fraudulent documents, and for those states that use SSA’s batch process, to issuing licenses to individuals using SSNs of deceased individuals. SSA officials have told us that the agency has not made a decision about whether the current batch system will be modified to include a death match. Our field work shows that licensing officials in states that use or have used the batch process were often unaware that SSA did not match SSNs against its death records. As a result, these states lacked information that they could have used to make more informed decisions in choosing either the batch or on-line method or to seek alternative strategies to avoid issuing licenses to individuals using SSNs of deceased persons. Moreover, states that have used the batch method in prior years to clean up their records and to verify the SSNs of millions of driver license holders may also have unwittingly left themselves open to identity theft and fraud. States may use tools beyond visual inspection to verify documents, but lack the ability to systematically exchange identity information on all drivers with other states.
Although driver licensing agencies rely primarily on visual inspection of documents to verify applicants’ identity information, states may employ more extensive measures, such as using independent sources to corroborate applicants’ identity information. Despite these extra measures, states remain vulnerable to identity fraud because they lack a systematic means to exchange information on all drivers. As a result, states may unknowingly accept false out-of-state licenses as valid identity documents or license individuals who use the identity information of others. In the states we visited, driver licensing agencies rely primarily on visual inspection to determine the authenticity of documents provided by applicants. As proof of identity, applicants must present one or more state-approved documents that are generally inspected by staff. Applicants may present a variety of documents, such as a social security card, a U.S. birth certificate, a driver license from another state, or a passport. For noncitizen applicants, staff also review a myriad of passports and U.S. immigration documents. In reviewing identity documents, staff look for security features, such as watermarks and raised seals, that are difficult to counterfeit and are designed to reveal evidence of tampering. They also inspect documents for other indications of authenticity, such as signs of appropriate aging. If employees are unsure whether a particular document is authentic or actually belongs to the applicant, they may use interviewing techniques to ensure that the individual can corroborate key information. In the states we visited, staff responsible for processing driver license applications generally received some training and basic assistance to support the visual inspection. For example, all of the states provided training to help employees distinguish between authentic and fraudulent documents. 
This training generally occurred once or twice a year and was sometimes presented as part of a larger training module covering other agency policies and procedures. In addition to training, office managers and supervisors with more experience in detecting false documents were available on site to help with the visual inspection if needed. In several states, supervisors and office managers told us that they have directly contacted issuing agencies to determine whether documents, such as birth certificates, were valid. However, this was not routinely done because it can be a time-consuming and labor-intensive process. Nearly every state we visited provided staff with some basic tools to help with the visual inspection, such as reference manuals describing the security features included in various state and federal government-issued identity documents. Other tools, such as black lights and magnifying glasses, were also commonly available to help staff view the security features embedded in certain documents. However, we found that the extent to which staff actually used these tools varied. Despite the training and other measures to aid visual inspection, these approaches are often not enough for employees to make a definitive determination of a document’s authenticity. Staff and managers we interviewed frequently expressed concern that the variety of valid state birth certificates, social security cards, out-of-state licenses, and immigration documents made it extremely difficult to catch forged documents unless they were obvious fakes. They also frequently expressed a need for better access to automated means of verifying these documents. Because of the vulnerabilities associated with the visual inspection of documents, states employ more extensive safeguards to better deter and detect identity theft and fraud. 
These safeguards include seeking out independent third-party data sources to corroborate identity information and documents provided by driver license applicants, using computer systems to strengthen the integrity of the licensing process, and using other innovative tools to better verify applicants’ identity information and deter fraud. At the time of our review, a number of states we visited were either using or pursuing the use of other tools to electronically verify identity information with issuing agencies and other independent third parties. Officials in several states we visited told us that they wanted access to Department of Homeland Security (DHS) immigration information to verify the identity documents of noncitizen applicants. Further, a state with a large immigrant and noncitizen population had contracted with DHS to routinely authenticate immigration documents and other information relevant to a person’s citizenship and immigration status. A second state was in the process of negotiating access to these records. Statewide birth and death information was also viewed by state administrators as key to the identity verification process. Accordingly, several of the states we visited have periodically used electronic queries or data matches to access birth or death records. Three of the nine states we visited were pilot-testing or considering the use of private vendors to strengthen their identity verification and fraud detection procedures. These private vendors typically access various information sources, including civil and criminal records, credit information, address information, state driver records, and state birth and death data, to help driver licensing agencies corroborate information provided by applicants and correctly issue licenses. At the time of our review, one state was pilot-testing on-line access to a private vendor in a limited number of sites. 
AAMVA officials did not have national data on the extent to which other states are using innovative third-party verification tools to strengthen the integrity of their licensing procedures. However, they generally noted that such practices are not routinely used to supplement states’ primary practice of visually inspecting documents. Several states we visited made extensive use of computer systems to prevent identity theft and fraud. Several states have computer systems capable of screening for multiple individuals in their state with the same or similar identity information. For example, one state’s computer system automatically cross-matches first-time applicants’ personal information against existing driver records in the database to search for such situations. When states do not have the capability to routinely perform such cross-matches, employees may inadvertently issue licenses to individuals who may be using the identity information of someone the state has previously licensed. Some states’ computer systems are designed to prevent the issuance of a license in certain high-risk situations. For example, one state’s system terminates the processing of a transaction if identity information fails to verify with SSA, or if staff attempt to bypass this verification step. Staff are also prevented from overriding the system and issuing the license unless an authorized person—generally a higher-level official—intervenes. Similarly, some states had systems that could prevent issuance of a license if an individual’s personal information already existed in the state’s driver records or if DHS information failed to verify. Further, in cases where fraud is suspected, most states’ systems—although not all—are capable of flagging the transaction and automatically transmitting this information to other offices within the state to prevent persons from “shopping” other sites after being denied at the first location. 
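The cross-match described above is, in essence, a grouping operation over existing driver records. The following Python sketch is purely illustrative; the record fields and the behavior of any state's actual system are assumptions, not details drawn from this report:

```python
from collections import defaultdict

def flag_shared_identities(driver_records):
    """Flag any SSN that appears in driver records under more than one
    distinct name/date-of-birth combination, a possible sign that
    multiple people are using the same identity information.
    (Record fields "ssn", "name", and "dob" are hypothetical.)"""
    identities_by_ssn = defaultdict(set)
    for rec in driver_records:
        identities_by_ssn[rec["ssn"]].add((rec["name"], rec["dob"]))
    # An SSN tied to two or more distinct identities is suspect.
    return {ssn: ids for ssn, ids in identities_by_ssn.items()
            if len(ids) > 1}
```

A first-time application could be screened the same way by adding the applicant's information to the record set before running the check.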
Officials in one state that lacked this protection told us that in cases of suspected fraud, staff relied on manual processes, such as telephone calls and e-mails, to alert other offices about suspicious individuals and false documents. Finally, to varying degrees, the states we visited have instituted additional controls to better address identity theft and fraud. Due to concerns about the quality and integrity of other states’ licensing systems, three states prohibit or limit the acceptance of out-of-state licenses as a sole or primary identity document. Officials from another state told us that they would not accept such documents from 20 states that they have determined to have less stringent verification processes. A few other states have also instituted policies requiring that two employees review or sign off on the authenticity of documents provided by applicants before a license can be issued. This separation of responsibilities provides for additional scrutiny of documents and may act as a further check against employee fraud. Another common practice among several states was to copy all identity documents if fraud was suspected during the application process. This provides the licensing agency with key information for investigating the individual’s alleged identity. An official in one state told us that staff are trained to collect and copy identity documents up front, regardless of whether fraud is suspected at the time. All nine states we visited also store and transmit information such as digital photographs and signatures for verification purposes. Two states also captured fingerprints at the time of application, but only one of them used biometric technology to electronically verify this identity information for individuals renewing licenses. Another safeguard, used by two states, is the issuance of temporary licenses when identity information has not been corroborated at the time of application. 
Such licenses lack photographs and security features common to permanent licenses or clearly state that they are not valid for identity verification purposes. However, a third state’s temporary license looks the same as, and includes identical information to, its permanent license. As a result, this license could continue to be presented as an identity document even if the circumstances under which it was issued are ultimately determined to be fraudulent. Despite the additional safeguards taken by some states, licensing agencies lack a systematic means to exchange information on all drivers nationwide, limiting their ability to deter identity theft and fraud. Currently, states have automated access to, and are required to use, the NDR, a DOT database of 39 million problem drivers. With this system, licensing agencies can simultaneously query all 50 states to determine whether an applicant’s name appears in the database. For commercial drivers, states obtain information on licensing, identification, and disqualification from the CDLIS database of 11 million records. States are required to input driver information into CDLIS and to use the system to verify commercial driver record information during the licensing process. Because the NDR and CDLIS target specific driver populations and do not include the records and identity information of the approximately 188 million drivers operating in the United States, state driver licensing agencies lack a single inquiry process to determine whether a person has ever been issued a license. Numerous officials in the states we visited told us that a more efficient means of electronic interstate communication, including the electronic transfer of identity information such as digital photographs and signatures, would improve the integrity of their licensing process. 
Officials in the states we visited were particularly concerned about individuals using licenses issued by other states as identity documents and about their inability to quickly query all states’ databases to corroborate key information. As a result, states are limited in their ability to determine whether other states’ identity documents are authentic or to identify multiple individuals using the same personal identifying information in other states. Our analysis of one state’s data demonstrates the potential vulnerabilities driver licensing agencies currently face when accepting out-of-state licenses as proof of identity. We examined data from one state’s internal cross-match of its existing driving records and identified numerous instances where the same out-of-state license number had been used by multiple individuals with different names and dates of birth to apply for and obtain a new license. We forwarded about 100 of these license numbers to the alleged issuing states and asked them to provide us with key information on the owner of record. We found 96 cases of potential identity fraud involving 52 of the driver license numbers. For example, states reported some license numbers as invalid or as being issued to someone other than the persons who had used them. One state reported back that the license number we submitted to it was actually a zip code, rather than a genuine state-issued license number. Another license was reported by the issuing state to be a valid number that had been counterfeited and used in several states. A July 2001 report to the Congress, prepared by DOT in cooperation with AAMVA, identified alternatives for improving state data exchanges and discussed various options for change. Because the NDR and CDLIS are specialized systems, they do not allow states to verify licenses for all drivers, a capability that could help identify potential identity fraud. 
The report concluded that an alternative system encompassing all driver records could operate efficiently using existing programs developed for CDLIS and on hardware that is currently in use. However, it also concluded that before such a system could be developed, several potential obstacles would have to be addressed. These include agreeing on a unique identifier by which to query all state driving records, ensuring that all states participate, defining the role of the federal government, and funding the costs of developing and converting to an all-driver system. The report also acknowledged that state resources would be needed to cover projected development and implementation costs, which AAMVA has estimated at about $78 million over 3 years. Once the system was operational, however, user fees similar to those imposed for CDLIS could be levied by states to cover operational expenses. The driver license is a key identity document that individuals can use to obtain a range of public and private services nationwide. Accordingly, state driver license agencies face a daunting task in ensuring that the identity information of those to whom they issue licenses is verified. However, states’ effectiveness in this area often depends on several factors, including the receipt of timely and accurate identity information from SSA, the extent to which they implement additional identity verification and fraud detection tools, and their ability to quickly and systematically share key driving record information with other state licensing systems. Deficiencies in any of these areas may weaken states’ efforts to ensure the integrity of their licensing decisions. Unfortunately, design and management weaknesses associated with SSA’s verification service have limited its effectiveness. 
States that are unable to take full advantage of the service, and others that are waiting for the opportunity to use it, remain vulnerable to identity theft and fraud. SSA’s recent efforts to refocus management attention on improving its service represent a positive step and may be key to moving more state licensing agencies away from processes that rely heavily on fraud-prone visual inspections of identity documents, toward one in which information such as an individual’s SSN, name, and date of birth can be quickly and independently corroborated. However, sustained attention to improving the service is needed. Furthermore, states that continue to rely primarily or partly on SSA’s batch verification service still risk issuing licenses to individuals using the SSNs and other identity information of deceased persons. This remains a critical flaw in SSA’s service and in states’ efforts to strengthen the integrity of the driver license. Since September 11, 2001, more state driver licensing agencies have begun to reassess their prior view that driver licenses are simply an authorization to operate a motor vehicle and have taken aggressive actions to strengthen the integrity of this important identity document. However, licensing programs remain state-administered and may vary considerably in the tools provided to front-line staff to verify identity information, such as access to automated independent third-party data sources. This has potentially serious consequences for the numerous public and private sector service providers who rely on the driver license as an identity document but may be unaware that not all states’ licenses are equal in terms of the integrity of the identifying information included on them. Beyond the actions taken by individual states, coordination and data sharing are key to addressing many of the factors that allow identity theft and fraud to continue in the driver licensing process. 
No single state has overarching authority to require information sharing nationwide, define minimum standards for proof of identity, or mandate the development of a systematic means for interstate communication. However, cooperative efforts between the federal government, the states, and AAMVA have identified and facilitated technological options for improving the exchange of driver record data among all states. We recognize that potential barriers related to system design, funding, privacy rights, and states’ willingness to use such a tool have yet to be fully resolved. However, given the potential economic and national security implications associated with identity theft at the point of driver licensing, sustained leadership at the federal level could be the catalyst for needed change. In light of the homeland security implications associated with states’ inability to systematically exchange driver license identity information and the need for sustained leadership in this area, the Congress, in partnership with the states, should consider authorizing the development of a national data sharing system for driver records. Considering the significant increase in the number of on-line requests that SSA anticipates receiving from states, as well as the weaknesses we identified in SSA’s service that may increase states’ vulnerability to identity fraud, we recommend that the Commissioner of Social Security take the following actions:
- Develop performance measures essential to assessing the quality of the service provided.
- Develop a strategy for improving the nonmatch rate for SSA’s verification service. This should include identifying additional information SSA can reasonably and legally disclose to state driver licensing agencies, as well as actions states can take to prevent nonmatches.
- Modify SSA’s batch verification method to include a match against its nationwide death records.
We obtained written comments on a draft of this report from the Commissioner of SSA. 
SSA’s comments are reproduced in appendix II. SSA also provided additional technical comments, which we incorporated in the report as appropriate. We also requested that AAMVA officials review the technical accuracy of our discussion of AAMVA’s role in the SSN verification process, as well as our characterization of states’ identity verification and fraud prevention activities. We incorporated AAMVA’s comments in the report as appropriate. SSA generally agreed with our findings regarding its SSN verification service and said that recent improvements have increased states’ use of the service. The agency noted that it is continuing to investigate the sequence of events surrounding our ability to obtain driver licenses with counterfeit documents and the SSNs of deceased individuals. SSA also said that its service only offers confirmation that SSNs and other identity information provided by driver license applicants are consistent with its records and should not be perceived as a means for verifying identity. In addition, SSA said that any attempts to reduce the nonmatch rate for its service by relaxing the match criteria would be inconsistent with the need for “tighter match requirements” and increased security in the post-9/11 era. We agree that SSA’s service does not allow states to definitively determine the identity of driver license applicants and have made small changes to ensure that our report will not be misinterpreted. However, we continue to believe that the verification service, in combination with other verification tools used by the states, is key to corroborating the identity information presented by driver license applicants. We also are not suggesting that SSA compromise the integrity of its verification service in order to reduce the nonmatch rate. However, our report shows that about half of all verification failures are name mismatches, which are thought to commonly occur because of changes in marital status. 
We continue to believe that opportunities exist for SSA to work with the states to explore options for addressing this issue and to ultimately improve the overall quality of its service. In response to our specific recommendations, SSA disagreed that it should develop measures for assessing the quality of its on-line SSN verification service. Instead, SSA said that it plans to develop a performance baseline for enumeration accuracy to measure whether applicants were entitled to receive an SSN based on supporting documentation. SSA did not believe that developing performance measures specifically for its verification service would result in improved identity authentication. However, we continue to believe that the verification service, in combination with other tools used by the states, is key to corroborating driver license applicants’ identity information. As our report notes, performance concerns and issues often affected the extent to which states used SSA’s verification service, or whether they opted to use the service at all. Thus, some states lacked a key tool for corroborating the identity information of driver license applicants. We continue to believe that SSA should develop measures for its service to monitor and assess systems availability, outages, response times, and other key aspects of performance. Without such measures, SSA lacks a means to identify performance problems and take corrective actions when needed. SSA agreed with our recommendations that it develop a strategy for improving the nonmatch rate for its service and that it modify the batch process to include a match against its death records. However, the agency said that factors such as legal restrictions on the information it may disclose to states and limited systems resources could restrict the actions it can take. We encourage SSA to work within the existing law to develop policies to reduce nonmatches and to better assist states when they occur. 
Also, in view of states’ vulnerability to licensing individuals using deceased persons’ SSN information and the volume of batch verification requests submitted to SSA by the states, we believe immediate action is needed. We are sending copies of this report to the Commissioner of SSA and other interested parties. Copies will also be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-7215. The major contributors to this report are listed in appendix III. This appendix provides additional details about our analysis of the Social Security Administration’s (SSA) verification services and states’ practices for verifying the identity of driver license applicants. To attain our objectives, we obtained and reviewed various reports related to the issue of identity verification from state auditors, SSA’s Office of Inspector General, and the American Association of Motor Vehicle Administrators (AAMVA). We reviewed federal requirements governing social security number (SSN) use in the driver licensing process, SSA’s policies for disclosing identity information to licensing agencies, and numerous verification agreements between SSA and the states. We analyzed nationwide data on states’ use of SSA’s verification service, including the volume of records submitted, trends in usage, and the rate at which SSNs failed to verify from October 1997 through May 2003. We interviewed SSA officials responsible for the SSN verification data about the reliability of the data and determined the data to be sufficiently reliable for our reporting purposes. We telephoned or visited states that were not using SSA’s service to obtain general information about their identity verification practices, as well as their plans for using SSA’s service in the future. 
To obtain more specific information on the design and management of SSA’s batch and on-line verification services, we interviewed key SSA line and management officials, as well as AAMVA officials responsible for co-managing the on-line service. We also reviewed an SSA/AAMVA evaluation of a pilot of the on-line method. To determine batch service states’ vulnerability to individuals who may use deceased persons’ SSNs to obtain a license, we matched approximately 500,000 batch verification requests submitted by one state for the month of December 2002 against SSA’s Master Death file. We identified 44 instances in which SSA verified an SSN submitted by the state that matched an SSN in the death records where the death occurred before December 2002. To determine whether these individuals actually received a license or identity card, we submitted the 44 cases to the state licensing agency for its review. State officials confirmed that licenses or identification cards had been issued in 41 cases and are currently reviewing the remaining cases. Because we selected a judgmental sample of cases to review, our findings are not generalizable to the entire state over time or to any other state. To gain more in-depth information on specific challenges states may encounter in their efforts to verify applicant identity documents, as well as their policies and procedures for doing so, we conducted field work in California, Florida, Georgia, Maine, Maryland, Massachusetts, Ohio, Pennsylvania, and Tennessee. At these locations, we interviewed key management and line staff and obtained data and documents related to their verification processes and tools. We selected states that were geographically dispersed to obtain a mix that (1) did, and did not, issue temporary licenses before issuing permanent licenses, and (2) have, and have not, used one or both of SSA’s verification services. 
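The death-file match described above amounts to checking each verified batch request against a master list of death records. The following Python sketch only illustrates the idea; SSA's actual batch file formats are not public, so the column names and CSV layout here are assumptions:

```python
import csv

def find_deceased_matches(batch_requests_path, death_master_path):
    """Return batch verification requests whose SSN, last name, and
    date of birth match a record in a death master file.
    (File layouts and field names are hypothetical.)"""
    # Build a lookup keyed on SSN from the death master file.
    deceased = {}
    with open(death_master_path, newline="") as f:
        for row in csv.DictReader(f):
            deceased[row["ssn"]] = row
    matches = []
    with open(batch_requests_path, newline="") as f:
        for req in csv.DictReader(f):
            rec = deceased.get(req["ssn"])
            # Require name and date of birth to agree as well, so a
            # simple typo in an SSN is not flagged as potential fraud.
            if rec and rec["last_name"] == req["last_name"] \
                   and rec["dob"] == req["dob"]:
                matches.append(req)
    return matches
```

The on-line service builds this check into request processing; the point of the sketch is that the same check can be run over a batch file after the fact, which is essentially what the analysis of the December 2002 transactions did.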
We also chose some states that had large immigrant populations or were identified as using innovative practices to verify identity. We also interviewed and obtained information from representatives of private businesses that offer commercial services to assist driver licensing agencies in verifying identity information. Finally, to assess states’ vulnerability to accepting fraudulent out-of-state driver licenses as identity documents, we used one state’s listing of numerous instances where the same out-of-state license number was used multiple times to obtain a license in another state. We selected about 100 cases where the name and date of birth of the individual were clearly different from one record to the next and submitted them to the original issuing states. We obtained information from those states identifying the name and date of birth of the owner of each driver license to determine whether there was possible identity fraud. We conducted internal reliability checks on the data received from state driver licensing agencies. Because we selected a judgmental sample of cases to review, our findings are not generalizable. We conducted our work from July 2002 through May 2003 in accordance with generally accepted government auditing standards. In addition to those named above, the following team members contributed to this report throughout all aspects of its development: Raun Lazier, Caterina Pisciotta, and Dorothy Yee. In addition, Daniel Schwimer, Mary Dorsey, Shana Wallace, Raymond Wessmiller, and Corrina Nicolaou made contributions. Social Security Numbers: Ensuring the Integrity of the SSN. GAO-03-941T. Washington, D.C.: July 10, 2003. Social Security Numbers: Government Benefits from SSN Use but Could Provide Better Safeguards. GAO-02-352. Washington, D.C.: May 31, 2002. Social Security Numbers: SSNs Are Widely Used by Government and Could Be Better Protected. GAO-02-691T. Washington, D.C.: April 29, 2002. 
Child Support Enforcement: Most States Collect Drivers’ SSNs and Use Them to Enforce Child Support. GAO-02-239. Washington, D.C.: February 15, 2002. Responses to Questions From May 18th Hearing on Uses of Social Security Numbers. HEHS/AIMD-00-289R. Washington, D.C.: August 21, 2000. Social Security Numbers: Subcommittee Questions Concerning the Use of the Number for Purposes Not Related to Social Security. HEHS/AIMD-00-253R. Washington, D.C.: July 7, 2000. Social Security: Government and Other Uses of the Social Security Number are Widespread. GAO/T-HEHS-00-120. Washington, D.C.: May 18, 2000. Social Security: Use of the Social Security Number is Widespread. GAO/T-HEHS-00-111. Washington, D.C.: May 9, 2000. Social Security: Government and Commercial Use of the Social Security Number Is Widespread. GAO/HEHS-99-28. Washington, D.C.: February 16, 1999.
Since September 11, 2001, more attention has been focused on the importance of identifying people who use false identity information or documents to obtain a driver license. The Social Security Administration (SSA) offers states a service to verify social security numbers (SSNs) collected during the driver licensing process. This report examines states' use of SSA's verification service, factors that may affect the usefulness of the service, and other tools states use or need to verify identity. GAO found that 25 states have used either one or both of the methods SSA offers for requesting SSN verification. Over the last several years, states used the batch and on-line method to submit over 84 million and 13 million requests, respectively. Although on-line use has been increasing, usage varied significantly among states, with 5 out of 18 states submitting over 70 percent of all requests. States decide to use SSA's service based on various factors, such as costs and state priorities. Weaknesses in SSA's design and management of its SSN verification service have contributed to capacity and performance problems and limited its usefulness. While SSA recently increased systems capacity and reduced outages, problems remain. For example, the level of service cannot be assessed because SSA has not established key performance measures. States are concerned that the high verification failure rate adds to their workloads. Several states noted that some of the failures could be prevented if SSA disclosed more information to states. States using the batch method are vulnerable to licensing individuals using SSNs of deceased persons because SSA does not match requests against its death files. In fact, GAO obtained licenses using fraudulent documents and deceased persons' SSNs in 2 states. Driver licensing agencies rely primarily on visual inspection of documents such as birth certificates, driver licenses, and U.S. immigration documents to verify applicants' identity. 
While states may use safeguards beyond visual inspection to verify documents, they lack the ability to systematically exchange identity information on all drivers with other states. Without a means to readily share all driver records, states face a greater risk for identity theft and fraud in the driver licensing process. A recent Department of Transportation report to Congress identified options that would provide states a system for exchanging records on all drivers and could help mitigate identity fraud.
Cargo containers are an important segment of maritime commerce. Approximately 90 percent of the world’s cargo moves by container. In 2002, approximately 7 million containers arrived at U.S. seaports, carrying more than 95 percent of the nation’s non-North American trade by weight and 75 percent by value. Many experts on terrorism—including those at the Federal Bureau of Investigation and at academic, think tank, and business organizations—have concluded that oceangoing cargo containers are vulnerable to some form of terrorist action. A terrorist incident at a seaport, in addition to killing people and causing physical damage, could have serious economic consequences. In a 2002 simulation of a terrorist attack involving cargo containers, every seaport in the United States was shut down, resulting in a simulated loss of $58 billion in revenue to the U.S. economy, including spoilage, loss of sales, and manufacturing slowdowns and halts in production. CBP is responsible for preventing terrorists and weapons of mass destruction from entering the United States. As part of its responsibility, it has the mission to address the potential threat posed by the movement of oceangoing containers. To perform this mission, CBP has inspectors at the ports of entry into the United States. Inspectors assigned to seaports help determine which containers entering the country will undergo inspections, and then perform physical inspections of such containers. These determinations are based not just on concerns about terrorism, but also on concerns about illegal narcotics and other contraband. The CBP Commissioner said that the large volume of imports and CBP’s limited resources make it impossible to physically inspect all oceangoing containers without disrupting the flow of commerce.
The Commissioner also said it is unrealistic to expect that all containers warrant such inspection because each container poses a different level of risk based on a number of factors including the exporter, the transportation providers, and the importer. These concerns led CBP to implement a layered approach that attempts to focus resources on potentially risky cargo containers while allowing other cargo containers to proceed without disrupting commerce. As part of its layered approach, CBP employs its Automated Targeting System (ATS) computer model to review documentation on all arriving containers and help select or target containers for additional scrutiny. The ATS was originally designed to help identify illegal narcotics in cargo containers, but was modified to help identify all types of illegal contraband used by smugglers or terrorists. In addition, CBP has a program, called the Supply Chain Stratified Examination, which supplements ATS by randomly selecting additional containers to be physically examined. The results of the random inspection program are to be compared with the results of ATS inspections to improve targeting. If CBP officials decide to inspect a particular container, they might first conduct a nonintrusive inspection with equipment such as the Vehicle and Cargo Inspection System (VACIS), which takes a gamma-ray image of the container so inspectors can detect any visual anomalies. With or without VACIS, inspectors can open a container and physically examine its contents. Other components of the layered approach include the Container Security Initiative (CSI) and the Customs-Trade Partnership Against Terrorism (C-TPAT). CSI is an initiative whereby CBP places staff at designated foreign seaports to work with foreign counterparts to identify and inspect high-risk containers for weapons of mass destruction before they are shipped to the United States.
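The logic of supplementing model-based targeting with a random sample, as the Supply Chain Stratified Examination does, can be illustrated with a short Python sketch. The container population, risk scores, and contraband rates below are invented for illustration and do not reflect CBP data or the actual ATS model:

```python
import random

def hit_rate(containers):
    """Fraction of inspected containers found to contain contraband."""
    return sum(c["hit"] for c in containers) / len(containers)

# Hypothetical population: each container carries a model-assigned risk
# score, and (unknown to the targeter) actually contains contraband with
# a probability that rises with that score.
rng = random.Random(12345)
population = []
for _ in range(10_000):
    risk = rng.random()
    population.append({"risk": risk, "hit": rng.random() < 0.2 * risk ** 3})

# Model-driven targeting: inspect the 500 highest-risk containers.
targeted = sorted(population, key=lambda c: c["risk"], reverse=True)[:500]

# Random supplement: inspect 500 containers drawn regardless of risk.
random_sample = rng.sample(population, 500)

# Comparing the two hit rates estimates how much better the model does
# than chance; the random draw also surfaces residual risk among
# containers the model would never have selected.
lift = hit_rate(targeted) / max(hit_rate(random_sample), 1e-9)
```

The comparison also runs the other way: if the random sample's hit rate approaches the targeted rate, that is evidence the model's rules need to be amended.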
C-TPAT is a cooperative program between CBP and members of the international trade community in which private companies agree to improve the security of their supply chains in return for a reduced likelihood that their containers will be inspected. A supply chain consists of all stages involved in fulfilling a customer request, including stages conducted by manufacturers, suppliers, transporters, retailers, and customers. Risk management is a systematic process to analyze the threats, vulnerabilities, and criticality (or relative importance) of assets in a program to better support key decisions linking resources and program results. Risk management is used by many organizations in both government and the private sector. In recent years, we have consistently advocated the use of a risk management approach to help implement and assess responses to various national security and terrorism issues. We have concluded that without a risk management approach that provides insights about the present threat and vulnerabilities as well as the organizational and technical requirements necessary to achieve a program’s goals, there is little assurance that programs to combat terrorism are prioritized and properly focused. Risk management helps to more effectively and efficiently prepare defenses against acts of terrorism and other threats. Key elements of a risk management approach are listed below. Threat assessment: A threat assessment identifies adverse events that can affect an entity, and may be present at the global, national, or local level. Criticality assessment: A criticality assessment identifies and evaluates an entity’s assets or operations based on a variety of factors, including importance of an asset or function. Vulnerability assessment: A vulnerability assessment identifies weaknesses in physical structures, personnel protection systems, processes, or other areas that may be exploited by terrorists. 
Risk assessment: A risk assessment qualitatively and/or quantitatively determines the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk characterization: Risk characterization involves designating risk on a scale, for example, low, medium, or high. Risk characterization forms the basis for deciding which actions are best suited to mitigate risk. Mitigation evaluation: Mitigation evaluation is the identification of mitigating alternatives to assess the effectiveness of the alternatives. The alternatives should be evaluated for their likely effect on the risk and their cost. Mitigation selection: Mitigation selection involves a management decision on which mitigation alternatives should be implemented. Selection among alternatives should be based on preconsidered criteria. Systems approach: An integrated systems approach to risk management encompasses taking action in all organizational areas, including personnel, processes, technology, infrastructure, and governance. Monitoring and evaluation: Monitoring and evaluation is a continuous repetitive assessment process to keep risk management current and relevant. It includes external peer review, testing, and validation. Modeling can be an important part of a risk management approach. To assess modeling practices related to ATS, we interviewed terrorism experts and representatives of the international trade community who were familiar with modeling related to terrorism and/or ATS and reviewed relevant literature. There are at least four recognized modeling practices that are applicable to ATS as a decision support tool. Conducting external peer review: External peer review is a process that includes an assessment of the model by independent and qualified external peers. 
While external peer reviews cannot ensure the success of a model, they can increase the probability of success by improving the technical quality of projects and the credibility of the decision-making process. Incorporating additional types of information: To identify documentary inconsistencies, targeting models need to incorporate various types of information to perform complex “linkage” analyses. Using only one type of information will not be sufficient to yield reliable targeting results. Testing and validating through simulated terrorist events: A model needs to be tested by staging simulated events to validate it as a targeting tool. Simulated events could include “red teams” that devise and deploy tactics in an attempt to define a system’s weaknesses, and “blue teams” that devise ways to mitigate the resulting vulnerabilities identified by the red team. Using random inspections to supplement targeting: A random selection process can help identify and mitigate residual risk (i.e., the risk remaining after the model-generated inspections have been done), but can also help evaluate the performance of the model relative to other approaches. CBP has recognized the potential vulnerability of oceangoing cargo containers and has reviewed and updated some aspects of its layered targeting strategy. According to CBP officials, several of the steps that CBP has taken to improve its targeting strategy have resulted in more focused targeting of cargo containers that may hold weapons of mass destruction. CBP officials told us that, given the urgency to take steps to protect against terrorism after the September 11, 2001, terrorist attacks, they had to take an “implement and amend” approach. That is, they had to immediately implement targeting activities with the knowledge they would have to amend them later. Steps taken by CBP include the following: In November 2001, the U.S. Customs Service established the National Targeting Center to support its targeting initiatives.
Among other things, the National Targeting Center interacts with the intelligence community and manages a national targeting training program for CBP targeters. In August 2002, CBP modified the ATS as an antiterrorism tool by developing terrorism-related targeting rules and implementing them nationally. CBP is now in the process of enhancing the ATS terrorism-related rules. In 2002, CBP also developed a 2-week national training course to train staff in targeting techniques. The course is intended to help ensure that seaport targeters have the necessary knowledge and ability to conduct effective targeting. The course is voluntary and is conducted periodically during the year at the Los Angeles, Long Beach, and Miami ports, and in the future it will also be conducted at the National Targeting Center. In February 2003, CBP began enforcing new regulations about cargo manifests—called the “24-hour rule”—which requires the submission of complete and accurate manifest information 24 hours before a container is loaded on a ship at a foreign port. Penalties for noncompliance can include a CBP order not to load a container on a ship at the port of origin or monetary fines. The rule is intended to improve the quality and the timeliness of manifest information submitted to CBP, which is important because CBP relies extensively on manifest information for targeting. According to CBP officials we contacted, although no formal evaluations have been done, the 24-hour rule is beginning to improve both the quality and timeliness of manifest information. CBP officials acknowledged, however, that although improved, manifest information still is not always accurate or reliable data for targeting purposes.
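At its core, the 24-hour rule reduces to a timestamp comparison, which can be sketched in a few lines of Python. The function name and example times here are hypothetical and not drawn from CBP systems:

```python
from datetime import datetime, timedelta

def meets_24_hour_rule(manifest_submitted: datetime, loading_time: datetime) -> bool:
    """True if complete manifest information was submitted at least
    24 hours before the container is loaded at the foreign port."""
    return loading_time - manifest_submitted >= timedelta(hours=24)

loading = datetime(2003, 2, 10, 12, 0)
# A manifest filed 30 hours before loading complies...
compliant = meets_24_hour_rule(datetime(2003, 2, 9, 6, 0), loading)
# ...while one filed only 10 hours ahead does not, and the container
# could be ordered not to load, or the shipper fined.
late = meets_24_hour_rule(datetime(2003, 2, 10, 2, 0), loading)
```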
While CBP’s targeting strategy incorporates some elements of risk management, our discussions with terrorism experts and our comparison of CBP’s targeting system with recognized risk management practices showed that the strategy does not fully incorporate all key elements of a risk management framework. Elements not fully incorporated are discussed below. CBP has not performed a comprehensive set of assessments for cargo containers. CBP has attempted to assess the threat of cargo containers through contact with governmental and nongovernmental sources. However, it has not assessed the vulnerability of cargo containers to tampering or exploitation throughout the supply chain, nor has it assessed which port assets are the most critical to carrying out its mission—and therefore most in need of protection. These assessments, in addition to threat assessments, are needed to understand and identify actions to mitigate risk. CBP has not conducted a risk characterization for different forms of cargo or the different modes of transportation used to import cargo. Further, CBP has not performed a risk characterization to assess the overall risk of cargo containers. These characterizations would enable CBP to better assess and prioritize the risks posed by oceangoing cargo containers and incorporate mitigation activities in an overall strategy. CBP actions at the ports to mitigate risk are not part of an integrated systems approach. Risk mitigation encompasses taking action in all organizational areas, including personnel, processes, technology, infrastructure, and governance. An integrated approach would help ensure that taking action in one or more areas would not create unintended consequences in another.
For example, taking action in the areas of personnel and technology—adding inspectors and scanning equipment at a port—without at the same time ensuring that the port’s infrastructure is appropriately reconfigured to accept these additions and their potential impact (e.g., more physical examinations of containers), could add to already crowded conditions at that port and ultimately defeat the purpose of the original actions. We recognize that CBP implemented the ATS terrorist targeting rules in August 2002 because of the pressing need to utilize a targeting strategy to protect cargo containers against terrorism, and that CBP intends to amend the strategy as necessary. In doing so, implementing a comprehensive risk management framework would help CBP ensure that information is available to management to make choices about the best use of limited resources. This type of information would help CBP obtain optimal results and would identify potential enhancements that are well conceived, cost-effective, and work in tandem with other system components. Thus, it is important for CBP to amend its targeting strategy within a risk management framework that takes into account all of the system’s components and their vital linkages. In interviews, terrorism experts and representatives from the international trade community who are familiar with CBP’s targeting strategy and/or terrorism modeling told us that ATS is not fully consistent with recognized modeling practices. Challenges exist in each of the four recognized modeling practice areas that these individuals identified: external peer review, incorporating different types of information, testing and validating through simulated events, and using random inspections to supplement targeting. With respect to external review, CBP had limited external consultations when developing the ATS rules related to terrorism.
With respect to the sources and types of information, ATS relies on the manifest as one of its sources of data, and CBP does not mandate the transmission of entry data before a container’s risk level is assigned. Terrorism experts, members of the international trade community, and CBP inspectors at the ports we visited characterized the ship’s manifest as one of the least reliable or useful types of information for targeting purposes. In this regard, one expert cautioned that even if ATS were an otherwise competent targeting model, there is no compensating for poor input data. Accordingly, if the input data are poor, the outputs (i.e., the risk-assessed targets) are not likely to be of high quality. Another problem with manifests is that shippers can revise them up to 60 days after the arrival of the cargo container. These problems with manifest data increase the potential value of additional types of information. With respect to testing and validation, the only two known instances of simulated tests of the targeting system were conducted without CBP’s approval or knowledge by the American Broadcasting Company (ABC) News in 2002 and 2003. In an attempt to simulate a terrorist smuggling highly enriched uranium into the United States, ABC News sealed depleted uranium into a lead-lined pipe that was placed in a suitcase and later put into a cargo container. In both instances, CBP targeted the container that ABC News used to import the uranium, but it did not detect a visual anomaly from the lead-lined pipe using VACIS and therefore did not open the container. With respect to instituting random inspections, CBP has a program to randomly select and examine containers regardless of their risk, titled the Supply Chain Stratified Examination. However, our review disclosed methodological problems with this program. We found a number of deficiencies in CBP’s national system for reporting and analyzing inspection statistics.
While officials at all the ports we visited provided us with inspection data, we observed problems with the available data. In addition, we had to contact ports several times to obtain these data, indicating that basic data on inspections were not readily available. Separately, CBP officials said that they are trying to capture the results of cargo inspections through an enhancement to ATS. These enhancements were not implemented to an extent that we could evaluate their potential effectiveness. CBP does not have an adequate mechanism to test or certify the competence of targeters in its national targeting training program. The targeters taking the training must have a thorough understanding of course contents and their application at the ports. Because the targeters who complete the training are not tested or certified on course materials, CBP has little assurance that the targeters can perform their duties effectively or that they can train others to perform effectively. One of the key components of the CBP targeting and inspection process is the use of nonintrusive inspection equipment. CBP uses nonintrusive inspection equipment, including VACIS gamma-ray imaging technology, to screen selected cargo containers and to help inspectors decide which containers to further examine. A number of factors constrain the use of inspection equipment, including crowded port terminals, mechanical breakdowns, inclement weather conditions, and the safety concerns of longshoremen at some ports. Some of these constraints, such as space limitations and inclement weather conditions, are difficult if not impossible to avoid. According to CBP and union officials we contacted, concern about the safety of VACIS is a constraint to using inspection equipment. Union officials representing longshoremen at some ports expressed concerns about the safety of driving cargo containers through VACIS because it emits gamma rays when taking an image of the inside of the cargo container.
Towing cargo containers through a stationary VACIS unit reportedly takes less time and physical space than moving the VACIS equipment over stationary cargo containers that have been staged for inspection purposes. As a result of these continuing safety concerns, some longshoremen are unwilling to drive containers through VACIS. CBP’s response to these longshoremen’s concerns has been to stage containers away from the dock, arraying containers in rows at port terminals so that the VACIS can be driven over a group of containers for scanning purposes. However, as seaports and port terminals are often crowded, and there is often limited space to expand operations, it can be space-intensive and time-consuming to stage containers. Not all longshoremen’s unions have safety concerns regarding VACIS inspections. For example, at the Port of New York/New Jersey, longshoremen’s concerns over the safety of operating VACIS were addressed after the union contacted a consultant and received assurances about the safety of the equipment. Similar efforts by CBP to convince longshoremen’s unions about the safety of VACIS have not been successful at some of the other ports we visited. One legacy of the September 11, 2001 terrorist attacks is uncertainty. It is unclear if, where, when, and how other attacks might occur and what steps should be taken to best protect national security. In the context of possible smuggling of weapons of mass destruction in cargo containers at our nation’s seaports, it is vital that CBP use its resources to maximize the effectiveness of its targeting strategy to reduce this uncertainty. Without incorporating all elements of a risk management framework and utilizing recognized modeling practices, CBP cannot be sure that its targeting strategy is properly focused and prioritized. In addition, risk management and the use of recognized modeling practices will not ensure security if there are lapses in implementing these practices at the ports. 
Finally, without instituting a national inspection reporting system, testing and certifying CBP officials who receive the targeting training, and resolving the safety concerns of longshoremen’s unions, the targeting system’s effectiveness as a risk management tool may be limited. Our Limited Official Use report contains several recommendations to DHS on how to better incorporate key elements of a risk management framework and recognized modeling practices. Additionally, the report contains recommendations to improve management controls to better implement the targeting strategy at seaports. This concludes my statement. I would now be pleased to answer any questions for the subcommittee. For further information about this testimony, please contact me at (202) 512-8816. Seto Bagdoyan, Stephen L. Caldwell, Kathleen Ebert, Jim Russell, and Brian Sklar also made key contributions to this statement. Additional assistance was provided by David Alexander, Katherine Davis, Scott Farrow, Ann Finley, and Keith Rhodes. To assess whether CBP’s development of its targeting strategy is consistent with recognized risk management and modeling practices, we compiled a risk management framework and a list of recognized modeling practices, drawn from an extensive review of relevant public and private sector work, prior GAO work on risk management, and our interviews with terrorism experts. We selected these individuals based on their involvement with issues related to terrorism, specifically concerning containerized cargo, ATS, and modeling. Several of the individuals we interviewed were referred to us from within the expert community, while others were identified from the public record. We did not assess ATS’s hardware or software, the quality of the threat assessments that CBP has received from the intelligence community, or the appropriateness or risk weighting of its targeting rules.
To assess how well the targeting strategy has been implemented at selected seaports in the country, we visited various CBP facilities and the Miami, Los Angeles-Long Beach, Philadelphia, New York-New Jersey, New Orleans, and Seattle seaports. These seaports were selected based on the number of cargo containers processed and their geographic dispersion. At these locations, we observed targeting and inspection operations; met with CBP management and inspectors to discuss issues related to targeting and the subsequent physical inspection of containers; and reviewed relevant documents, including training and operational manuals, and statistical reports of targeted and inspected containers. We used these statistical reports to determine the type of data available; we did not assess the reliability of the data or use them to make any projections. At the seaports, we also met with representatives of shipping lines, operators of private cargo terminals, the local port authorities, and Coast Guard personnel responsible for the ports’ physical security. We also met with terrorism experts and representatives from the international trade community to obtain a better understanding of the potential threat posed by cargo containers and possible approaches to countering the threat, such as risk management. We conducted our work from January 2003 to February 2004 in accordance with generally accepted government auditing standards. This appendix details the risk management framework that GAO developed in order to assess CBP’s overall targeting strategy. In recent years, GAO has consistently advocated the use of a risk management approach as an iterative analytical tool to help implement and assess responses to various national security and terrorism issues. We have concluded that without a risk management approach, there is little assurance that programs to combat terrorism are prioritized and properly focused.
Risk management principles acknowledge that while risk cannot be eliminated, enhancing protection from known or potential threats can help reduce it. Drawing on this precedent, we compiled a risk management framework—outlined below—to help assess the U.S. government’s response to homeland security and terrorism risk. One way in which the Department of Homeland Security’s U.S. Customs and Border Protection has already begun to manage risk is by developing and implementing the Automated Targeting System to target high-risk oceangoing containerized cargo for inspection. Applied to homeland security and terrorism risk, the framework assumes that the principal classes of risk from terrorism are to (1) the general public; (2) organizational, governmental, and societal infrastructure; (3) cyber and physical infrastructure; and (4) economic sectors/structures. Terrorism risk is framed by and is a function of (1) a strategic intent of inflicting extreme damage and disruption; (2) operational, logistical, and technological capabilities including the ability to obtain and deploy various classes of weapons against targets of least resistance (targets are chosen and prioritized according to their attractiveness or utility, based, in turn, on the potential for economic or human loss, their symbolic value, and name recognition); and (3) rational responses to moves designed to counteract them. This last aspect includes the identification and exploitation of loopholes in the response. A principal example of potential homeland security or terrorism risk is the global supply chain, a complex system of multiple interacting components with interdependent risk, and with the potential for this risk to be transferred from any weak links in the chain. The risk posed to the supply chain at the operational, or tactical, level is manifested, for example, in the movement of oceangoing containerized cargo. 
In terms of the importance of risk management, an entity exists to provide value for its stakeholders in an environment of uncertainty, which is a function of the ability to determine the likelihood of events occurring and quantify the resulting outcomes. As applied to homeland security, “value” is realized as protection (security) provided by the U.S. government against terrorism risk at an acceptable cost (a function of time and money) for the recipients of the valued service (for example, the general public and the business community). This value might, on occasion, be at risk under a worst-case loss scenario, and that risk needs to be managed; thus, risk management can be viewed as an integral part of managing homeland security. In terms of its benefits, risk management enables entities to operate more effectively in environments filled with risks by providing the discipline and structure to address them; risk management is not an end in itself but an important part of an entity’s management process. As such, it is interrelated with, among other things, an entity’s governance, performance management, and internal control. Further, risk management provides the rigor necessary to identify and select among alternative risk responses whose cumulative effect is intended to reduce risk, and the methodologies and techniques for making selection decisions. Also, risk management enables entities to have an enhanced capability to identify potential events, assess risks, and establish integrated responses to reduce “surprises” and related costs and losses. In terms of its limitations, ultimately, risk management cannot eliminate risk and the environment of uncertainty that helps sustain it, but risk management can help reduce risk, with a goal of providing reasonable assurance that an entity’s objectives will be achieved.
Risk management combines elements of science and judgment (the human dimension to conflict), and ultimately relies on a set of estimates about risk that lies in the future, which is inherently uncertain. Accordingly, the results of risk management might be called into question because of, among other things, the potential for human errors in judgment and the potentially poor quality of information driving the risk management process. The framework is a composite of risk management best practices gleaned from our interviews with terrorism and risk-modeling experts and our extensive review of relevant reports on risk management, such as those by GAO, the Congressional Research Service, Booz Allen Hamilton (on contract to the U.S. intelligence community), and the Committee of Sponsoring Organizations of the Treadway Commission (in conjunction with PricewaterhouseCoopers). For purposes of the risk management framework, we used the following definitions: Risk—an event that has a potentially negative impact, and the possibility that such an event will occur and adversely affect an entity’s assets, activities, and operations, as well as the achievement of its mission and strategic objectives. As applied to the homeland security context, risk is most prominently manifested as “catastrophic” or “extreme” events related to terrorism, i.e., those involving more than $1 billion in damage or loss and/or more than 500 casualties. Risk management—a continuous process of managing, through a series of mitigating actions that permeate an entity’s activities, the likelihood of an adverse event happening and having a negative impact. In general, risk is managed as a portfolio, addressing entity-wide risk within the entire scope of activities. Risk management addresses “inherent,” or pre-action, risk (i.e., risk that would exist absent any mitigating action) as well as “residual,” or post-action, risk (i.e., the risk that remains even after mitigating actions have been taken).
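The relationship between inherent and residual risk defined above can be expressed numerically. A minimal sketch, under the simplifying (and assumed, not stated in the framework) premise that each mitigating action removes a fixed fraction of the risk remaining when it is applied:

```python
def residual_risk(inherent, mitigation_effectiveness):
    """Risk remaining after applying each mitigating action in turn.

    inherent: pre-action risk on a 0-1 scale.
    mitigation_effectiveness: list of fractions of remaining risk that
    each successive action removes.
    """
    remaining = inherent
    for eff in mitigation_effectiveness:
        remaining *= (1.0 - eff)
    return remaining

# Inherent risk of 0.8 reduced by two actions that are 50% and 30%
# effective: 0.8 * 0.5 * 0.7 = 0.28 -- reduced, but not eliminated,
# consistent with the principle that risk cannot be eliminated.
r = residual_risk(0.8, [0.5, 0.3])
```

With no mitigating actions, residual risk simply equals inherent risk, which matches the framework's definition of inherent risk as risk absent any mitigating action.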
The risk management framework—which is based on the proposition that a threat to a vulnerable asset results in risk—consists of the following components: Internal (or implementing) environment—the internal environment is the institutional “driver” of risk management, serving as the foundation of all elements of the risk management process. The internal environment includes an entity’s organizational and management structure and processes that provide the framework to plan, execute, control, and monitor an entity’s activities, including risk management. Within the organizational and management structure, an operational unit that is independent of all other operational (business) units is responsible for implementing the entity’s risk management function. This unit is supported by and directly accountable to an entity’s senior management. For its part, senior management (1) defines the entity’s risk tolerance (i.e., how much risk an entity is willing to assume in order to accomplish its mission and related objectives) and (2) establishes the entity’s risk management philosophy and culture (i.e., how an entity’s values and attitudes view risk and how its activities and practices are managed to deal with risk). The operational unit (1) designs and implements the entity’s risk management process and (2) coordinates internal and external evaluation of the process and helps implement any corrective action. Threat (event) assessment—threat is defined as a potential intent to cause harm or damage to an asset (e.g., natural environment, people, man-made infrastructures, and activities and operations). Threat assessments consist of the identification of adverse events that can affect an entity. Threats might be present at the global, national, or local level, and their sources include terrorists and criminal enterprises. Threat information emanates from “open” sources and intelligence (both strategic and tactical).
Intelligence information is characterized as “reported” (or raw) and “finished” (fully fused and analyzed). As applied to homeland security and terrorism risk, and viewed from the perspective of the source of the threat (for example, a terrorist), adverse event scenarios consist of six stages, beginning with intent (the basis of the threat), as shown in table 1. Criticality assessment—criticality is defined as an asset’s relative importance. Criticality assessments identify and evaluate an entity’s assets based on a variety of factors, including the importance of its mission or function, the extent to which people are at risk, or the significance of a structure or system in terms of, for example, national security, economic activity, or public safety. Criticality assessments are important because they provide, in combination with the framework’s other assessments, the basis for prioritizing which assets require greater or special protection relative to finite resources. Vulnerability assessment—vulnerability is defined as the inherent state (physical, technical, or operational) of an asset that can be exploited by an adversary to cause harm or damage. Vulnerability assessments identify these inherent states and the extent of their susceptibility to exploitation, relative to the existence of any countermeasures. As applied to the global supply chain, a vulnerability assessment might involve, first, establishing a comprehensive understanding of the business and commercial aspects of the chain (as a complex system with multiple interacting participants) and, second, “mapping” the chain and identifying vulnerability points that could be exploited. Risk assessment—risk assessment is a qualitative and/or quantitative determination of the likelihood (probability) of occurrence of an adverse event and the severity, or impact, of its consequences. Risk assessments include scenarios under which two or more risks interact, creating greater or lesser impacts. 
Risk characterization—risk characterization involves designating risk as, for example, low, medium, or high (other scales, such as numeric, may also be used). Risk characterization is a function of the probability of an adverse event occurring and the severity of its consequences. Risk characterization is the crucial link between assessments of risk and the implementation of mitigation actions, given that not all risks can be addressed because resources are inherently scarce; accordingly, risk characterization forms the basis for deciding which actions are best suited to mitigate the assessed risk. Mitigation evaluation. Mitigation evaluation is the identification of mitigation alternatives and the assessment of their effectiveness; the alternatives should be evaluated for their likely effect on risk and for their cost. Mitigation selection. Mitigation selection involves a management decision on which mitigation alternatives should be implemented, taking into account risk, costs, and the effectiveness of the alternatives. Selection among alternatives should be based upon predetermined criteria. There are as yet no clearly preferred selection criteria, although potential factors might include risk reduction, net benefits, equality of treatment, or other stated values. Mitigation selection does not necessarily involve directing all resources to the highest-risk area, but rather attempting to balance overall risk and available resources. Risk mitigation—Risk mitigation is the implementation of mitigation actions, in priority order and commensurate with assessed risk; depending on its risk tolerance, an entity may choose not to take any action to mitigate risk (this is characterized as risk acceptance). 
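The characterization rule described above (risk as a function of the probability of an adverse event and the severity of its consequences) can be sketched in code. The three-level scale and the matrix thresholds below are illustrative assumptions for demonstration, not values taken from the GAO framework itself:

```python
# Illustrative sketch of risk characterization as a function of
# probability and consequence severity. The level names and the
# additive matrix thresholds are assumptions, not GAO's.

LEVELS = ["low", "medium", "high"]

def characterize_risk(probability: str, severity: str) -> str:
    """Map qualitative probability and severity ratings to a single
    risk designation using a simple additive matrix rule."""
    score = LEVELS.index(probability) + LEVELS.index(severity)  # 0..4
    if score <= 1:
        return "low"
    if score <= 2:
        return "medium"
    return "high"
```

Under a rule like this, a low-probability but high-severity event is characterized as medium risk; that kind of prioritization judgment is what the framework says must precede mitigation evaluation and selection.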
If the entity does choose to take action, such action falls into three categories: (1) risk avoidance (exiting activities that expose the entity to risk), (2) risk reduction (implementing actions that reduce the likelihood or impact of risk), and (3) risk sharing (implementing actions that reduce likelihood or impact by transferring or sharing risk). In each category, the entity implements actions as part of an integrated “systems” approach, with built-in redundancy to help address residual risk (the risk that remains after actions have been implemented). The systems approach consists of taking actions in personnel (e.g., training, deployment), processes (e.g., operational procedures), technology (e.g., software or hardware), infrastructure (e.g., institutional or operational—such as port configurations), and governance (e.g., management and internal control and assurance). In selecting actions, the entity assesses their costs and benefits, weighing the amount of risk reduction against the cost involved, and identifies potential financing options for the actions chosen. Monitoring and evaluation of risk mitigation—Monitoring and evaluation of risk mitigation entails assessing the functioning of actions against strategic objectives and performance measures and making necessary changes. Monitoring and evaluation includes, where and when appropriate, peer review, testing, and validation; an evaluation of the impact of the actions on future options; and identification of unintended consequences that, in turn, would need to be mitigated. Monitoring and evaluation helps ensure that the entire risk management process remains current and relevant, and reflects changes in (1) the effectiveness of the actions and (2) the risk environment in which the entity operates—risk is dynamic and threats are adaptive. The risk management process should be repeated periodically, restarting the “loop” of assessment, mitigation, and monitoring and evaluation. 
This appendix details the recognized modeling practices that GAO used to assess CBP’s computerized targeting model, known as the ATS. CBP characterized ATS as a knowledge, or rule-based, expert system or model that serves as a “decision support tool” in implementing its targeting strategy. Accordingly, for purposes of this report, we identified four practices that are applicable to our review of ATS as such a tool. We identified these practices through our interviews with terrorism experts and representatives of the international trade community—who were familiar with modeling related to terrorism or to ATS—and with GAO’s chief scientist, and through our review of relevant literature, such as reports by the U.S. Department of Energy’s Office of Science and Technology, the National Research Council (part of the National Academies), and GAO. The four practices are described below. The first practice is initiating an external peer review of ATS. Many agencies conduct various types of internal reviews of projects and programs. However, these reviews are usually conducted by managers or supervisors and thus are not independent. Peer review is a process that includes an independent, documented, critical assessment of the technical and scientific merit of research or programs by external peers who are highly qualified scientists with knowledge and expertise equal to that of those whose work they review. In this regard, peers must be capable of making independent judgments about the merit and relevance of what they are reviewing and have no conflicts of interest. If the results are to be used in programmatic decision making, peer reviews can improve the technical quality of projects by recognizing technical weaknesses and suggesting improvements that might be overlooked by those too close to the project; peer review can also enhance the credibility of the decision-making process by offering frank assessments not constrained by organizational concerns and by avoiding the reality and the perception of conflicts of interest. 
Peer review cannot ensure the success of a program, but it can increase the probability of success. The second practice is instituting a process of random inspections to supplement targeting. The experts we spoke with told us that the absence of a process to randomly select containerized cargo for screening or physical examination to supplement ATS was a shortcoming of CBP’s targeting strategy. Randomness pertains to a process whose outcome or value depends on chance or on a process that simulates chance, with the implication that all possible outcomes or values have a known, non-zero probability of occurrence—for example, the outcome of flipping a coin or executing a computer-programmed random number generator. A random selection process would not only help mitigate residual risk (i.e., the risk remaining after the original risk mitigation actions have been implemented), but also help evaluate the performance of targeting relative to other approaches. The third practice is enhancing the sources and types of information input into ATS. Terrorism experts and representatives of the international trade community told us that ATS needed to incorporate additional types of information in order to perform the complex “linkage” analyses needed to identify documentary inconsistencies and thereby target suspicious containers. They also told us that the ship’s manifest (a transportation document that summarizes the cargo on board) does not contain enough information in sufficient detail to be useful, by itself, in targeting suspicious containers. These individuals further told us that the movement of containers through the global supply chain generates additional commercial documentation that could be used for this purpose. Examples of commercial documentation that could be used include purchase orders, commercial invoices, shippers’ letters of instruction, and certificates of origin. The fourth practice is testing and validating ATS by staging simulated terrorist events. 
The experts we spoke with emphasized the need to test ATS by staging simulated terrorist events in order to validate it as a targeting tool. Simulated events could include “red teams” attempting to smuggle a fake WMD into the United States hidden in an oceangoing cargo container. Red teaming is an approach to “model” a system’s adversary and define its weaknesses by devising attack tactics. A blue team may also be used to devise ways to mitigate vulnerabilities in an attempt to defend against the red team. Simulated events would determine whether ATS targeted the suspicious container for screening and/or physical examination, and whether the subsequent screening or examination actually detected the fake WMD.
Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: September 9, 2003.
Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003.
Homeland Security: Challenges Facing the Department of Homeland Security in Balancing its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003.
Container Security: Current Efforts to Detect Nuclear Material, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002.
Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002.
Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001.
Federal Research: Peer Review Practices at Federal Science Agencies Vary. GAO/RCED-99-99. Washington, D.C.: March 17, 1999. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
After the attacks of September 11, 2001, concerns intensified that terrorists would attempt to smuggle a weapon of mass destruction into the United States. One possible method is to use one of the 7 million cargo containers that arrive at our seaports each year. Addressing the potential threat posed by the movement of oceangoing cargo containers falls to the Department of Homeland Security's U.S. Customs and Border Protection (CBP). Since CBP cannot inspect all arriving cargo containers, it uses a targeting strategy, including an Automated Targeting System. This system targets containers for inspection based on perceived level of risk. In this testimony, GAO summarizes its work on (1) whether the development of CBP's targeting strategy is consistent with recognized key risk management and modeling practices and (2) how well the strategy has been implemented at selected seaports. CBP has taken steps to address the terrorism risks posed by oceangoing cargo containers, but its strategy neither incorporates all key elements of a risk management framework nor is it entirely consistent with recognized modeling practices. Actions CBP has taken include refining the Automated Targeting System to target cargo containers that are a high risk for terrorism, or other smuggling, for physical inspection. CBP has also implemented national targeting training and sought to improve the quality and timeliness of manifest information, which is one of the inputs for its Automated Targeting System. However, regarding risk management, CBP has not performed a comprehensive set of assessments vital for determining the level of risk for oceangoing cargo containers and the types of responses necessary to mitigate that risk. Regarding recognized modeling practices, CBP has not subjected the Automated Targeting System to adequate external peer review or testing. It has also not fully implemented a process to randomly examine containers in order to test the targeting strategy. 
Without incorporating all key elements of a risk management framework and recognized modeling practices, CBP cannot be reasonably sure that its targeting strategy provides the best method to protect against weapons of mass destruction entering the United States at its seaports. GAO's visits to selected seaports found that the implementation of CBP's targeting strategy faces a number of challenges. Although port officials said that inspectors were able to inspect all containers designated by the Automated Targeting System as high-risk, GAO's requests for documentation raised concerns about the adequacy of CBP's data to document these inspections. CBP lacks an adequate mechanism to test or certify the competence of students who participate in its national targeting training. Additionally, CBP has not been able to fully address longshoremen's safety concerns related to inspection equipment. Addressing these concerns is important to ensure that cargo inspections are conducted safely and efficiently. Challenges to both the development and the implementation of CBP's targeting strategy, if not addressed, may limit the effectiveness of targeting as a tool to help ensure homeland security.
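The random-examination process discussed in this testimony, in which every arriving container has a known, non-zero probability of being selected for inspection regardless of its targeting score, can be sketched as follows. This is an illustrative sample-selection sketch, not CBP's actual procedure; the function name and selection rate are assumptions:

```python
import random

def select_for_random_inspection(container_ids, rate, seed=None):
    """Illustrative sketch only (not CBP's actual process): flag each
    container for inspection with a fixed, known probability,
    independent of any targeting score. A seed makes the draw
    repeatable for testing."""
    rng = random.Random(seed)
    return [cid for cid in container_ids if rng.random() < rate]
```

Because every container faces the same known selection probability, the results of such a sample can be compared against targeting-based inspections, which is what makes random selection useful for evaluating a targeting model's performance.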
In the 1990s, a number of influential studies sponsored by NIH, IOM, AAMC, and the American Medical Association (AMA) identified some major problems in clinical research and highlighted NIH’s role in addressing some of these problems. First, there was a general concern that clinical research was receiving substantially less support than basic research at NIH, yet there was little systematic data to document how much, in fact, NIH was spending on clinical research. In an analysis of NIH investigator-initiated extramural grants active in 1991, an IOM committee found that 16 percent involved human research. A few years later, a panel appointed by the NIH director, known as the “Nathan Panel,” developed a broad definition of clinical research (the definition NIH now uses) and, applying this definition to all NIH competing extramural research grants in fiscal year 1996, found that 27 percent of grants and 38 percent of dollars were devoted to clinical research. The Nathan Panel believed that this fraction of the extramural budget devoted to clinical research was reasonable and should remain about the same, as efforts to increase the NIH budget as a whole were pursued. The studies sponsored by NIH, IOM, and AAMC/AMA recommended that NIH monitor and track its expenditures on clinical research. A second concern was that clinical research proposals, especially those from individual investigators, did not fare as well as basic research proposals in peer review at NIH. Grant applications for clinical trials, clinical research centers, and clinical research training are typically reviewed by the sponsoring institute; however, the peer review of individual investigator grant applications usually takes place centrally, within CSR. CSR has approximately 65 study sections that review research. 
A study section is a panel of experts established according to scientific disciplines or research areas for the purpose of evaluating the scientific and technical merit of grant applications. In 1994, an NIH-commissioned study reported that patient-oriented research applications were less likely to receive favorable reviews in CSR than laboratory-oriented research applications when reviewed in study sections with less than 30 percent patient-oriented research applications. However, when patient-oriented research applications were grouped in study sections with greater than 50 percent patient-oriented research, they fared as well as laboratory-oriented research applications. Consequently, this report recommended that study sections reviewing patient-oriented research should have at least 50 percent of such applications and that a means should be developed and implemented to collect and track data prospectively on research applications that are predominantly patient-oriented, laboratory-oriented, mixed, or clinical epidemiology and behavioral research. Similarly, the Nathan Panel recommended that panels that review clinical research must include experienced clinical investigators and that at least 30 to 50 percent of the applications reviewed by these panels must be for clinical research. The IOM committee also recommended more oversight of study section composition, functions, and outcomes pertaining to human research. A third problem identified in these studies was the adequacy of support for the infrastructure (that is, facilities, equipment, data systems, and research personnel) for the conduct of clinical research. Since the late 1950s, NIH has funded GCRCs across the United States to provide clinical research infrastructure—facilities, equipment, and personnel—for NIH-funded investigators as well as non-federally funded investigators conducting patient-oriented research. Interdisciplinary and collaborative research is encouraged at these centers. 
The Nathan Panel, the IOM committee, and others recommended increasing financial support for GCRCs and broadening their leadership role in clinical research and research training. A fourth concern was the decline in the number of physicians conducting clinical research. According to data collected by the AMA, the number of physicians reporting research as their primary career activity fell by 6 percent from 1980 to 1997 (from 15,377 to 14,434), while the number reporting patient care as their primary career activity increased by almost two-thirds (from 376,512 to 620,472). Observers identified a variety of challenges in pursuing a career as a clinical investigator, including the indebtedness of medical students, the length of time a clinical scientist must train, the culture of academic medicine, as well as the competition from other career options. For many years NIH has supported the training of investigators through extramural and intramural predoctoral and postdoctoral training and career development awards. However, there was concern that these awards were being directed toward basic research and were not sufficiently supporting the training and development of clinical investigators. The IOM committee, the Nathan Panel, and the AAMC/AMA reports recommended that NIH provide substantial new support for clinical research training, career development, and debt relief. NIH reports that it increased its funding of clinical research and expanded its clinical research activities in response to CREA. NIH estimates that it spent about one-third of its budget, or approximately $6.4 billion, on clinical research in fiscal year 2001. Based on these estimates, the proportion of the NIH budget spent on clinical research has remained fairly constant since fiscal year 1997. 
NIH’s estimates of clinical research expenditures represent the best available indications of financial trends over time, but they are not precise figures because the process of counting clinical research dollars varies widely across ICs. Finally, in response to CREA, some NIH ICs have developed specific clinical research initiatives. In fiscal year 2001, NIH estimated that it spent approximately $6.4 billion on clinical research, which represented about 32 percent of total research spending (see table 1). The institutes that spent the most on clinical research in fiscal year 2001 were the National Cancer Institute (NCI); the National Heart, Lung, and Blood Institute (NHLBI); and the National Institute of Mental Health (NIMH) (see app. I). NIH’s estimated expenditures on clinical research have kept pace with the overall growth in NIH’s budget. As NIH’s reported clinical research expenditures increased by 44 percent (adjusted for inflation) from fiscal year 1997 to fiscal year 2001, the proportion of research dollars spent on clinical research remained constant, at 32 percent, each year. NIH estimates that in fiscal year 2001, it spent approximately $5.9 billion on extramural clinical research, about 35 percent of its total extramural research expenditures. NIH’s extramural clinical research dollars were spent through a variety of funding mechanisms in fiscal year 2001. About 40 percent of the awarded dollars were grants to individual investigators, followed by other funding mechanisms, center grants, cooperative agreements, research program projects, and research and development contracts (see fig. 1). Of NIH’s total extramural research expenditures for cooperative agreements and center grants, the majority of dollars were spent on clinical research in fiscal year 2001. In fiscal year 2001, NIH estimated that it spent about $529 million, or 27 percent of its intramural research expenditures, on clinical research. 
NIH’s intramural clinical research activities include research at the Clinical Center on NIH’s Bethesda, Maryland, campus, as well as research by individual institutes. The Clinical Center’s budget represents more than half of the intramural clinical research expenditures. The budget of the Clinical Center increased from approximately $204 million in fiscal year 1997 to an estimated $303 million in fiscal year 2002. This budget increase supported an increase in admissions, inpatient days, and outpatient visits. NIH’s reports of clinical research expenditures represent the best available indications of financial trends, but are not precise figures. The methods NIH uses to count clinical research dollars are inconsistent across ICs, potentially underestimating or overestimating its actual clinical research expenditures. Since fiscal year 1997, the Office of Budget, within the Office of the Director, has collected information from each IC on its extramural and intramural clinical research expenditures. The ICs use the NIH definition of clinical research (described earlier), but they count the dollars in very different ways. The 20 ICs that fund clinical research reported three different ways of counting clinical research dollars. First, 12 ICs count 100 percent of the grant dollars of research projects that include any clinical research. Second, one institute, NCI, codes a research project’s “percent relevance” to clinical research. Projects are coded as 100 percent, major, minor, or 0 percent clinical research. If they are classified as “major,” they are assigned a percentage relevancy of 50 percent, and 50 percent of the dollars are counted. If they are classified as “minor,” they are assigned a percentage relevancy of 5 percent, and 5 percent of the dollars are counted. 
Third, 7 ICs either attempt to estimate the dollars of a research project spent on clinical research or the percentage of a project that is clinical research and apply that percentage to the total grant dollars. These different methods of counting clinical research dollars can produce very different results. For example, given a hypothetical grant to an investigator of $300,000 for which an IC has estimated that $50,000 of the budget would be spent on clinical research, some ICs would report that $300,000 was spent on clinical research; NCI could conclude that this grant has only minor relevance to clinical research and therefore would count 5 percent, or $15,000, as clinical research dollars; the rest of the ICs would estimate that this project is about 17 percent clinical research and therefore count $50,000 of the grant as clinical research dollars. The Office of Budget said that the reason the ICs count clinical research dollars differently is that each developed its own methods over time, and for historical consistency, they are reluctant to change. One IC director, who heads an NIH Director’s committee concerned with clinical research spending, told us that NIH is working on ways to make its process of tracking and reporting clinical research dollars more consistent and accurate. In response to CREA, some institutes have developed new clinical research initiatives. For example, since the passage of CREA, NCI has funded two new clinical cancer centers and 22 new Specialized Programs of Research Excellence for different types of cancer, all of which involved early phase clinical trials. NHLBI is establishing new clinical research centers to study ways to reduce racial and economic disparities in asthma prevalence, treatment, and mortality and is funding trials to assess innovative strategies to improve the implementation of clinical practice guidelines for heart, lung, and blood diseases. 
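The three counting methods described above, applied to the report's hypothetical $300,000 grant with an estimated $50,000 clinical portion, can be contrasted in a short sketch. The function names are ours, chosen for illustration:

```python
# Sketch of the three IC methods for counting clinical research dollars,
# applied to the report's hypothetical grant. Function names are
# illustrative, not NIH's.

def count_all_or_nothing(total, clinical_portion):
    """Method 1 (12 ICs): count 100% of grant dollars if the project
    includes any clinical research at all."""
    return total if clinical_portion > 0 else 0

def count_percent_relevance(total, category):
    """Method 2 (NCI): code the project's percent relevance to clinical
    research and apply the corresponding fixed share."""
    shares = {"100%": 1.00, "major": 0.50, "minor": 0.05, "0%": 0.00}
    return total * shares[category]

def count_estimated_portion(total, clinical_portion):
    """Method 3 (7 ICs): count only the estimated clinical dollars."""
    return clinical_portion

grant, clinical = 300_000, 50_000
results = (
    count_all_or_nothing(grant, clinical),    # 300000
    count_percent_relevance(grant, "minor"),  # 15000.0
    count_estimated_portion(grant, clinical), # 50000
)
```

The same grant thus contributes $300,000, $15,000, or $50,000 to the agency-wide total depending on which IC holds it, which is why the aggregate estimates are not precise.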
The National Institute of Arthritis and Musculoskeletal and Skin Diseases has a new osteoarthritis initiative; funds multidisciplinary clinical research centers in arthritis, musculoskeletal, and skin diseases; and plans to enhance its translational research projects in children’s diseases. The National Institute of Allergy and Infectious Diseases (NIAID) has continued to fund large clinical trial networks such as the AIDS Clinical Trials Group, a $120 million per year initiative that involves research on pediatric and adult AIDS. Since passage of CREA, NIH has acted to strengthen its peer review of clinical research applications. CSR established two new study sections in the areas of clinical oncology and clinical cardiovascular sciences. In study sections with a mix of clinical and basic proposals, CSR tries to group clinical research applications and reviewers, but officials could not provide data to determine how successful CSR has been in achieving this goal. NIH has established peer review mechanisms at the institutes for the review of career development and training awards established under CREA. In response to concerns that clinical research proposals are not fairly reviewed in its study sections, CSR has established two new clinically oriented study sections, Clinical Oncology and Clinical Cardiovascular Sciences. In these scientific areas, CSR found that there were a sufficient number of clinical research applications to justify separate study sections. Although the two new clinical research study sections have been welcomed by the research community, some concerns remain among clinical investigators about the fairness of the review of clinical research by other study sections that have a mix of clinical and basic research. In these study sections, CSR officials told us they try to group clinical research applications and clinical research reviewers. 
CSR officials told us that it is their general goal to review clinical research applications in study sections in which at least 30 percent of the applications involve clinical research and in which at least 30 percent of the reviewers are themselves clinical investigators. CSR officials also explained that this goal cannot always be achieved because if the number of clinical research applications in a specific scientific area is small, it may not be possible to group the applications to 30 percent and still review them in a study section that provides the appropriate scientific context for review. They emphasized that reviewing applications in the appropriate scientific context is given priority over quantitative targets for grouping. CSR officials could not provide data on the extent to which they have been able to group clinical research applications and have very limited data on which reviewers are clinical investigators. The officials told us that, to date, they do not have reliable and accurate methods for identifying and tracking clinical applications or clinical reviewers. CSR officials told us they are in the process of a broader review and restructuring of their peer review system, with input from the scientific community, to account for new developments in science. According to CSR, one of the goals of this reorganization is grouping applications and reviewers at 30 percent so that there is a “density of expertise” in review sections. In addition, CSR has recently appointed a special advisor on clinical research review to serve as a liaison with the clinical research communities. To determine NIH’s response to CREA’s requirement that NIH establish appropriate mechanisms for the peer review of clinical research career development and training applications, we surveyed nine ICs that sponsored the highest number of clinical research career development awards in fiscal year 2001. 
We found that three ICs used a Special Emphasis Panel, while the six others used established committees or subcommittees to review clinical research career development and training applications. In addition, the ICs reported that most of the reviewers of these applications have clinical research experience, and some are involved in clinical research training. One institute brings in temporary reviewers to augment its committee if special expertise is needed. NCRR uses CSR for peer review of some career development applications that require very specific scientific expertise and therefore require review by the discipline-specific study sections of CSR. NIH has increased its support of GCRCs and expanded GCRCs’ scope of work, as required by CREA. The GCRC budget has grown over time, although more slowly than NIH’s estimates of clinical research spending. Adjusted for inflation, the funding for GCRCs increased by 24 percent from fiscal year 1997 to fiscal year 2001, compared to a 44 percent estimated increase in clinical research spending at NIH during that same period. Although NIH has stopped funding some GCRCs, there has been a gradual increase in the number of GCRCs over time, from 74 in fiscal year 1997 to 79 in fiscal year 2001. There has also been an increase in the activities of GCRCs and some expansion in their scope since passage of CREA. NIH has increased funding for the GCRC program, although funding for the GCRCs has grown more slowly than NIH’s estimate of overall expenditures on clinical research. From fiscal year 1997 through fiscal year 2001, funding for the GCRCs increased from $153,521,000 to $220,824,000 (see table 2). Adjusted for inflation, this represents an increase of 24 percent, compared to the 44 percent estimated growth in total clinical research expenditures during this period. The number of GCRCs gradually increased during this period, from 74 to 79. 
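The inflation adjustment behind these GCRC figures can be reproduced approximately. The report does not state which price deflator it used; the roughly 16 percent cumulative inflation below is our assumption, chosen only because it reconciles the nominal dollar figures with the reported 24 percent real increase:

```python
def real_growth(start, end, cumulative_inflation):
    """Inflation-adjusted growth: deflate the ending value by cumulative
    inflation before comparing it with the starting value."""
    return end / (1 + cumulative_inflation) / start - 1

# Nominal GCRC funding growth, FY1997-FY2001, from the report's figures.
nominal = 220_824_000 / 153_521_000 - 1  # roughly 44 percent nominal

# Assumed ~16 percent cumulative biomedical price inflation over the
# period (our assumption) yields roughly the reported 24 percent real growth.
real = real_growth(153_521_000, 220_824_000, 0.16)
```

This also illustrates why the 24 percent real GCRC increase lags the 44 percent real growth in total clinical research spending: in nominal terms the GCRC budget grew about 44 percent, but part of that growth was absorbed by inflation.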
Funding levels for individual GCRCs in fiscal year 2001 ranged from $712,339 to $6.2 million, with an average funding level of about $2.8 million. NIH officials told us that in fiscal year 2002, they are opening two new GCRCs, one at the University of Maryland and one at the University of Miami. Establishing a new GCRC costs about $2.5 million and requires a certain threshold of investigators. Once a GCRC is set up, attracting additional investigators and research activities is easier, according to NIH officials. As also shown in table 2, some activities of GCRCs have increased in recent years. For example, the number of research protocols and investigators supported by GCRCs increased from fiscal year 1997 through fiscal year 2001. While the number of inpatient days funded by GCRCs declined from 70,814 in fiscal year 1997 to 62,769 in fiscal year 2001, the number of outpatient visits increased from 282,125 to 334,828 during the same period. NIH officials told us that since passage of CREA there has been no change in the mission of GCRCs, but there has been an increase in the scope of GCRC activities. For example, in fiscal year 2002, 27 GCRCs have funded Clinical Research Feasibility pilot projects to support the research of beginning investigators. In addition, 76 GCRCs now each have a Research Subject Advocate who helps ensure that GCRC research is conducted safely and protects human research subjects. CREA required that NIH expand the activities of the GCRCs through increased use of telecommunications and telemedicine initiatives. In response, NIH officials told us they increased their support of specialized bioinformatics networks that electronically link research data across GCRCs. Specifically, NCRR established a Biomedical Informatics Research Network, a computerized network that allows investigators affiliated with GCRCs to share high-resolution images of human brains and large volumes of complex data and conduct remote analysis of the data. 
In fiscal year 2001, NCRR funded five bioinformatics centers at $2.1 million and a coordinating center at $1.6 million, spending a total of $3.7 million on this initiative. In fiscal year 2002, $6 million has been set aside to extend this network. NCRR also funded a collaborative pilot project between the Cystic Fibrosis Foundation and several GCRCs, called CFnet, to assess whether clinical trials could be facilitated across GCRC sites with Web-based data handling. Based on the success of this pilot, NCRR plans to extend CFnet to 20 GCRCs and also establish a comparable network among the eight U.S. medical schools that have a high proportion of minority students to facilitate the schools’ participation in clinical trials that relate to health disparities. NIH has established the four new career development award programs required by CREA. Three of these have been implemented, and the fourth is just beginning. NIH has also established intramural and extramural clinical research training programs for medical and dental students and clinical research continuing education programs as required by CREA. NIH recently established three new clinical research career development award programs for individuals and institutions outside government that are designed to increase the supply and expertise of clinical investigators (see table 3). NIH used its K award mechanism, its usual method for providing support for career development of investigators, to establish these programs. In fiscal year 1999, NIH implemented the Mentored Patient-Oriented Research Career Development Award (K23) to support investigators who are committed to conducting patient-oriented research for 3 to 5 years. 
In the same year, NIH implemented the Mid-Career Investigator Award in Patient-Oriented Research (K24) to provide support for more senior clinicians to relieve them of patient-care duties and administrative responsibilities so that they can conduct patient-oriented research and serve as mentors for beginning clinical investigators. The Clinical Research Curriculum Award (K30), also implemented in fiscal year 1999, supports the development and expansion of clinical research teaching programs at institutions. About half of the K30 programs offer graduate degrees in clinical research (for example, a master’s or doctorate). The response to these new award programs was substantial, and NIH funded more awards than originally planned. NCRR and the largest institutes (for example, NCI, NHLBI, and NIMH) sponsored the highest number of the new K23 and K24 awards. NHLBI is administering the majority of the K30 awards. Although NIH has received applications for K23 and K24 awards from a variety of clinical investigators, most applicants and awardees are physicians. The K30 awards have primarily gone to academic medical centers. The new awards combined represent 25 percent of expenditures NIH allotted for all K awards under its Career Development Program in fiscal year 2001 (see fig. 2). NIH officials told us that they are initiating plans to evaluate the new clinical research career development awards and track career outcomes. The design of this assessment will be based on previous studies of training award recipients, specifically NIH’s study of the outcomes of the National Research Service Awards (NRSA), and will rely on NIH’s new electronic grant application. In 2001 NIH announced a fourth new clinical research career development award, the Mentored Clinical Research Scholar Program (K12). This award program, sponsored by NCRR and linked to the GCRCs, is NIH’s response to CREA’s directive to support graduate training in clinical research. 
NCRR decided to start the K12 program as a small pilot project and then expand it later if successful. In fiscal year 2002, NCRR received 43 applications for this award and expects to fund 10 of these. In the first year of the program, each funded award may enroll three clinical research scholars, for a total of 30 scholars. NIH projects that the number of scholars could grow to 120 in 5 years. We interviewed several K30 program directors who indicated that obtaining graduate tuition and stipend support for their students and prospective students was a major constraint. The K30 award, which has been well received in the research community, funds curriculum and staff, as well as tuition and other costs in special circumstances, but generally does not directly support students. Instead, students must seek funding from other NIH, federal, or private sources. An NIH official estimated that the number of formal trainees in individual K30 programs ranges from several to three dozen. This official was not able to provide data on whether these students had tuition support or what kind of support they had. However, the K30 program directors we talked to said some of their students had tuition support from other NIH funding mechanisms; others had support from their university. Although the new K12 program is consistent with the requirements of CREA, some K30 program directors and other experts believe the size and scope of the program will be too small to meet the need for graduate training support for clinical investigators. In terms of fellowships for clinical research training, in fiscal year 2001, NCRR announced a new mentored medical student clinical research program that will support a small number of medical and dental students at GCRCs. This program provides supplemental grants to GCRCs to offer 1 year of support for medical and dental students, usually from their third through fourth year of school, in the form of salary, supplies, and tuition assistance. 
A total of five students may eventually be supported at each GCRC site annually, although NCRR plans to provide support for only one medical student per GCRC in fiscal year 2002. Since 1997, NIH has also trained medical and dental students in clinical research at its campus. In this program, partially supported by a pharmaceutical company, 15 to 20 students are selected each year and are each paired with a mentor for a year of academic study and clinical research experience. NIH has launched an extramural loan repayment program for clinical investigators as required by CREA, and most of NIH’s ICs participate in the program. In the first year of implementation, eligibility for the loan repayment program was tied to receipt of NIH funding. However, in fiscal year 2003, NIH plans to extend eligibility to allow clinical investigators who receive funding from other sources, such as other federal agencies and nonprofit foundations, to apply. In response to CREA, NIH established an extramural Clinical Research Loan Repayment Program. This new loan repayment program joins four other extramural loan repayment programs and four intramural loan repayment programs that are administered by NIH’s Office of Loan Repayment and Scholarship. The new extramural Clinical Research Loan Repayment Program was implemented on December 28, 2001, and a total of 456 applications were received by February 28, 2002. NIH plans to fund 396 loan repayment contracts for a total of $20.2 million by the end of fiscal year 2002. The program provides for the repayment of up to $35,000 per year of the principal and interest of an individual’s educational loans for each year of obligated service. These individuals are obligated to engage in clinical research for at least 2 years. The clinical research loan repayment program represents a sizeable proportion (almost two-thirds) of the total extramural loan repayment program budget. 
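As a rough consistency check on these figures (a sketch using only numbers stated above, not a calculation the report itself performs), the planned contracts average out well under the statutory cap for the minimum two-year service obligation:

```python
# Figures from the report.
total_budget = 20_200_000  # planned loan repayment funding, FY2002
contracts = 396            # planned loan repayment contracts
annual_cap = 35_000        # maximum repayment per year of obligated service
min_years = 2              # minimum service obligation in years

avg_per_contract = total_budget / contracts  # average value of one contract
max_per_contract = annual_cap * min_years    # cap over the minimum term

print(f"Average contract value: ${avg_per_contract:,.0f}")  # ~$51,010
assert avg_per_contract <= max_per_contract  # consistent with the $35,000/year cap
```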
To be eligible for the clinical research loan repayment program, a clinical investigator must have received an NIH research service award, training grant, career development award, or other NIH grant as a first-time principal investigator or a first-time director of a subproject on a grant or cooperative agreement. The Director of the Office of Loan Repayment and Scholarship told us that in fiscal year 2003, NIH plans to remove the NIH-funding restriction and allow clinical investigators who receive funding from other sources, such as other federal agencies and nonprofit foundations, to apply for the loan repayment program. In addition, NIH expects to almost double the size of the extramural Clinical Research Loan Repayment Program in fiscal year 2003. Although NIH has a central office that administers all the loan repayment programs, funding for the clinical research loan repayment program was distributed to the ICs, based on reported clinical research expenditures in fiscal year 1999. Thus 21 of NIH’s 27 ICs plan to participate in the program by reviewing applications and awarding loan repayment contracts (see app. II). The ICs sponsoring the highest number of contracts are NCI, NHLBI, and NIMH. NCRR also plans to sponsor a significant number of loan repayment contracts. As with most of the training and career development awards, an NIH official told us that the ICs were in the best position to assess applications and the clinical research career potential of awardees. In general, NIH has complied with the key provisions in CREA. It has increased its financial support of clinical research, expanded its clinical research activities, made improvements in its review of clinical research proposals, expanded its support of GCRCs, established new clinical research career development and training programs, and begun to implement a new extramural clinical research loan repayment program. 
Some of NIH’s actions were taken prior to CREA’s passage and some are still being implemented. However, we identified some inconsistencies in the way that NIH counts clinical research expenditures. These inconsistencies limit the precision of NIH’s reports of clinical research expenditures and its ability to monitor the support of clinical research. To strengthen the tracking and reporting of intramural and extramural expenditures for clinical research, we recommend that the Director of NIH develop and implement a consistent, accurate, and practical way for all ICs to count intramural and extramural clinical research expenditures. NIH reviewed a draft of this report and provided comments, which are included as appendix III. NIH concurred with our recommendation and reported that it is taking steps to implement a better, more unified system for tracking and reporting clinical research expenditures across the ICs. According to NIH, this new system will be implemented in fiscal year 2003. NIH also provided technical comments, which we incorporated as appropriate. In particular, NIH clarified its response to our questions about the peer review of clinical research. NIH emphasized that it recognizes the importance of collecting data on the grouping of clinical research applications and reviewers. Toward that end, NIH stated that one of the responsibilities of CSR’s newly appointed Special Advisor on Clinical Research Review will be to investigate new methods to reliably identify and track clinical research applications and clinical research reviewers. We will send copies of this report to the Secretary of Health and Human Services, the Director of NIH, appropriate congressional committees, and others who are interested. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7119 or Martin T. 
Gahart at (202) 512-3596. Key contributors to this assignment were Anne Dievler, Cedric Burton, and Elizabeth Morrison.
Clinical research is critical for the development of strategies for the prevention, diagnosis, prognosis, treatment, and cure of diseases. Clinical research has been defined as patient-oriented research, epidemiologic and behavioral studies, and outcomes research and health services research. The National Institutes of Health (NIH) is the principal federal agency that funds clinical research supporting individual clinical investigators, clinical trials, general and specialized clinical research centers, and clinical research training. For many years, there have been concerns that clinical research proposals are viewed less favorably than basic research during the peer review process at NIH and that clinical research has not received its fair share of NIH funding. In November 2000, the Clinical Research Enhancement Act was enacted to address some of these concerns. NIH reports that it has increased its financial support of clinical research and that spending on clinical research has kept pace with total NIH research spending. NIH has taken some steps to improve its peer review of clinical research applications. The Center for Scientific Review recently added two new peer review study sections for the review of clinical research applications--one for clinical cardiovascular science and the other for clinical oncology. NIH has increased its support of general clinical research centers, as required by the act, although the program has grown more slowly than NIH's overall estimated expenditures on clinical research. NIH has established the four clinical research career enhancement award programs mandated by the act. Three of these programs have been implemented, and they support new and midcareer clinical investigators and institutional clinical research teaching programs. The fourth program is designed to support graduate training in clinical investigation. 
NIH has initiated a new extramural loan repayment program specifically for clinical investigators as required by the act. This program was launched in December 2001. NIH received 456 applications by the February 2002 deadline. Twenty-one of NIH's institutes plan to fund 396 loan repayment contracts, for a total of $20.2 million, by the end of fiscal year 2002.
The LDA, as amended by HLOGA, requires lobbyists to register with the Secretary of the Senate and the Clerk of the House and file quarterly reports disclosing their lobbying activity. No specific requirements exist for lobbyists to create or maintain documentation in support of the reports they file. However, LDA guidance issued by the Secretary of the Senate and the Clerk of the House recommends lobbyists retain copies of their filings and supporting documentation for at least 6 years after their reports are filed. Lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point (as opposed to separately with the Secretary of the Senate and the Clerk of the House as was done prior to HLOGA). Registrations and reports must be publicly available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. The LDA requires that the Secretary of the Senate and the Clerk of the House of Representatives provide guidance and assistance on the registration and reporting requirements of the LDA and develop common standards, rules, and procedures for compliance with the LDA. The Secretary and the Clerk are to review the guidance semiannually, with the latest revision having occurred in June 2010 and the latest review having occurred in December 2010. The guidance provides definitions of terms in the Act, Secretary and Clerk interpretations of the LDA as amended by HLOGA, specific examples of different scenarios, and an explanation of why the scenarios prompt or do not prompt disclosure under LDA. In meetings with the Secretary and Clerk, they stated that they consider information we report on lobbying disclosure compliance when they periodically update the guidance. 
The LDA defines a “lobbyist” as an individual who is employed or retained by a client for compensation for lobbying activities; who has made more than one lobbying contact (written or oral communication to a covered executive or legislative branch official made on behalf of a client); and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who are lobbyists on behalf of a client other than that person or entity (2 U.S.C. § 1602(9)). Lobbyists are also required to submit a quarterly report, also known as an LD-2 report, for each registration filed. The registration and subsequent LD-2 reports must disclose: the name of the organization, lobbying firm, or self-employed individual that is lobbying on that client’s behalf; a list of individuals who acted as lobbyists on behalf of the client during the reporting period; whether any lobbyists served as covered executive branch or legislative branch officials in the previous 20 years, known as a “covered official” position; the name of and further information about the client, including a general description of its business or activities; information on the general issue areas and corresponding issue codes used to describe lobbying activities; any foreign entities that have an interest in the client; whether the client is a state or local government; information on which federal agencies and house(s) of Congress the lobbyist contacted on behalf of the client during the reporting period; the amount of income related to lobbying activities received from the client (or expenses for organizations with in-house lobbyists) during the quarter, rounded to the nearest $10,000; and a list of constituent organizations that contribute more than $5,000 for lobbying in a quarter and actively participate in planning, supervising, or controlling lobbying activities, if the client is a coalition or association. 
The LDA, as amended, also requires lobbyists to report certain contributions semiannually in the contributions report, also known as the LD-203 report. These reports must be filed 30 days after the end of a semiannual period by each organization registered to lobby and by each individual listed as a lobbyist on an organization’s lobbying reports. The lobbyists or organizations must: list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which they made contributions equal to or exceeding $200 in the aggregate during the semiannual period; report contributions made to presidential library foundations and presidential inaugural committees; report funds contributed to pay the cost of an event to honor or recognize a covered official, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official or to pay the costs of a meeting or other event held by or in the name of a covered official; and certify that they have read and are familiar with the gift and travel rules of the Senate and House and that they have not provided, requested, or directed a gift or travel to a member, officer, or employee of Congress that would violate those rules. The Secretary of the Senate and the Clerk of the House of Representatives, along with the U.S. Attorney’s Office for the District of Columbia (the Office) are responsible for the enforcement of the LDA. The Secretary and the Clerk notify lobbyists or lobbying firms in writing that they may be in noncompliance with the LDA, and subsequently refer those lobbyists who fail to provide an appropriate response to the Office. The Office researches these referrals and sends additional noncompliance notices to the lobbyists, requesting that the lobbyists file reports or correct reported information. 
If no response is received after 60 days, the Office decides whether to pursue a civil case against referred lobbyists, which could result in penalties of up to $200,000, or a criminal case against lobbyists who knowingly and corruptly fail to comply with the act, which could lead to a maximum of 5 years in prison. While no specific requirements exist for lobbyists to create or maintain documentation in support of the reports they file, LDA guidance issued by the Secretary of the Senate and Clerk of the House recommends lobbyists retain copies of their filings and supporting documentation for at least 6 years after their reports are filed. As in our prior reviews, most lobbyists reporting $5,000 or more in income or expenses were able to provide documentation to varying degrees for the reporting elements in their disclosure reports. Lobbyists for an estimated 97 percent of LD-2 reports were able to provide documentation for income and expenses for the fourth quarter of 2009 and the first three quarters of 2010. The most common forms of documentation provided included invoices for income and payroll records for expenses. According to the documentation lobbyists provided for income and expenses, we estimate that the amount disclosed was supported for 68 percent (65 of 96) of the LD-2 reports; differed by at least $10,000 from the reported amount in 13 percent (13 of 96) of LD-2 reports; and had rounding errors in 19 percent (18 of 96) of LD-2 reports. Lobbyists for an estimated 90 percent of the LD-2 reports filed year-end 2009 or midyear 2010 LD-203 contribution reports for all of the lobbyists and the lobbying firm listed on the report as required. All individual lobbyists and lobbying firms reporting specific lobbying activity are required to file LD-203 reports each period even if they have no contributions to report, because they must certify compliance with the gift and travel rules. 
Figure 1 illustrates the extent to which lobbyists were able to provide documentation to support selected elements on the LD-2 reports. Of the 100 LD-2 reports in our sample, 52 disclosed lobbying activities at executive branch agencies, with lobbyists for 28 of these reports providing documentation to support lobbying activities at all agencies listed. These results are consistent with our findings from last year’s Lobbying Disclosure report. Based on this, we estimate that approximately 54 percent of all reports disclosing executive branch activities could be supported by documentation. The LDA requires lobbyists to disclose previously held covered positions when first registering as a lobbyist for a new client, either on the lobbying registration (LD-1) or on the first LD-2 quarterly filing when added as new. Of the 100 reports in our sample, 15 reports listed lobbyists who did not disclose covered positions as required, either when they first lobbied on behalf of the client or on subsequent disclosure reports. We therefore estimate that a minimum of 9 percent of all LD-2 reports list lobbyists who never properly disclosed one or more previously held covered positions. Table 1 lists the common reasons why lobbyists we interviewed stated they did not have documentation for some of the elements of their LD-2 report. For 21 of the LD-2 reports in our sample, lobbyists indicated they planned to file an amendment as a direct result of our review. As of March 2011, 12 of those 21 lobbyists had filed amended LD-2 reports. Reasons for filing amendments varied, but included reporting lobbyists’ covered positions, changing the income or expense amounts previously reported, and removing lobbyists who did not lobby on behalf of the client during the quarter under review. 
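The "minimum of X percent" estimates above are population estimates projected from the random sample. The report does not spell out the estimation method, but such minimum figures are consistent with a one-sided lower confidence bound on a sample proportion; the sketch below illustrates that idea with a normal approximation (an illustrative method, not GAO's documented procedure):

```python
import math

def lower_bound(successes: int, n: int, z: float = 1.645) -> float:
    """One-sided 95% lower confidence bound for a sample proportion
    (normal approximation; illustrative, not GAO's documented method)."""
    p = successes / n
    return p - z * math.sqrt(p * (1 - p) / n)

# 15 of 100 sampled LD-2 reports listed lobbyists with undisclosed
# covered positions; the lower bound is roughly the report's 9 percent.
print(f"{lower_bound(15, 100):.1%}")  # ~9.1%
```

Reporting the lower bound rather than the point estimate (15 percent here) is what justifies the "minimum of 9 percent" phrasing.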
In addition to the 21 reports that lobbyists stated they were going to amend following our review, lobbyists filed amendments for 8 of the reports in our sample after being notified their report was selected as part of our random sample but prior to our review. Specific reasons lobbyists filed amendments to change the original filing were to: report no lobbying activity, reduce the amount of lobbying income from $21,000 to less than $5,000, and remove the previously reported lobbying contact with the Senate and the House, which the lobbyists stated did not occur during the quarter; add a lobbying contact with the Senate and lower the income reported from $10,000 to less than $5,000; add the client’s interest in a foreign entity; change the client’s name, remove and add the names of the federal agencies lobbied, remove the earlier reported lobbying contact with the Senate, and remove a lobbyist; add a lobbying contact with the House and add a lobbyist; add lobbyists; add a federal agency and remove a bill number; and change the point of contact. We estimate that a minimum of 5 percent of all LD-203 reports with contributions omit one or more FEC-reportable contributions. The sample of LD-203 reports we reviewed contained 80 reports with political contributions and 80 reports without political contributions. We compared those reports against the contribution reports in the FEC database to identify any instance when the FEC database listed political contributions made by the lobbyists that were not disclosed on the lobbyist’s LD-203 report. Of the 80 LD-203 reports sampled with contributions reported, 7 sampled reports failed to disclose one or more political contributions that were documented in the FEC database. Of the 80 LD-203 reports sampled with no contributions reported, 1 of the sampled reports failed to disclose political contributions that were documented in the FEC database. 
We estimate that among all reports a minimum of 2 percent failed to disclose one or more political contributions. Of the 4,553 new registrations we identified from fiscal year 2010, we were able to match 4,132 reports filed in the first quarter in which they were registered, a match rate of more than 90 percent of registrations, similar to our prior reviews. To determine whether new registrants were meeting the requirement to file, we matched newly filed registrations in the last quarter of 2009 and the first three quarters of 2010 from the Senate and House Lobbying Disclosure Databases to their corresponding quarterly disclosure reports using an electronic matching algorithm that allowed for misspellings and other minor inconsistencies between the registrations and reports. While most lobbyists we interviewed told us they thought that the reporting requirements were clear, a few lobbyists highlighted areas of potential inconsistency and confusion in applying some aspects of LDA reporting requirements. Several of the lobbyists said that the Secretary of the Senate and Clerk of the House staff were helpful in providing clarifications when needed. As part of our review, lobbyists present during reviews were asked to rate various terms associated with LD-2 reporting as being clear and understandable, not clear and understandable, or somewhat clear and understandable. Figure 2 shows the terms associated with LD-2 reporting that the lobbyists we interviewed in our sample of reports were asked to rate and how they responded to each term. Table 2 summarizes the feedback we obtained from the lobbyists in our sample of reports that rated the lobbying terms as either not clear and understandable or somewhat clear and understandable. Sixty-nine lobbyists in our sample of LD-2 reports said that they found the reporting requirements easy to meet. 
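The matching algorithm itself is not described in the report. One common way to tolerate misspellings and minor inconsistencies when linking a registration to its quarterly report is similarity-ratio matching on normalized registrant names, sketched here with Python's standard-library difflib; the 0.85 threshold and the helper name are illustrative choices, not GAO's actual parameters.

```python
from difflib import SequenceMatcher

def is_same_filer(name_a: str, name_b: str, threshold: float = 0.85) -> bool:
    """Treat two registrant names as the same filer if their normalized
    similarity ratio meets the threshold (illustrative, not GAO's method)."""
    a = " ".join(name_a.lower().split())  # lowercase, collapse whitespace
    b = " ".join(name_b.lower().split())
    return SequenceMatcher(None, a, b).ratio() >= threshold

# A misspelled registration still matches its quarterly report...
print(is_same_filer("Smith & Assocaites LLC", "Smith & Associates LLC"))  # True
# ...while unrelated names do not.
print(is_same_filer("Smith & Associates LLC", "Jones Lobbying Group"))    # False
```

In practice, a matcher like this trades precision for recall: a lower threshold catches more typos but risks pairing distinct filers with similar names.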
However, 10 lobbyists we interviewed told us that they found meeting the deadline for filing disclosure reports difficult because of the short time frame between the end of the reporting period and the deadline for filing reports. For example, one lobbyist mentioned that they have to estimate the income for the final month of the reporting period because bills are prepared after the filing deadline. The deadline for filing disclosure reports is 20 days after each reporting period, or the first business day after the 20th day if the 20th day is not a business day. Prior to enactment of HLOGA, the deadline for filing disclosure reports was 45 days after the end of each reporting period. While the electronic filing system used for lobbying reports may reduce the amount of time filers must spend on data entry, a few lobbyists stated that they misreported on their LD-2 reports because they carried information from old reports to new reports without properly updating it. As a result, some lobbyists now have to amend their LD-2 reports to accurately reflect the lobbying activity for the quarter under review. Since the enactment of HLOGA, quarterly referrals for noncompliance with the LD-2 requirements have been received from both the Secretary of the Senate and the Clerk of the House. From June 2009 to July 2010, the Office received referrals from both the Secretary and the Clerk for noncompliance with reports filed for the 2008 and 2009 reporting periods. The Office received a total of 418 referrals of lobbying firms for the 2008 filing period and 457 referrals of lobbying firms from the 2009 filing period for noncompliance with the quarterly LD-2 reporting requirements. The Office has not yet received all referrals from the Secretary of the Senate and the Clerk of the House for the 2010 reporting period. In addition, the Office has received referrals from the Secretary and the Clerk for noncompliance with LD-203 contribution reports. 
For noncompliance in the 2008 calendar year, the Office has to date received LD-203 referrals from the Secretary of the Senate for 1,324 lobbying firms, and 126 LD-203 referrals from the Clerk of the House. The Office mailed 962 noncompliance letters to the registered lobbying firms and included the names of the individual lobbyists who were not in compliance with the requirement to report federal campaign and political contributions and certify that they understand the gift rules. However, the Office stated that there is confusion among the lobbying community as to whether the individual or the organization is responsible for responding to letters of noncompliance with LD-203 requirements. To date, the Office has received 765 lobbying firm LD-203 referrals from the Secretary of the Senate, and 195 referrals from the Clerk of the House for the 2009 calendar year. The Office has not yet sent letters of noncompliance with the LD-203 referrals for the 2009 calendar year. To enforce LDA compliance, the Office has primarily focused on sending letters to lobbyists who have potentially violated the LDA by not filing disclosure reports as required. Not all referred lobbyists are sent noncompliance letters because some of the lobbyists have terminated their registrations, or lobbyists may have complied by filing the report before the Office sends noncompliance letters. The letters request that the lobbyists comply with the law and promptly file the appropriate disclosure documents. Resolution typically involves the lobbyists coming into compliance by filing the reports or terminating their lobbying status. As of January 25, 2011, about 47 percent (758 of 1,597) of the lobbyists sent letters for noncompliance with 2008 and 2009 referrals are now considered compliant because the lobbyists in question have either filed reports or terminated their registrations as lobbyists. 
Additionally, about 49 percent (776 of 1,597) are pending action because the Office did not receive a response from the lobbyist and plans to conduct additional research to determine if they can locate the lobbyist or close the referral because the lobbyist cannot be located. The remaining 4 percent (63 of 1,597) of the referrals did not require action because the lobbyists were found to be compliant when the Office received them. This may occur when lobbyists have responded to the contact letters from the Secretary of the Senate and Clerk of the House after the referrals have been received by the Office. Other referrals did not require action because the lobbyist or client was no longer in business or the lobbyist was deceased. Figure 3 shows the status of enforcement actions as a result of noncompliance letters sent to registrant organizations for 2008 and 2009 referrals. Since the LDA was passed in 1995, the Office has settled with three lobbyists and collected civil penalties totaling about $47,000. All of the settled cases involved a failure to file. The settlements occurred before the enactment of HLOGA, which increased the penalties for offenses committed after January 1, 2008, to involve a civil fine of not more than $200,000 and criminal penalties of not more than 5 years in prison. Criminal penalties may be imposed against lobbyists who knowingly and corruptly fail to comply with the act. Officials from the Office stated that they have sufficient civil and criminal statutory authorities to enforce the LDA. As we reported previously, the Office identified six lobbyists whose names appeared frequently in the referrals and sent them letters more targeted toward repeat nonfilers. However, the Office has decided not to pursue action against any of them because they determined the lobbyists were unaware of the need to file, and therefore did not intentionally avoid compliance with the requirements of the LDA. 
In all of those cases, the lobbyists terminated or filed once they were made aware of the requirements. In addition, in the summer of 2010, six additional lobbyists were identified as repeat nonfilers and to date, no action has been taken against any of them. Three of these cases have been resolved because the Office decided not to pursue further action due to lobbyists’ illness, inability to pay, or lobbyists’ stating that the failure to file was the result of an inadvertent oversight. In an additional case, the Office determined the level of noncompliance was not sufficiently significant for further action. The Office continues to consider further enforcement actions for the remaining two, and has forwarded these matters to the Assistant United States Attorney for Civil Enforcement for further review. In addition, the Office plans to identify additional cases for civil enforcement review in the coming months. In a prior report, we raised issues regarding the tracking, analysis, and reporting of enforcement activities for lobbyists whom the Secretary of the Senate and the Clerk of the House identify as failing to comply with LDA requirements. Our report recommended that the Office complete efforts to develop a structured approach for tracking referrals when they are made, recording reasons for referrals, recording the actions taken to resolve them, and assessing the results of actions taken. The Office has developed a system to address that recommendation. The current system provides a foundation that allows the Office to better focus its lobbying compliance efforts by tracking and recording the status and disposition of enforcement activities. In addition, the system allows the Office to monitor lobbyists who continually fail to file the required disclosure reports.
Under HLOGA, the Attorney General is required to file an enforcement report with Congress after each semiannual period beginning on January 1 and July 1, detailing the aggregate number of enforcement actions DOJ took under the act during the semiannual period and, by case, any sentences imposed. On September 6, 2009, the Attorney General filed his report for the semiannual period ending June 30, 2009. We found information provided in the enforcement report generally matched information the system provided to GAO. In cases where we identified inconsistencies, they were very minor. For example, the differences in the number of noncompliance letters sent were less than 10 out of several hundred letters sent. In addition, there were small inconsistencies regarding the dates referrals were received from the Secretary of the Senate and Clerk of the House. There were also inconsistencies in the number of referrals received regarding individual lobbyists and registrant organizations. These inconsistencies totaled less than 10 out of more than a thousand referrals received. We brought these minor errors to the attention of the Office and asked them about their processes for ensuring data accuracy. Officials from the Office stated that they do not have formal procedures for ensuring that data are entered into the system in a timely fashion. In addition, they stated that there are no formal processes in place to review, validate, or edit the system data after they are entered to help ensure that accurate data are entered into the system and to help ensure that erroneous data are identified, reported, and corrected. The Office stated that they plan to formalize data review, refine summary data, and institute procedures to ensure data are accurate and reliable in the next few months. As part of this effort, they plan to establish periodic quality checks and verification of data as we suggested when we met with them in January 2011. 
Officials from the Office stated that they have sufficient civil and criminal statutory authorities to enforce the LDA. The Office has increased the number of staff assigned to assist with lobbying compliance issues from 6 to 17. All of the staff continue to work on lobbying disclosure enforcement part-time and primarily in an administrative capacity. Some of their administrative activities include researching the Senate and House databases to determine if referrals have been resolved, or mailing noncompliance letters. In addition to those 17 part-time staff members, one contractor was hired in September 2010 to work on lobbying compliance issues on a full-time basis. We provided a draft of this report to the Attorney General for review and comment. We met with the Assistant U.S. Attorney for the District of Columbia, who on behalf of the Attorney General responded that DOJ had no comments. We are sending copies of this report to the Attorney General, the Secretary of the Senate, the Clerk of the House of Representatives, and interested congressional committees and members. This report is also available at no charge on the GAO Web site at http://www.gao.gov. Please contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Consistent with the audit requirements in the Honest Leadership and Open Government Act of 2007, our objectives were to (1) determine the extent to which lobbyists are able to demonstrate compliance with the Lobbying Disclosure Act of 1995 (LDA), as amended, by providing documentation to support information contained on reports filed under the LDA; (2) identify any challenges that lobbyists report to compliance and potential improvements; and (3) describe the resources and authorities available to the U.S.
Attorney’s Office for the District of Columbia (the Office), and the efforts the Office has made to improve enforcement of the LDA, including identifying trends in past lobbying disclosure compliance. To respond to our mandate, we used information in the lobbying disclosure database maintained by the Clerk of the House of Representatives. To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and spoke to officials responsible for maintaining the data. Although registrations and reports are filed through a single Web portal, each chamber subsequently receives copies of the data and follows different data cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases that result from chamber differences in data processing. For example, Senate staff told us during previous reviews that they set aside a greater proportion of registration and report submissions than the House for manual review before entering the information into the database, and as a result, the Senate database would be slightly less current than the House database on any given day pending review and clearance. House staff told us during previous reviews that they rely heavily on automated processing, and that while they manually review reports that do not perfectly match information on file for a given registrant or client, they will approve and upload such reports as originally filed by each lobbyist even if the reports contain errors or discrepancies (such as a variant on how a name is spelled). Nevertheless, we do not have reason to believe that the content of the Senate and House systems would vary substantially.
While we determined that both the Senate and House disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure reports (LD-2 reports) and for assessing whether newly filed registrants also filed required reports, we chose to use data from the Clerk of the House for sampling LD-2 reports from the last quarter of 2009 and the first three quarters of 2010, for sampling year-end 2009 and midyear 2010 contribution reports (LD-203 reports), and for matching quarterly registrations with filed reports. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House, both of which have key roles in the lobbying disclosure process, although we consulted with officials from each office, and they provided us with general background information at our request and detailed information on data processing procedures. To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a stratified random sample of 100 LD-2 reports from the fourth quarter of calendar year 2009 and the first, second, and third quarters of calendar year 2010, with 25 reports selected from each quarter. We excluded reports with no lobbying activity or with income of less than $5,000 from our sampling frame and drew our sample from the 55,282 activity reports filed for the last quarter of 2009 and the first three quarters of 2010 available in the public House database, as of our final download date for each quarter. One LD-2 report in the sample was amended after the registrant was notified of its selection for the sample but prior to our review. The amended LD-2 report decreased lobbying activity income for that quarter from $21,000 to less than $5,000. Further, that report was amended to show no lobbying contact, whereas the original LD-2 activity report showed lobbying contact with the Senate and House.
We reviewed this report because it was amended to show no lobbying activity, with lobbying income of less than $5,000, following notification of its inclusion in the sample. Since “no lobbying activity” was indicated on the amended LD-2 activity report, the lobbyists were not required to provide information for all reporting elements on the LD-2. Therefore, in certain calculations this 1 report is excluded from the sample. Our sample is based on a stratified random selection, and it is only one of a large number of samples that we could have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples that we could have drawn. All percentage estimates in this report have 95 percent confidence intervals of within plus or minus 10.0 percentage points or less of the estimate itself, unless otherwise noted. When estimating compliance with certain of the elements we examined, we base our estimate on a one-sided 95 percent confidence interval to generate a conservative estimate of either the minimum or maximum percentage of reports in the population exhibiting the characteristic. We contacted all the lobbyists and lobbying firms in our sample and asked them to provide support for key elements in their reports, including: the amount of income reported for lobbying activities, the amount of expenses reported for lobbying activities, the names of the lobbyists listed in the report, the houses of Congress and federal agencies that they lobbied, and the issue codes for the issues on which they lobbied. In addition, we determined whether each individual lobbyist listed on the LD-2 report had filed a semiannual LD-203 report.
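The confidence-interval conventions described above can be illustrated with a short sketch. The example below uses the simple normal approximation for a sample proportion; it is an illustration only, since the report's actual estimates also account for the stratified sample design and finite-population corrections, which are not shown here.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Two-sided 95 percent confidence interval for a sample proportion
    (normal approximation; illustrative sketch only)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def one_sided_lower_bound(successes, n, z=1.645):
    """One-sided 95 percent lower bound, the kind of conservative
    minimum-percentage estimate described above."""
    p = successes / n
    return max(0.0, p - z * math.sqrt(p * (1 - p) / n))
```

For example, with 68 supported reports in a sample of 100, the two-sided interval spans roughly 59 to 77 percent, consistent with the report's convention of intervals within plus or minus 10 percentage points of the estimate.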
Prior to interviewing lobbyists about each LD-2 report in our sample, we conducted an open-source search to determine whether each lobbyist listed on the report appeared to have held a covered official position required to be disclosed. For lobbyists registered prior to January 1, 2008, covered official positions held within 2 years of the date of the report must be disclosed; this period was extended to 20 years for lobbyists who registered on or after January 1, 2008. Lobbyists are required to disclose covered official positions on either the client registration (LD-1) or on the first LD-2 report for a specific client, and consequently those who had held covered official positions may have disclosed the information on an LD-2 report filed prior to the report we examined as part of our random sample. To identify likely covered official positions, we examined lobbying firms’ Web sites and conducted extensive open-source searches of Leadership Directories, Who’s Who in American Politics, and U.S. newspapers through Nexis for lobbyists’ names and variations on their names. We then examined the current LD-2 report under review, prior LD-2 reports, and the client registration to determine if the identified covered positions were disclosed properly. Finally, for each lobbyist listed on the LD-2 report whom we had identified as having held a covered official position for which we had found no disclosure, we asked the lobbying firm or organization whether the position had been appropriately disclosed or whether there was some other acceptable reason for the omission (such as disclosure on an earlier registration or LD-2 report). Despite our rigorous search protocol, it is possible that our search failed to identify omitted reports of covered official positions.
Thus, our estimate of the proportion of reports with lobbyists who failed to appropriately disclose covered official positions is a lower-bound estimate of the minimum proportion of reports that failed to report such positions. In addition to examining the content of LD-2 reports, we confirmed whether year-end 2009 and midyear 2010 LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample. Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports. To determine if the LDA’s requirement for registrants to file a report in the quarter of registration was met for the fourth quarter of 2009 and the first, second, and third quarters of 2010, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports. Using direct matching and text and pattern matching procedures, we were able to identify matching disclosure reports for 4,132 of the 4,553, or 90.8 percent, of newly filed registrations. We began by standardizing client and registrant names in both the report and registration files (including removing punctuation and standardizing words and abbreviations, such as “Company” and “Co.”). We then matched reports and registrations using the House identification number (which is linked to a unique registrant-client pair), as well as the names of the registrant and client. For reports we could not match by identification number and standardized name, we also attempted to match reports and registrations by client and registrant name, allowing for variations in the names to accommodate minor misspellings or typos.
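The standardization and two-stage matching procedure just described can be sketched as follows. This is a simplified illustration: the field names (`house_id`, `registrant`, `client`) and the abbreviation table are assumptions, and the allowance for misspellings and typos described above is reduced here to exact matching on standardized names.

```python
import re

# Assumed abbreviation table for illustration; the actual list was larger.
ABBREVIATIONS = {"company": "co", "corporation": "corp", "incorporated": "inc"}

def standardize(name):
    """Lowercase, strip punctuation, and normalize common business-name
    variants, mirroring the standardization step described above."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    return " ".join(ABBREVIATIONS.get(w, w) for w in name.split())

def match_registrations(registrations, reports):
    """Stage 1: match on the House identification number.
    Stage 2: fall back to standardized registrant and client names."""
    by_id = {r["house_id"]: r for r in reports}
    by_name = {(standardize(r["registrant"]), standardize(r["client"])): r
               for r in reports}
    matched = {}
    for reg in registrations:
        report = by_id.get(reg["house_id"]) or by_name.get(
            (standardize(reg["registrant"]), standardize(reg["client"])))
        if report is not None:
            matched[reg["house_id"]] = report
    return matched
```

For instance, `standardize("Acme Company, Inc.")` and `standardize("ACME Co Inc")` both yield `acme co inc`, allowing a registration and report to be paired even when the raw name strings differ.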
We could not readily identify matches in the report database for the remaining registrations using electronic means. To assess the accuracy of the LD-203 reports, we analyzed two stratified random samples drawn from the 32,893 total LD-203 reports. The first sample contains 80 of the 10,956 reports with political contributions, and the second contains 80 of the 21,937 reports listing no contributions. Each sample contains 40 reports from the year-end 2009 filing period and 40 reports from the midyear 2010 filing period. The samples allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the reports without contributions to within a 95 percent confidence interval of plus or minus 7.1 percentage points or less, and to within 3.5 percentage points of the estimate when analyzing both samples together. We analyzed the contents of the LD-203 reports and compared them to contribution data found in the publicly available Federal Election Commission (FEC) political contribution database. For our fiscal year 2009 report, we interviewed staff at the FEC responsible for administering the database and determined that the data were sufficiently reliable for the purpose of confirming whether an FEC-reportable disclosure listed in the FEC database had been reported on an LD-203. We compared the FEC-reportable contributions reported on the LD-203 reports with information in the FEC database. The verification process required text and pattern matching procedures, and we used professional judgment when assessing whether an individual listed is the same individual filing an LD-203. For contributions reported in the FEC database and not on the LD-203, we asked the lobbyists or organizations to provide an explanation of why the contribution was not listed on the LD-203 report or to provide documentation of those contributions.
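The cross-check of FEC records against LD-203 entries can be sketched in the same spirit. The record layout below is an assumption for illustration; in practice, as noted above, pattern matching and professional judgment were applied to decide whether two name variants referred to the same individual, which a simple key comparison like this does not capture.

```python
def normalize(contribution):
    """Reduce a contribution record to a comparable key (illustrative;
    real matching also required judgment about name variants)."""
    return (contribution["contributor"].lower().strip(),
            contribution["recipient"].lower().strip(),
            contribution["amount"])

def unreported_contributions(fec_records, ld203_records):
    """Return FEC-reportable contributions with no matching LD-203 entry,
    i.e., the candidates for follow-up with the lobbyist or organization."""
    reported = {normalize(c) for c in ld203_records}
    return [c for c in fec_records if normalize(c) not in reported]
```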
As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist’s LD-203 report. We did not estimate the percentage of other non-FEC political contributions that were omitted (such as honoraria or gifts to presidential libraries). We obtained views from lobbyists included in our sample of reports on any challenges to compliance. To describe the processes used by the Office in following up on referrals from the Secretary of the Senate and the Clerk of the House, to assess data reliability in the Office’s tracking system for referrals, and to provide information on the resources and authorities used by the Office in its role in enforcing compliance with the LDA, we interviewed officials from the Office and obtained information on the capabilities of the system they established to track and report compliance trends and referrals and other practices they have established to focus resources on enforcement of the LDA; the extent to which they have implemented data reliability checks into their tracking system; and the level of staffing and resources dedicated to lobbying disclosure enforcement. The Office provided us with reports from the tracking system on the number and status of cases referred, pending, and resolved. The mandate does not include identifying lobbyists who failed to register and report in accordance with LDA requirements, or determining whether those lobbyists that did register and report disclosed all lobbying activity or contributions. We conducted this performance audit from April 2010 through March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The random sample of lobbying disclosure reports we selected was based on unique combinations of registrant lobbyists and client names (see table 3). See table 4 for a list of lobbyists and lobbying firms from our random sample of lobbying contribution reports with contributions. See table 5 for a list of lobbyists and lobbying firms from our random sample of lobbying contribution reports without contributions. In addition to the contacts named above, Robert Cramer, Associate General Counsel; Bill Reinsberg, Assistant Director; Shirley Jones, Assistant General Counsel; Crystal Bernard; Amy Bowser; Anna Maria Ortiz; Melanie Papasian; Katrina Taylor; Megan Taylor; and Greg Wilmoth made key contributions to this report. Assisting with lobbyist file reviews and interviews were Sarah Arnett, Sandra Beattie, Colleen Candrl, Irina Carnevale, Jeffrey DeMarco, Nicole Dery, Shannon Finnegan, Robert Gebhart, Meredith Graves, Lauren Grossman, Amanda Harris, Lois Hanshaw, Angela Leventis, Blake Luna, Patricia MacWilliams, Stacy Ann Spence, Jonathan Stehle, and Daniel Webb.
The Honest Leadership and Open Government Act of 2007 requires that GAO annually (1) determine the extent to which lobbyists can demonstrate compliance with disclosure requirements, (2) identify any challenges that lobbyists report to compliance, and (3) describe the resources and authorities available to the U.S. Attorney's Office for the District of Columbia (the Office), and the efforts the Office has made to improve its enforcement of the Lobbying Disclosure Act of 1995 as amended (LDA). This is GAO's fourth report under the mandate. GAO reviewed a stratified random sample of 100 lobbying disclosure reports filed from the fourth quarter of calendar year 2009 through the third quarter of calendar year 2010. GAO also selected two random samples totaling 160 reports of federal political campaign contributions from year-end 2009 and midyear 2010. This methodology allowed GAO to generalize to the population of 55,282 disclosure reports with $5,000 or more in lobbying activity. GAO also met with officials from the Office regarding efforts to focus resources on lobbyists who fail to comply. GAO provided a draft of this report to the Attorney General for review and comment. The Assistant U.S. Attorney for the District of Columbia responded on behalf of the Attorney General that the Department of Justice had no comments on the draft of this report. Lobbyists were generally able to provide documentation to support the amount of income and expenses reported; however, less documentation was provided to support other items in their disclosure reports. This finding is similar to GAO's results from prior reviews. There are no specific requirements for lobbyists to create or maintain documentation related to disclosure reports they file under the LDA. 
For income and expenses, two key elements of the reports, GAO estimates that lobbyists could provide documentation for approximately 97 percent of the disclosure reports for the fourth quarter of 2009 and the first three quarters of 2010. According to the documentation lobbyists provided for income and expenses, GAO estimates the amount disclosed was supported for 68 percent of disclosure reports. After GAO's review, 21 lobbyists stated that they planned to amend their disclosure reports to make corrections on one or more data elements. As of March 2011, 12 of the 21 had amended their disclosure reports. For political contributions reports, GAO estimates that a minimum of 2 percent of reports failed to disclose political contributions that were documented in the Federal Election Commission database. The majority of lobbyists who newly registered with the Secretary of the Senate and Clerk of the House of Representatives in the last quarter of 2009 and first three quarters of 2010 filed required disclosure reports for that period. GAO could identify corresponding reports on file for lobbying activity for 90 percent of registrants. The majority of lobbyists felt that the terms associated with disclosure reporting were clear and understandable. The few lobbyists who stated that disclosure reporting terminology remained a challenge highlighted areas of potential inconsistency and confusion in applying the terms associated with the disclosure reporting requirements. Some lobbyists reported a lack of clarity in determining lobbying activities versus non-lobbying activities. A few lobbyists stated that they misreported on their disclosure reports because they carried information from old reports to new reports without properly updating the information. The Office is responsible for enforcement of the LDA and has the authority to pursue a civil or criminal case for noncompliance.
To enforce LDA compliance, the Office has primarily focused on sending letters to lobbyists who have potentially violated the LDA by not filing disclosure reports. For calendar years 2008 and 2009, the Office sent 1,597 noncompliance letters for disclosure reports and political contributions reports. About half of the lobbyists who received noncompliance letters are now compliant. In response to an earlier GAO recommendation, the Office has developed a system to better focus enforcement efforts by tracking and recording the status of enforcement activities. The system allows the Office to monitor lobbyists who continually fail to file the required disclosure reports. The Office stated that they plan to institute procedures to formalize data review, refine summary data, and ensure data are accurate and reliable in the next few months.
Afghanistan is one of the world’s poorest countries and ranks near the bottom of virtually every development indicator category, including life expectancy; literacy; nutrition; and infant, child, and maternal mortality. According to the most recent National Risk and Vulnerability Assessment conducted by the government of Afghanistan between 2007 and 2008, the Afghan poverty rate was 36 percent. The highest rates of poverty were among nomads and rural farmers and varied across regions and provinces. (For additional information on regional poverty in Afghanistan see GAO-10-756SP.) The survey also found that agricultural activities provided the Afghan population’s primary livelihood; 55 percent of households were engaged in farming and 68 percent had livestock. According to the World Bank, the agricultural sector accounts for 30 percent of Afghanistan’s gross domestic product. The National Risk and Vulnerability Assessment states that agricultural productivity is hampered by water shortage, lack of credit, insufficient outreach of agricultural and veterinary extension services, and poor access to markets. Afghanistan suffers from limited means to capture water resources, soil degradation, overgrazing, deforestation, and desertification. As shown in figure 1, Afghanistan is mountainous and much of its land is not arable. According to the National Risk and Vulnerability Assessment, household access to arable land increased between 2005 and 2007/2008, largely due to increasing access to irrigated land across urban, rural, and nomadic households. Additionally, in 2007/2008, 40 percent of households nationwide had access to irrigated land and 17 percent had access to rain-fed land. Farms in Afghanistan averaged 1.4 hectares for irrigated land and 2.8 hectares for rain-fed land.
The survey also found that wheat was the most frequently cited crop produced on irrigated and rain-fed land during the summer planting season, followed by opium and potatoes on irrigated land and cotton and barley on rain-fed land; corn, sorghum, and rice were grown on irrigated land during the winter planting season. Some households also grew fruit and nut trees and grapes. (For additional information on major crops grown in Afghanistan see GAO-10-756SP.) As figure 2 shows, between fiscal year 2002 and March 31, 2010, USAID awarded about $1.4 billion to 41 agricultural-assistance programs in Afghanistan, with almost two-thirds of the amount (about $900 million) disbursed. As table 1 shows, disbursements of U.S. funds for agricultural programs represented 14 percent of all USAID assistance disbursed in Afghanistan from fiscal year 2002 through March 31, 2010. Moreover, the percentage of USAID’s total assistance to Afghanistan disbursed to agricultural programs has increased from 6 percent in fiscal years 2002-2004 to 17 percent in fiscal years 2008 and 2009. Appendix II has more funding information on USAID’s agricultural programs, including the eight agricultural programs in our review. The administration requested $827 million in fiscal year 2010 for USAID agricultural-assistance programs. In fiscal years 2002–2003, to help address the complex humanitarian crisis in Afghanistan, the U.S. government provided emergency assistance that helped avert a famine, significantly reduced the suffering of the most vulnerable Afghans, and assisted the return of refugees. USAID provided Afghanistan with 355,270 metric tons of wheat and other emergency food assistance (valued at $206.4 million), and the U.S. Department of Agriculture provided 79,600 metric tons of surplus wheat (valued at $38.7 million). Over the 2-year period, the United States provided over 60 percent of all international food assistance received by Afghanistan.
According to the World Food Program, the food assistance provided by the United States and the international community helped avert famine in Afghanistan. As we previously reported, from 2002 through 2004, increased opium poppy cultivation spread and drug trafficking grew as a threat to Afghanistan’s security and stability. During that time, the United States supported Afghan- and United Kingdom-led counternarcotics efforts. These efforts reportedly had little effect on the illicit narcotics industry because of limited security and stability across Afghanistan. In response, the U.S. government made counternarcotics a top U.S. priority and developed a strategy in 2004 to reduce poppy cultivation, drug production, and trafficking, shifting the emphasis of the United States’ agricultural assistance programs in Afghanistan from food security programs to counternarcotics-related ADP. This part of the U.S. counternarcotics strategy was intended to offer incentives to stop opium poppy production by helping farmers and farm laborers obtain other ways to earn a living. The strategy also called for strong disincentives such as forced eradication, interdiction, and law enforcement, while at the same time spreading the Afghan government’s antinarcotics message. The United States’ efforts also were expected to build the Afghan government’s capacity to conduct counternarcotics efforts on its own. As part of its counternarcotics efforts, beginning in 2005, USAID awarded most of its new agricultural funds to alternative-development programs (ADP)—to (1) increase agricultural productivity, (2) accelerate economic growth, and (3) eliminate illicit drug cultivation. As figure 3 shows, between 2005 and 2008, $494 million, or 71 percent of new awards were for ADP. During this time period, USAID funded ADP, as well as other large agricultural programs. 
Figure 4 provides a brief description of the goals and objectives of the eight programs included in this review: beginning in 2005, ADP-Northeast, ADP-South, and ADP-East; beginning in 2006, the Accelerating Sustainable Agriculture Program; beginning in 2008, ADP-Southwest, the Afghanistan Water, Agriculture, and Technology Transfer program, and the Afghanistan Vouchers for Increased Production in Agriculture program; and beginning in 2009, the Incentives Driving Economic Alternatives-North, East and West, as a follow-on program to ADP-East. USAID identified six of the eight programs as ADP, excluding the Accelerating Sustainable Agriculture Program and the Afghanistan Vouchers for Increased Production in Agriculture program. Appendix III specifies the provinces where the eight agricultural programs included in our review operated and gives examples of the range of projects that these programs implemented. In 2009, under the direction of the President and the Special Representative for Afghanistan and Pakistan, the United States shifted the focus of its agricultural strategy in Afghanistan from counternarcotics to counterinsurgency efforts. This shift de-emphasized eradication. According to the Special Representative for Afghanistan and Pakistan, eradication unduly punished and alienated farmers for making a “rational economic decision,” while ignoring the profits gleaned by traffickers and insurgents from the sale of processed opium and heroin. Further, the Administration noted that economic growth and new job creation were critical to U.S. counterinsurgency efforts in Afghanistan because they provide licit alternatives to narcotics- and insurgent-related activities and connect people to their government. As a result, the Administration integrated agricultural programs with other U.S. efforts, including military operations, and directed more resources to the agricultural sector. 
For example, the Afghanistan Vouchers for Increased Production in Agriculture program, which originally operated outside of the southeastern portion of Afghanistan, was expanded to Helmand and Kandahar in 2010, where according to Department of Defense officials, the United States and Afghanistan have begun military operations to break the momentum of the insurgency. Furthermore, the Administration increased the involvement of the U.S. Department of Agriculture and aligned U.S. efforts with the current agricultural priorities of the Afghan government, as laid out in the Ministry of Agriculture, Irrigation, and Livestock’s National Agriculture Development Framework. The strategy focuses on the following four areas:

1. increasing agricultural productivity by increasing farmers’ access to quality inputs, such as improved seeds and fertilizer, and effective extension services;

2. regenerating agribusiness by increasing linkages between farmers, markets, credit, and trade corridors;

3. rehabilitating watersheds and improving irrigation structure; and

4. increasing the Ministry of Agriculture, Irrigation, and Livestock’s capacity to deliver services and promote the private sector and farmer associations through direct budget and technical assistance.

USAID’s Automated Directives System establishes performance management and evaluation procedures that USAID expects its staff to follow in planning, monitoring, and evaluating its agricultural assistance programs in Afghanistan. USAID operationalized these procedures through the development of a Mission Performance Management Plan (PMP). Similarly, USAID requires its implementing partners to develop monitoring and evaluation plans, which are generally included in implementing partners’ program PMPs. The collection of planning, monitoring, and evaluating efforts, when taken together, enable USAID to manage the performance of its programs. 
While USAID has noted that Afghanistan is an insecure environment in which to implement its programs, the agency has generally required the same performance management and evaluation procedures as it does in other countries in which it operates. In October 2008, USAID approved new guidance that outlined several alternative monitoring methods—especially when site visits are difficult or not possible—in high threat environments such as Afghanistan. This guidance, however, was not disseminated to USAID staff until December 2009, and the USAID Mission to Afghanistan agricultural staff did not become aware of the guidance until June 2010. This guidance was included in GAO’s review where applicable. Figure 5 presents a summary of USAID’s Automated Directives System’s performance management and evaluation procedures it expects its staff to follow, grouped into planning, monitoring, and evaluating categories. USAID’s Automated Directives System requires USAID officials to complete a Mission PMP for each of its high-level objectives as a tool to manage its performance management and evaluation procedures. In line with this requirement, USAID’s Mission to Afghanistan developed its first PMP in 2006; the document covered the years 2006, 2007, and 2008. The Mission operated without a PMP to guide its efforts after 2008. According to USAID, the Mission is in the process of developing a new missionwide PMP that will reflect the current Administration’s priorities and strategic shift to counterinsurgency. USAID expects the new PMP to be completed by the end of fiscal year 2010. The Mission attributed the delay in creating the new PMP to the process of developing new strategies in different sectors and gaining approval from the U.S. Embassy in Afghanistan and from agency headquarters in Washington. Overall, the 2006-2008 Mission PMP incorporated key planning activities. 
For example, the PMP identified indicators, established baselines and targets, planned for data quality assessments, and described the frequency of data collection for four high-level objectives for all USAID programs in Afghanistan, including its agricultural programs. The eight agricultural programs we reviewed all fell under one of these four high-level objectives identified in the Mission PMP—“developing a thriving licit economy led by the private sector.” In addition, the PMP described regular site visits, random data checks, and data quality assessments as the means to be used to verify and validate information collected. Furthermore, the Mission PMP noted that the PMP was developed to enable staff to actively and systematically assess their contributions to the Mission’s program results and take corrective action when necessary. It noted that indicators, when analyzed in combination with other information, provide data for program decision making. The Mission PMP did not include plans for evaluations and special studies for the high-level objective that the eight programs included in this review supported; but according to USAID, the agency has planned evaluations for seven of the eight agricultural programs included in this review during fiscal year 2010. In addition, USAID has planned to conduct evaluations of agricultural depots and veterinarian field units, activities included in several agricultural programs. Similar to the Automated Directives System’s requirement that USAID develop a Mission PMP as a planning tool to manage the process of monitoring and evaluating progress—including establishing targets for each performance indicator—implementing partners are required to develop and submit monitoring and evaluation plans to USAID for approval. 
To keep its performance-management system cost-effective, reduce its burden, and ensure implementing partner activities and USAID plans are well-aligned, USAID requires its implementing partners to integrate performance-data collection in their performance-management activities and work plans. In fulfilling this requirement, the implementing partners submitted monitoring and evaluation plans for the eight programs included in our review to USAID for approval. The implementing partners’ plans, among other things, generally contained goals and objectives, indicators, and targets. However, we found that USAID had not always approved these plans and did not require implementing partners to set targets for each of their indicators, which are needed to assess program performance. Figure 6 shows the number of performance indicators by fiscal year with targets that the implementing partner developed and submitted to USAID for approval. The number of indicators with targets varied over time. The three programs we reviewed that were active in 2005 identified indicators and, in some cases, targets in their monitoring and evaluation plans to track progress; however, according to implementing partners, USAID did not approve these plans in 2005. Implementing partners for the eight programs we reviewed were contractually required to submit monitoring and evaluation plans. According to implementing partners, USAID was developing a common set of indicators for all three programs to track. In 2006, all three programs were requested to revise their monitoring and evaluation plans and develop PMPs that included a common set of performance indicators. All three programs submitted revised plans in their 2006 PMPs; however, USAID subsequently approved only two out of the three program PMPs. ADP-South’s PMP was never formally approved during the life of the program. USAID officials were unable to explain or provide reasons for the lack of approval. 
In addition, while ADP-South and ADP-East were intended to end early in fiscal year 2009, when their contracts were extended into fiscal year 2009, these programs were not required to set targets for all of their indicators during the additional time frame. The USAID officials we spoke with were uncertain as to why their predecessors did not require this of the implementing partners. In addition, several of the other programs, such as the Afghanistan Water, Agriculture, and Technology Transfer and the Afghanistan Vouchers for Increased Production in Agriculture programs did not establish targets for all of their indicators. As a result, in fiscal year 2009, out of the seven active agricultural programs we reviewed, two had set targets for all of their indicators. According to USAID’s Automated Directives System, monitoring efforts should include, among other things, collecting performance data, assessing data quality, identifying limitations, and taking steps to mitigate data limitations. USAID regularly collected program reports containing performance data from implementing partners for the eight programs we reviewed and assessed data quality, as well as mitigated data limitations, by conducting site visits when feasible, regularly communicating with implementing partners, and completing a data quality assessment for performance data. USAID assigned a monitoring official—known as an agreement or contracting officer’s technical representative—to oversee implementing partners’ activities for each of the eight agricultural programs we reviewed. Monitoring officials identified quarterly reports submitted by implementing partners as key documents used to collect performance data. To assess data quality and make efforts to mitigate data limitations, USAID conducted site visits and documented these efforts by completing monitoring reports, progress reports, and trip reports. 
According to USAID, Afghanistan’s insecure environment limited the frequency of some site visits and monitoring officials’ ability to consistently verify reported data. As such, the frequency of site visits varied within and across programs. Moreover, also according to USAID, formal site visit reports are seldom completed. As a result of time constraints, documentation of site visits is often limited to photographic documentation combined with informal emails from staff participating in site visits. In 2009, USAID conducted site visits for two of the eight programs included in our review. In 2008 and 2009, the USAID Mission director cited USAID’s efforts to monitor project implementation in Afghanistan as a significant deficiency in the Mission’s Federal Managers’ Financial Integrity Act of 1982 Annual Certification. These assessments raised concerns that designated USAID staff are “prevented from monitoring project implementation in an adequate manner with the frequency required” and noted that there is a high degree of potential for fraud, waste, and mismanagement of Mission resources. USAID further noted that the deficiency in USAID’s efforts to monitor projects will remain unresolved until the security situation in Afghanistan improves and stabilizes. 
USAID identified several actions to address the limitations to monitoring project implementation. These include placement of more staff in the field to improve monitoring capacity; use of hired security services to provide protection to Mission staff traveling to project sites; use of provincial reconstruction team staff to obtain information on the progress of USAID-funded activities where the provincial reconstruction teams operate; use of more Afghan staff, who have greater mobility than expatriate staff, to monitor projects; hiring of a contractor to monitor the implementation of construction projects and conduct regular site visits; use of Google Earth geospatial mapping to substitute for site visits; frequent and regular communication with implementing partners; collection of implementing partner videos or photographs, including spot checks of implementing partner records or files; and feedback from Afghan ministries and local officials. USAID performance management procedures require that Mission performance data reported to Washington for Government Performance and Results Act (GPRA) reporting purposes or for reporting externally on Agency performance must have had a data quality assessment at some time within the 3 years before submission. USAID established 10 Mission agricultural indicators that it reports to the joint Department of State-USAID Foreign Assistance Coordination and Tracking System. As required, USAID completed a data quality assessment for all 10 Mission agricultural indicators in November 2008. As table 3 shows, USAID’s data quality assessments generally provided high, medium, or low rankings of quality for the data collected. USAID’s assessments also identified actions to mitigate weaknesses in data quality. Suggestions for improving data quality included clarifying definitions and qualifying activities, increasing the frequency of direct monitoring, and increasing the effort to understand the impact of activities. 
In addition to collecting performance data and assessing the data’s quality, USAID’s Automated Directives System also includes the monitoring activities of analyzing and interpreting performance data in order to make program adjustments and inform higher-level decision making and resource allocation. We found that while USAID collects implementing partner performance data, or information on targets and results, the agency did not fully analyze and interpret this performance data for the eight programs in our review. Some USAID officials in Afghanistan told us that they reviewed the information reported in implementing partners’ quarterly reports in an effort to analyze and interpret the performance of the eight programs in our review, although they could not provide any documentation of these efforts. Some USAID officials also said that they did not have the time to fully review the reports. As a result, the extent to which USAID uses performance data to make program adjustments and inform higher-level decision making and resource allocation is unclear. As previously noted, efforts to monitor program performance as outlined in USAID’s Automated Directives System should include and document decisions on how performance data will be used to guide higher-level decision making and resource allocation. Additionally, USAID is required to report results to advance organizational learning and demonstrate USAID’s contribution to overall U.S. government foreign assistance goals. While USAID did not fully analyze and interpret program data, the Mission does meet semiannually to examine and document strategic issues and determine whether the results of USAID-supported agricultural activities are contributing to progress toward high-level objectives. With respect to reporting of results, the Mission reported aggregate results in the Foreign Assistance Coordination and Tracking System—discussed earlier. 
USAID’s Automated Directives System requires USAID to undertake at least one evaluation for each of its high-level objectives; to disseminate the findings of evaluations; and to use evaluation findings to further institutional learning, inform current programs, and shape future planning. As noted earlier, each of the eight agricultural programs included in this review supports the high-level objective of “developing a thriving licit economy led by the private sector.” In May 2007, USAID initiated an evaluation covering three of the eight agricultural programs included in our review—ADP-Northeast, ADP-East, and ADP-South. This evaluation was intended to assess the progress of the alternative-development initiatives toward achieving program objectives and offer recommendations for the coming years. The evaluators found insufficient data to evaluate whether the programs were meeting objectives and targets and, thus, shifted their methodology to a qualitative review based on interviews and discussions with key individuals. In accordance with its evaluation requirements, USAID posted the evaluation to its Web site for dissemination. In addition, as noted earlier, USAID is planning to conduct evaluations in fiscal year 2010 for all but one of the agricultural programs included in this review. We are uncertain of the extent to which USAID used the 2007 evaluation to adapt current programs and plan future programs. Few staff were able to discuss the evaluation’s findings and recommendations, and most noted that they were not present when the evaluation of the three ADP programs was completed and, therefore, were not aware of the extent to which changes were made to the programs. With regard to using lessons learned to plan future programs, USAID officials told us that, in planning for the Afghanistan Vouchers for Increased Production in Agriculture program, key donors met with USAID staff and the Afghan government to share ideas and lessons learned from other programs. 
However, the officials could not provide documentation of this discussion or examples of how programs were modified as a result of the discussion. Based on our assessment of USAID implementing partner data, we found that six of the eight agricultural programs we reviewed fell short of achieving their established targets for several of their performance indicators. Additionally, although USAID requires implementing partners to submit information on indicators, targets, and results, as previously noted, not all indicators had established targets to allow for performance assessments. As figure 8 shows, six of the eight programs we reviewed did not meet their performance targets in the most recent year for which information was reported on performance targets. For the two programs that met all their targets, we found, as previously discussed, that they did not establish targets for several indicators and, thus, we could not fully assess performance for those indicators. We also found that the three longest-running programs in our review showed declines in performance over time. We measured performance for the eight agricultural programs in our review by comparing annual results against annual targets reported by USAID’s implementing partners. We assessed the extent to which targets were fully met. We decided that this measure of performance was appropriate because implementing partners are allowed to adjust and revise target levels to better reflect available information in the field. Our analysis is detailed in appendix IV. With respect to the three longest-running agricultural programs in our review—ADP-Northeast, ADP-South, and ADP-East—we found that the number of indicators that met or exceeded annual targets generally declined from 2006 to 2008. For example, ADP-Northeast met 33 percent of its targets in fiscal year 2006 and 29 percent of its targets in fiscal year 2008. 
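The measure described above reduces to simple arithmetic: for each program year, count the indicators with established targets and compute the share whose reported results met or exceeded those targets. The following sketch illustrates this calculation; the function name and figures are ours for illustration, not USAID's actual program data.

```python
# Illustrative sketch of the performance measure described above:
# the percentage of a program's indicators whose reported annual
# results met or exceeded their annual targets. Indicators without
# an established target are excluded, since their performance
# cannot be assessed. All figures below are hypothetical.

def percent_targets_met(indicators):
    """indicators: list of (target, result) pairs; a target of None
    means no target was established for that indicator."""
    assessable = [(t, r) for t, r in indicators if t is not None]
    if not assessable:
        return None  # no targets set, so performance cannot be assessed
    met = sum(1 for t, r in assessable if r >= t)
    return round(100 * met / len(assessable))

# Hypothetical program year: four indicators, one without a target.
fy_data = [(1000, 1200), (50, 45), (None, 30), (200, 200)]
print(percent_targets_met(fy_data))  # → 67 (2 of 3 assessable targets met)
```

Note that under this measure a program can report 100 percent of targets met while remaining only partially assessable, because indicators lacking targets drop out of the denominator entirely.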
While ADP-Northeast showed improvements in the percentage of indicators that met targets between fiscal years 2006 and 2007, the percentage declined in 2008. Similarly, ADP-South met 79 percent of its targets in fiscal year 2006, but 50 percent of its targets in fiscal year 2008. Although ADP-South showed substantial improvements in fiscal year 2009, the performance assessment was based on 5 out of 25 indicators that had set targets (see fig. 6 earlier in the report); the remaining 20 indicators showed results, but did not have annual targets. The Mission noted that these declines coincided with declines in the security environment; however, the Mission acknowledged that it had not conducted any analysis to confirm that the security environment was the reason for the declines in performance. From 2006 to 2008, the percentage of targets met declined for indicators such as the number of full-time equivalent jobs; Afghans trained in business skills; and hectares of improved irrigation as a result of infrastructure works. Appendix V has details on the indicators, targets, and results for the latest year for which performance data were available for each of the eight programs in our review. Based on our assessment, on average, the percentage of targets met declined from 2006 to 2008 across these three programs. The longest-running program in our review that is currently active, the Accelerating Sustainable Agriculture Program, showed improvements in fiscal year 2008, but declined in fiscal year 2009 in the number of targets met. For example, the program met 0 percent of its targets in fiscal year 2007, 100 percent of its targets in fiscal year 2008, and 67 percent of its targets in fiscal year 2009. 
As shown in figure 8, trends in targets met could not be determined for the most recent programs—Afghanistan Water, Agriculture, and Technology Transfer; ADP-Southwest; Afghanistan Vouchers for Increased Production in Agriculture; and Incentives Driving Economic Alternatives-North, East and West—because sufficient data were not available to establish trends. In addition, as noted earlier, most of these programs failed to establish targets for all of their indicators and, thus, we could not assess performance for all indicators. For example, even though recent performance data show that the Afghanistan Water, Agriculture, and Technology Transfer and Afghanistan Vouchers for Increased Production in Agriculture programs met all targets for fiscal year 2009, neither program set targets for all of its indicators. In fiscal year 2009, the Afghanistan Water, Agriculture, and Technology Transfer program set targets for 3 out of 5 indicators, while Afghanistan Vouchers for Increased Production in Agriculture set targets for 2 out of 10 indicators. All indicators with targets met or exceeded their annual target. The security situation, the Afghan government’s lack of capacity, and USAID’s difficulties in providing management and staff continuity challenge the implementation of agricultural programs in Afghanistan. The security situation hinders USAID’s ability to reach key areas of the country and monitor programs. Additionally, while the Afghan government’s capacity to carry out its core functions has improved, key ministries, including the Ministry of Agriculture, Irrigation, and Livestock—which works to restore Afghanistan’s licit agricultural economy through increasing production and productivity, natural resource management, improved physical infrastructure, and market development—lack the ability to implement their missions effectively. Finally, USAID’s ability to maintain institutional knowledge has been hampered by high staff turnover. 
USAID noted difficulties in program oversight and implementation caused by the challenging security environment in Afghanistan. In November 2009, we reported that while U.S. and international development projects in Afghanistan had made some progress, deteriorating security complicated such efforts to stabilize and rebuild the country. And as we reported in May 2010, the lack of a secure environment has continued to challenge reconstruction and development efforts. Specifically, USAID has cited the security environment in Afghanistan as a severe impediment to its ability to monitor projects. For example, USAID noted that solely traveling by road to visit alternative development, food assistance, and environmental projects in rural areas of northern and eastern Afghanistan is normally not allowed due to security constraints, and must, consequently, be combined with some air travel. However, air service in much of the north and east is limited during the winter months, which has complicated oversight efforts. Similarly, USAID officials are required to travel with armored vehicles and armed escorts to visit projects in much of the country. Consequently, as USAID officials stated, their ability to arrange project visits can become restricted if military forces cannot provide the necessary vehicles or escorts because of heightened fighting or other priorities. We experienced similar restrictions to travel beyond the embassy compound when we visited Afghanistan in July 2009. For example, we were initially scheduled to visit agricultural sites in Jalalabad, but could not due to security threats. Instead, implementing partners traveled to Kabul to meet with us. According to USAID, limited monitoring has heightened the risk of fraud, waste, and mismanagement of USAID resources. 
In addition to increasing challenges in overseeing programs, the security environment has also challenged USAID’s ability to implement programs, increasing implementation times and costs for projects in nonsecure areas. In particular, U.S. officials cited poor security as having caused delays, disruptions, and even abandonment of certain reconstruction projects. For example, according to implementing partner officials, in ADP-Southwest, some 15 to 20 illegal security checkpoints run by the Taliban and criminals near major trade centers have increased costs to and endangered the lives of farmers they support. USAID predicated the success of its agricultural programs on a stable or improving security environment. In preparing its 2005-2010 strategic plan, USAID assumed that security conditions would remain stable enough to continue reconstruction and development activities. Likewise, several implementing partner documents included this assumption, and USAID officials affirmed that this assumption remains true today. Furthermore, the commander of the North Atlantic Treaty Organization-led International Security Assistance Force and U.S. forces in Afghanistan testified in his June 2009 confirmation hearing that improved security was a prerequisite for the development of local governance and economic growth in Afghanistan. However, as figure 9 illustrates, while attack levels continue to fluctuate seasonally, the annual attack “peak” (high point) and “trough” (low point) for each year since September 2005 have surpassed the peak and trough, respectively, for the preceding year. USAID has increasingly included and emphasized capacity building among its programs to address the government of Afghanistan’s lack of capacity to sustain and maintain many of the programs and projects put in place by donors. 
In 2009, USAID rated the capability of 14 of the 19 Afghan ministries and institutions it works with as 1 or 2 on a 5-point scale, with 1 representing the need for substantial assistance across all areas and 5 representing the ability to perform without assistance. For example, the Ministry of Agriculture, Irrigation, and Livestock was given a rating of 2—needing technical assistance to perform all but routine functions—while the Ministry for Rural Rehabilitation and Development was given a rating of 4—needing little technical assistance. Although USAID has noted overall improvement among the ministries and institutions in recent years, none was given a rating of 5. USAID officials noted that a key Afghan official was recently moved from the Ministry for Rural Rehabilitation and Development to enhance the Ministry of Agriculture, Irrigation, and Livestock’s capacity. As a result, USAID officials also said that they have recently begun to work more closely with the Ministry of Agriculture, Irrigation, and Livestock. According to the Afghanistan National Development Strategy, Afghanistan’s capacity problems are exacerbated by government corruption, a significant and growing problem in the country. Transparency International’s 2009 Corruption Perceptions Index ranked the country 179 out of 180. Similarly, in April 2009, USAID published an independent report, Assessment of Corruption in Afghanistan, that found that corruption, defined as “the abuse of public position for private gain,” is a significant and growing problem across Afghanistan that undermines security, development, and democracy-building objectives. According to USAID’s assessment, pervasive, entrenched, and systemic corruption is now at an unprecedented scope in the country’s history. 
The causes of corruption in Afghan public administration, according to the Afghanistan National Development Strategy, can be attributed to a lack of institutional capacity in public administration, weak legislative and regulatory frameworks, limited enforcement of laws and regulations, poor and nonmerit-based qualifications of public officials, low salaries of public servants, and a dysfunctional justice sector. Furthermore, the sudden influx of donor money into a system already suffering from poorly regulated procurement practices increases the risk of corruption and waste of resources. However, the assessment also noted that Afghanistan has or is developing most of the institutions needed to combat corruption, but these institutions, like the rest of the government, are limited by a lack of capacity, rivalries, and poor integration. The assessment also noted that the Afghan government’s apparent unwillingness to pursue and prosecute high-level corruption was particularly problematic. USAID moved to address this lack of capacity and growing corruption by including a capacity-building component in its more recent contracts. For example, the Afghanistan Water, Agriculture, and Technology Transfer program was designed to, among other things, improve the capabilities of Afghan ministries and universities by partnering with them on research- based decision making and outreach projects, and to identify water and land-use policies and institutional frameworks that encourage individuals and local, provincial, and the national governments to increase sustainable economic development. 
Likewise, the Assessment of Corruption in Afghanistan report noted that “substantial USAID assistance already designed to strengthen transparency, accountability, and effectiveness— prime routes to combat corruption—in the most critical functions of national and subnational government.” For example, the assessment points to alternative-development and agricultural efforts to create incentives to not grow poppy, but also notes that these efforts should be coordinated with enforcement efforts supported by the Departments of Defense, Justice, and State. The Administration has further emphasized capacity building by pursuing a policy of Afghan-led development, or “Afghanization,” to ensure that Afghans lead efforts to secure and develop their country. At the national level, the United States plans to channel more of its assistance through the Afghan government’s core budget. At the field level, the U.S.-assistance plan is to shift assistance to smaller, more flexible, and faster contract and grant mechanisms to increase decentralized decision making in the field. The new U.S. government agricultural strategy, linked to the U.S. effort to counter insurgency, stresses the importance of increasing the Ministry of Agriculture, Irrigation, and Livestock’s capacity to deliver services and promote the private sector and farmers’ associations through direct budget and technical assistance. However, USAID also recognized that, with the move toward direct assistance to government ministries, USAID’s vulnerability to waste and corruption is anticipated to increase. According to USAID officials, direct budget assistance to the Ministry of Agriculture, Irrigation, and Livestock is dependent on the ability of the ministry to demonstrate the capacity to handle the assistance. These officials noted that an assessment of the Ministry of Agriculture, Irrigation, and Livestock’s ability to manage direct funding was being completed. The U.S. 
Embassy has plans under way to establish a unit at the embassy to receive and program funds on behalf of the Ministry while building the Ministry’s capacity to manage the funds on its own. USAID has not taken steps to mitigate challenges to maintaining institutional knowledge, a problem exacerbated by high staff turnover. As we noted earlier, USAID did not consistently document decisions, and staff could not always respond to our questions about changes that had taken place over the life of the programs, often noting that they were not present at the time of the changes. For example, when we inquired about changes in the results and performance data reported, USAID officials in Afghanistan were not able to comment on the performance data or explain why changes were made to the data, noting that they either were not present when the changes took place or had arrived too recently to comment on the reported data. Likewise, the Special Representative for Afghanistan and Pakistan’s staff responsible for drafting the current agricultural strategy for the United States stated that they could not effectively discuss USAID program implementation over the last several years because they were not there and lacked institutional knowledge of the programs. We previously reported that USAID and other agencies in Afghanistan lack enough acquisition and oversight personnel with experience working in contingency operations. The USAID Mission to Afghanistan has experienced high staff turnover—USAID personnel are assigned 1-year assignments with an option to extend their assignment for an additional year—which USAID acknowledged hampered program design and implementation. 
In addition, the State Department Office of Inspector General noted in its recent inspection of the entire embassy and its staff, including USAID, that 1-year assignments coupled with multiple rest-and-recuperation breaks limited the development of expertise, contributed to a lack of continuity, and required a higher number of officers to achieve strategic goals. For example, the USAID monitoring officials for the eight programs we examined were in place on average 7.5 months (see table 4). Moreover, the length of time that a monitoring official is in place has declined. As of September 2009, the two most recently initiated programs, the Afghanistan Vouchers for Increased Production in Agriculture program and the Incentives Driving Economic Alternatives-North, East, and West program, have had monitoring officials in place for an average of only 3 months each. USAID has not addressed the need to ensure the preservation of institutional knowledge. USAID officials noted that the effectiveness of passing key information from one monitoring official to another is dependent on how well the current official has maintained his or her files and what guidance, if any, is left for the successor. USAID officials noted that a lack of documentation and knowledge transfer may have contributed to the loss of institutional knowledge. Agricultural development is a key element of U.S. counterinsurgency efforts in Afghanistan. The United States considers agricultural assistance a key contribution to Afghanistan’s reconstruction and stabilization. Since 2002, the United States has awarded about $1.4 billion toward agricultural programs in Afghanistan and plans to invest hundreds of millions of dollars more. As such, ensuring sufficient oversight and accountability for development efforts in Afghanistan takes on particular importance. In addition to relying on implementing partners to execute its programs, a key part of the U.S. 
oversight and accountability efforts involves a reliance on the collection and analysis of implementing partner data. These implementing partners are expected to and have reported routinely on the performance of USAID’s agricultural programs. However, USAID has not always approved performance indicators established by its implementing partners, allowing one program to operate for almost 5 years without approved performance indicators. Additionally, USAID did not ensure that its implementing partners had established targets for each performance indicator, and USAID did not consistently analyze and interpret implementing partner performance data, which is vital to making program adjustments, higher-level decisions, and resource allocation. Without a set of agreed-upon indicators and targets, and analysis and interpretation of reported performance data, it becomes more difficult to accurately assess the performance of USAID agricultural programs. It is also unclear whether or how USAID has used evaluations to further institutional learning, inform current programs, and shape future planning. Best management practices have demonstrated that routine evaluations enable program managers to identify program vulnerabilities, implement lessons learned, understand program weaknesses, and make needed improvements. Moreover, a lack of documentation of key programmatic decisions and an insufficient method to transfer knowledge to successors have contributed to the loss of institutional knowledge and the ability of the U.S. government and others to build on lessons learned. This makes it more difficult for USAID officials responsible for programmatic decisions, most of whom are in place for less than a year, to make informed decisions about shaping current and future programs. 
To enhance the performance management of USAID’s agricultural programs in Afghanistan, we recommend the Administrator of USAID take steps to ensure the approval of implementing partner performance indicators; ensure that implementing partners establish targets for all performance indicators; consistently analyze and interpret program data, such as determining the extent to which annual targets are met; make use of results from evaluations of its agricultural programs; and address preservation of institutional knowledge. USAID provided written comments on a draft of this report. The comments are reprinted in appendix VI. USAID generally agreed with the report’s findings, conclusions, and recommendations and described several initiatives that address elements of the recommendations. In further discussions with USAID to clarify its response, USAID officials stressed the challenges involved in working in Afghanistan as a result of the security environment and working conditions. They submitted additional documentation, including new guidance on monitoring in high-threat environments, which was disseminated in December 2009. USAID also provided technical comments, which we have included throughout this report as appropriate. We also provided drafts of this report to the Departments of Agriculture, Defense, and State, all of which declined to comment. We are sending copies of this report to interested congressional committees, USAID, and the Departments of Agriculture, Defense, and State. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. 
This review assesses (1) how the United States has changed the focus of its agricultural efforts in Afghanistan since 2002, (2) USAID’s performance management and evaluation efforts of agricultural programs in Afghanistan, (3) the extent to which USAID’s agricultural programs in Afghanistan met targets, and (4) USAID’s efforts to mitigate challenges in implementing agricultural programs in Afghanistan. In addition, we analyzed financial information on USAID’s programs in Afghanistan and reported on the financial status of its agricultural programs. To assess USAID’s agricultural programs in Afghanistan, including changes in focus, we met with officials from USAID, the Departments of Agriculture, Defense, and State—including the Special Representative for Afghanistan and Pakistan’s office—and implementing partners in Washington, D.C., and Kabul, Afghanistan. In Kabul, we also met with officials from the United Nations, the governments of Afghanistan and the United Kingdom, and a local research group to discuss agricultural efforts. We traveled to the provinces of Badakhshan and Farah to meet with U.S. and Afghan officials and discuss various U.S.-funded projects. For example, in Farah, we met with the local Afghan officials and beneficiaries of U.S. assistance to discuss the progress of USAID’s agricultural projects, visited a project site, and met with U.S. contractors implementing the projects. In Badakhshan, we also met with local officials and beneficiaries to discuss USAID agricultural efforts and how U.S. assistance was being used. In addition, we reviewed past GAO work and reports from other agencies in the U.S. accountability community and nongovernmental organizations on Afghanistan’s current situation and the challenges it faces. We reviewed U.S. government documents concerning the U.S. agricultural strategy and efforts in Afghanistan, as well as USAID funding data. 
Beginning in fiscal year 2005, USAID began financing agricultural projects as part of the U.S. government’s counternarcotics strategy. These were initially referred to as alternative-livelihood programs, but were later called alternative-development programs (ADP). To track this strategy over time, we reported the share of annual obligated and disbursed funds for ADP and for other agriculture activities. We focused our review on the eight USAID agricultural programs that were active between 2007 and 2009 and had total awards of more than $15 million; however, our analysis of financial information included all USAID agricultural programs. None of the agricultural programs included in GAO’s review were Office of Transition Initiatives (OTI) or Office of U.S. Foreign Disaster Assistance (OFDA) programs. We did not address agriculture-related projects carried out independently by other U.S. government agencies, such as the Department of Defense’s Commander’s Emergency Response Program, or those carried out by multilateral institutions to which the United States contributes, such as the United Nations Development Programme or World Food Program. To assess USAID’s performance management and evaluation efforts, we reviewed the Government Performance and Results Act of 1993 and pertinent GAO evaluations of performance management practices to identify best practices. In addition, we examined USAID’s Automated Directives System and USAID performance management and evaluation documents to identify the agency’s procedures, requirements, and guidance. The Government Performance and Results Act and USAID’s Automated Directives System establish requirements or provide guidance at a level higher than the program level; the former operates at the agency level and the latter directs most of its performance management and evaluations procedures to the bureau or mission level. 
Nevertheless, effective planning for results at the agency, mission, and bureau level is a function of effective planning for results at the program level. We view results-oriented management tools, including the setting of indicators and targets, as important at all levels of an agency, including the program level. Consequently, we determined that the Government Performance and Results Act criteria, which are operationalized for USAID through its Automated Directives System, are applicable at this level as well. We reviewed USAID and implementing partner planning, funding, and reporting documents for their agricultural programs in Afghanistan, as well as those addressing evaluations. Our review of these documents provided us with information regarding the programs’ performance management structure, goals, objectives, indicators, and targets. We examined these and other documents to determine the extent to which the Mission followed requirements, guidance, and best practices. To assess the extent to which performance was achieved, we reviewed all quarterly and annual reports, implementing partner performance management plans (PMP), and annual work plans for the eight agricultural programs under review. The data were primarily compiled from implementing partner quarterly reports from April 2005 through September 2009. When data were not available, we used the PMPs and other documents to fill in gaps. USAID could not provide all the needed documents; we, therefore, requested missing documents from the implementing partners. To determine the validity and reliability of the data reported in the quarterly reports, we requested USAID’s completed data quality assessments. We received one USAID data quality assessment completed for all agricultural programs from November 2008, and one program data quality assessment completed by USAID monitoring officials, also from November 2008. 
We also checked the data for inconsistencies and questioned USAID officials and implementing partners about any inconsistencies. We found the data in the quarterly reports to be sufficiently reliable for our purposes. The data collected were organized in a spreadsheet on a quarterly basis for the eight programs. To track program performance over time, we collected all reported quantitative data on indicators, targets, and results. For each reported indicator, we measured performance achieved as the ratio of results against established targets. We found that implementing partners were inconsistent in reporting on targets, by either not setting targets, or in two cases, retroactively setting or revising targets. In addition, two programs shifted reporting time frames between fiscal and calendar years. Some of the changes resulted from implementing partner audits or a USAID Regional Inspector General audit of the data collected and reported to USAID. Although USAID encourages and permits changes to targets over the life of a program in response to new information, these factors complicated attempts to determine performance. Furthermore, in some cases, in addition to targets, the results were also updated retroactively. We captured changes in target levels and reported results by inserting additional lines in a spreadsheet. This process allowed us to determine changes in targets, results, and performance achieved over time. In general, data reported in quarterly reports were presented cumulatively; however, we found this presentation masked performance achieved in a specific year. Therefore, once all cumulative data were entered into the spreadsheet, we calculated the numbers to show annual targets, results, and performance. 
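The conversion from cumulative to annual figures described above can be sketched as follows. The cumulative year-end totals in this example are hypothetical, not figures drawn from the report; the point is only that each year's result is the difference between successive cumulative totals.

```python
# Implementing partners reported results cumulatively, so annual figures
# are recovered by differencing successive year-end cumulative totals.
# The numbers below are hypothetical illustrations.
cumulative_by_year = {2006: 120, 2007: 310, 2008: 540}

annual = {}
prev = 0
for year in sorted(cumulative_by_year):
    annual[year] = cumulative_by_year[year] - prev
    prev = cumulative_by_year[year]

print(annual)  # {2006: 120, 2007: 190, 2008: 230}
```

The same differencing applies to targets, so that annual performance can be computed as annual results against annual targets rather than being masked by cumulative presentation.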
The performance data collected were categorized into eight categories: (1) met or exceeded target, (2) achieved 76 to 99 percent of target set, (3) achieved 51 to 75 percent of target set, (4) achieved 26 to 50 percent of target set, (5) achieved 1 to 25 percent of target set, (6) achieved zero progress toward target set, (7) number of indicators used to assess performance, and (8) no target set. Based on the categorical assessment, we were able to determine the number of indicators reported annually and over the life of the program in each of the categories noted above. We are reporting program performance achievements as the annual percentage of indicators that met or exceeded the target. For example, if there were 15 indicators and 9 indicators had met or exceeded the target, then annual program performance was 60 percent (9/15). We decided this measure of performance was appropriate because implementing partners are allowed to adjust and revise target levels to better reflect available information in the field. Further, we found that the percentage of indicators meeting their targets could increase or decrease for a variety of reasons, including changes in the measures, the types of measures, or the targets set, as well as changes in actual underlying performance. A review of all those factors was beyond the scope of this report. To examine the challenges faced by agricultural efforts in Afghanistan, we reviewed U.S. strategy documents and USAID documents addressing the status of and challenges faced by U.S. efforts in Afghanistan, including security, Afghan capacity and corruption, and USAID staffing and workspace concerns. We also reviewed Department of Defense documents on counterinsurgency strategy and joint diplomatic-military plans. We updated attack data on which we had previously reported. We assessed the data and found them to be sufficiently reliable for our purposes. 
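The categorization and the annual-performance measure described above can be sketched as follows. The handling of fractional percentages at bucket boundaries is an assumption on our part; the report states the buckets only as whole-percent ranges. The sample data replicate the report's own worked example of 9 of 15 indicators meeting their targets.

```python
def categorize(result, target):
    """Place a result-against-target ratio into one of the report's categories."""
    if target is None:
        return "no target set"
    pct = 100 * result / target
    if pct >= 100:
        return "met or exceeded target"
    if pct >= 76:
        return "achieved 76 to 99 percent of target"
    if pct >= 51:
        return "achieved 51 to 75 percent of target"
    if pct >= 26:
        return "achieved 26 to 50 percent of target"
    if pct >= 1:
        return "achieved 1 to 25 percent of target"
    return "zero progress toward target"

def annual_performance(indicators):
    """Percentage of indicators that met or exceeded their targets."""
    met = sum(1 for result, target in indicators
              if target is not None and result >= target)
    return 100 * met / len(indicators)

# The report's example: 9 of 15 indicators met or exceeded their targets.
sample = [(1, 1)] * 9 + [(0, 1)] * 6
print(annual_performance(sample))  # 60.0
```

Note that indicators without targets count toward the denominator here but can never be "met," which mirrors why missing targets make performance harder to assess.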
We reviewed Afghan government and nongovernmental organization reports regarding capacity and corruption in Afghanistan. To compile a list of USAID monitoring officials, we reviewed the names listed in data USAID provided. To report financial information on USAID agriculture programs and individual projects, we used financial information from the “Pipeline Report” generated from USAID’s Phoenix financial management information system provided to us by the Office of Financial Management at the USAID Mission to Afghanistan. This report contains cumulative financial information for individual projects that may be funded through a contract, cooperative agreement, or grant. We received pipeline reports for all USAID projects in Afghanistan as of the end of fiscal years 2004-2009 and March 31, 2010, and for selected agriculture projects, we received quarterly pipeline reports from fiscal years 2005-2009. The March 31, 2010, pipeline report contains financial information on USAID projects in Afghanistan from fiscal year 2002 to March 31, 2010, for 254 projects, of which 41 were agriculture projects. We also checked the data for inconsistencies and questioned USAID officials about any inconsistencies. To describe the financial status of the USAID agriculture program, we used three financing concepts: award, unliquidated obligation, and disbursement. These are related to, but not exactly the same as, budget concepts. Award refers to the dollar amount of the award in a signed contract, cooperative agreement, or grant. The signed document indicates the period of time over which the project is expected to be implemented. The amount of the award, the time frame, and other elements of the contractual agreement may be changed through a formal amendment process. Unliquidated obligations represent the current amount of obligations remaining to be disbursed. Disbursements are those funds that have been released from the U.S. Treasury. 
Cumulative obligations are the total of unliquidated obligations plus disbursements. The e-supplement presents information on Afghanistan for select indicators, including poverty rates by province and region, and the number of households that produced crops during the summer and winter seasons during 2007/2008. The information comes from supplementary tables to a Government of Afghanistan household survey, National Risk and Vulnerability Assessment 2007/8, http://nrva.cso.gov.af/index.html, released online in January 2010. The survey covered the period from September 2007 through August 2008, and was conducted with the financial and technical assistance of the European Commission and several other organizations. The report groups contiguous provinces into 8 regions. For each of the 34 provinces and 8 regions, we report the population and compute the share of population that has access to land, has access to safe drinking water, is literate (ages 15 years and over), is urban, has access to electricity, has access to public health facilities (1 hour or less by foot), and owns livestock. The poverty rate is the share of population living on less than a minimum level of food and nonfood (for example, shelter) consumption. In the map, provinces are categorized into 5 groups from high to low ranges of poverty rates. These are from a World Bank analysis of the assessment’s data. These are ranges of poverty rates and not confidence intervals. The poverty rate is presented only for the regions, as a national average for the year, and for seasons of the year. For a list of 25 crops, including other crops, the report lists the number of households that produce each crop during the summer and winter seasons on irrigated land. Since more than one crop may be grown each season by a farmer, the assessment reports the number of households that report each crop as its primary, second, or third crop. 
For example, during the summer season, 1,349,200 households cultivate a primary crop, 889,200 households (66 percent) cultivate a second crop, and 448,700 households (33 percent) produce a third crop. For each crop, we computed the number of households that reported producing that crop, regardless of whether it was the primary, second, or third crop, for each season. We also provide tables showing the number of households that report producing crops as the primary, second, or third crop, for each season. There is no information on the number of households that produce different crops on rain-fed land. The assessment reports that 591,000 households have access to rain-fed land, but some of these households may also have access to irrigated land. We reviewed the survey methodology of the National Risk and Vulnerability Assessment 2007/8 report and found it to be sufficiently sound, particularly given the challenging environment in which the data were collected and the potentially sensitive nature of the questionnaire topics. That said, we were not able to ascertain information on some aspects of the survey, which would have helped shed light on its quality. A survey’s design can be judged by its success or failure in minimizing the following types of errors. Sampling error. The report provides no information on the precision of its estimates. Usually, this is expressed in a confidence interval. We cannot, therefore, judge the reliability of the point estimates. Nonresponse error. The report mentions a process of replacing sampled households from a reserve list in the event of noncontact. However, no data were kept on how frequently these replacements were used, so it is not possible to calculate a response rate. Officials reported that this replacement rate was low. Coverage error. Sixty-eight of the 2,441 primary sampling units were replaced, mostly due to security concerns. 
Such coverage errors could lead to a coverage bias if those covered are categorically different from those not covered with respect to variables of interest. Measurement error. While most of the questionnaire is not of a sensitive nature, we have to be aware that farmers might not be completely honest with a government interviewer when it comes to the cultivation of illicit crops. As such, our assessment was based only on information that was made available about the survey methodology. We conducted this performance audit from March 2009 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Summary financial information about the eight agriculture programs discussed in our report is reported in table 5. Because we selected relatively large programs, the total amount of funds that USAID planned to spend on these eight programs was over $1 billion, or 75 percent of the total awards for all 41 agriculture programs. One program, the Afghanistan Vouchers for Increased Production in Agriculture program, accounts for 26 percent of all agricultural awards. As of the end of fiscal year 2009, these eight programs accounted for 69 percent of total USAID agricultural assistance disbursements, with one program accounting for 18 percent of all disbursements. The following figure shows the provinces in which the eight agricultural programs we reviewed were active and an example of the types of activities they undertook. The following tables provide information on the annual number of indicators that fell into one of the eight percentage categories. Performance is measured by comparing results against targets. 
We assessed annual performance—the number of indicators that met or exceeded their target—for each of the eight programs to highlight improvements and declines in performance that took place in a given fiscal year. As shown below, the data collected were organized into eight categories: (1) met or exceeded target, (2) achieved 76 to 99 percent of target set, (3) achieved 51 to 75 percent of target set, (4) achieved 26 to 50 percent of target set, (5) achieved 1 to 25 percent of target set, (6) achieved zero progress toward target set, (7) number of indicators used to assess performance, and (8) no target set. Based on the categorical assessment, we were able to determine the number of indicators that fell into one of the eight categories, the number of indicators with target levels reported, and the total number of indicators reported. For each fiscal year, the first column represents the number of indicators whose performance fell within an indicated range. The second column is the percentage of the total number of indicators (with and without targets) that fell within an indicated range. Please note this percentage is based on the total number of indicators tracked, and may differ from those in figure 8, which is based on the total number of indicators with targets. The tables below provide information on annual targets, results, and the percentage of each target met for the eight programs we reviewed. The data provided are based on the latest year for which performance data were available. The following are GAO’s comments on the U.S. Agency for International Development’s (USAID) letter dated July 1, 2010. 1. GAO modified the report to reflect the new Automated Directives System Guidance on Monitoring in High Threat Environments. However, GAO would note that the Mission to Afghanistan that GAO was directed to for all inquiries was not aware of the December 2009 guidance until June 2010. 
In addition, our application of the Automated Directives System criteria was consistent with the new guidance and required only minor technical revisions. 2. None of the agricultural programs included in GAO’s review were Office of Transition Initiatives (OTI) or Office of U.S. Foreign Disaster Assistance (OFDA) programs. 3. USAID’s Automated Directives System generally requires the same procedures for a conflict zone that it requires elsewhere, or else documentation specifically describing when those procedures are not followed. The new guidance does not provide any exemptions with regard to approval of monitoring and evaluation plans and the establishment of indicators and targets, which USAID did not consistently approve. 4. Without approved indicator targets, it is unclear how performance can be reviewed or assessed. 5. GAO acknowledges that USAID is currently developing a PMP on page 14 of the report. 6. At the time of our review, USAID had completed one midterm evaluation covering three of the eight programs we reviewed. USAID staff, however, were unable to indicate how the findings of the evaluation were used to inform the design of subsequent programs. Additionally, the midterm evaluation included recommendations for improving the three programs, but USAID staff were unable to comment on how the recommendations of the evaluation were implemented. 7. GAO removed its mention of the semiannual reviews from the recommendation based on additional information provided. In addition to the contact named above, the following staff contributed to the report: Hynek Kalkus, Assistant Director; Thomas Costa; Farahnaaz Khakoo; Bruce Kutnick; Sara Olds; Steven Banovac; Joseph Carney; Elizabeth Curda; Mark Dowling; Gena Evans; Etana Finkler; Justin Fisher; Cindy Gilbert; Gloria Mahnad; Kara Marshall; Jackie Nowicki; Sheila Rajabiun; and Jena Sinkfield.
Eighty percent of Afghans are dependent on agriculture for their livelihoods. Agricultural assistance is a key U.S. contribution to Afghanistan's reconstruction efforts. Since 2002, the U.S. Agency for International Development (USAID) has awarded about $1.4 billion for agricultural programs to increase agricultural productivity, accelerate economic growth, and eliminate illicit drug cultivation. This report (1) describes the change in U.S. focus on agricultural assistance since 2002, (2) assesses USAID's performance management and evaluation of its agricultural programs, (3) analyzes the extent to which certain programs met targets, and (4) addresses efforts to mitigate implementation challenges. GAO reviewed USAID documents; analyzed program data; and interviewed program implementers and USAID officials in Washington, D.C., and Afghanistan. GAO has prepared this report as part of its ongoing efforts to monitor key aspects of U.S. efforts in Afghanistan. The United States' focus in providing agricultural assistance to Afghanistan shifted from food security programs in 2002 to counternarcotics-related alternative-development programs in 2005. This focus on providing farmers with alternatives to growing opium poppy lasted through 2008. In 2009, the Administration shifted the focus of its agricultural strategy in Afghanistan from counternarcotics to counterinsurgency, noting that economic growth and new job creation were critical to U.S. efforts in Afghanistan because they provide alternatives to narcotics- and insurgent-related activities. USAID's Automated Directives System established planning, monitoring, and evaluation procedures that USAID was expected to follow in Afghanistan. USAID planning efforts prior to 2009 largely followed these procedures. However, since the end of 2008, USAID has operated without a required Mission performance management plan for Afghanistan. 
In addition, USAID did not approve all implementing partner monitoring plans for the eight USAID agricultural programs, which represented about 75 percent of all USAID agricultural awards since 2002. USAID also did not ensure that all indicators had targets. USAID undertook efforts to monitor agricultural programs, but due to security concerns could not consistently verify reported data. USAID did not consistently analyze and interpret or document program performance for these eight programs, active between 2007 and 2009, on which our review focused. In the absence of this analysis, USAID did not document decisions linking program performance to changes made to the duration or funding of programs. USAID conducted one evaluation covering three of the eight programs, but the extent to which or whether USAID used the evaluation to enhance current or future programs is unclear. We found that the eight agricultural programs we reviewed did not always establish or achieve their targets for each performance indicator. USAID requires implementing partners to submit information on indicators, targets, and results. We measured performance for the eight programs by comparing annual results against annual targets and determining the extent to which targets were met. Six of the eight programs did not meet their performance targets in the most recent year for which targets were reported. For the two programs that met all their targets, we found they failed to establish targets for several indicators and, thus, we could not fully assess performance for those indicators. We also found that the three longest-running programs in our review showed declines in performance from fiscal years 2006 to 2008. USAID faces several challenges to implementing its agricultural programs in Afghanistan, such as the security environment, and has taken steps to mitigate other challenges, such as working to improve Afghan government capacity. 
However, while USAID's lack of documentation and high staff turnover have hampered the agency's ability to maintain institutional knowledge, it has not taken steps to address this challenge. GAO recommends that the USAID Administrator take a number of steps to enhance performance planning, monitoring and evaluation, and knowledge transfer procedures. USAID agreed with our recommendations, highlighted ongoing efforts to improve in these areas, and noted the high-threat environment in which it operates.
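The performance measurement described above, comparing each indicator's annual result against its annual target and flagging indicators that lack a target, can be sketched as a short calculation. The indicator names and figures below are hypothetical, not data from the reviewed programs.

```python
# Hypothetical indicator data: each entry maps an indicator name to
# (annual target, annual result). A target of None mirrors the report's
# finding that some indicators had no target and so could not be assessed.
indicators = {
    "hectares under improved cultivation": (10_000, 8_500),
    "farmers trained": (2_000, 2_400),
    "jobs created": (None, 1_100),  # no target established
}

met, missed, unassessable = [], [], []
for name, (target, result) in indicators.items():
    if target is None:
        unassessable.append(name)   # cannot fully assess performance
    elif result >= target:
        met.append(name)
    else:
        missed.append(name)

print(f"met: {len(met)}, missed: {len(missed)}, no target: {len(unassessable)}")
# prints "met: 1, missed: 1, no target: 1"
```

A program "meets its targets" here only when every indicator with a target is met, which is why unassessable indicators are counted separately rather than as met or missed.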
FDA categorizes medical devices in one of three classifications based on the degree of potential risk and control needed to reasonably ensure their safety and effectiveness. Class I, or “low risk,” devices are subject to minimum regulation and include such items as tongue depressors, elastic bandages, and bed pans. Class II, or “medium risk,” devices include syringes, hearing aids, resuscitators, and electrocardiograph machines and are subject to more scrutiny than class I devices. Most medical devices designated as class I or class II reach the market through FDA’s premarket notification—or 510(k)—process. Class III, or “high risk,” devices are the most rigidly controlled and include devices such as heart valves, pacemakers, and defibrillators. These devices are life-supporting or life-sustaining, or are substantially important in preventing the impairment of human life, or present a potentially unreasonable risk of illness or injury. Class III devices are subject to FDA’s premarket approval process, which requires the manufacturer to present evidence, often including extensive clinical data, that there is a reasonable assurance that the device is safe and effective before placing it on the market. To help ensure the safety of medical devices, the Federal Food, Drug, and Cosmetic Act was amended in 1976 to expand FDA’s responsibility for regulating medical devices in the United States. However, studies prepared by our office, the former Office of Technology Assessment, and the Office of Inspector General of the Department of Health and Human Services, as well as congressional investigations and hearings led the Congress to conclude that the 1976 amendments were inadequate to protect the public from dangerous and defective medical devices. 
Among its concerns, the Congress concluded that FDA’s ability to ensure the removal of dangerous or defective devices, such as heart valves and other life-sustaining devices, from the market was hampered because manufacturers did not have adequate systems for tracking patients with these high-risk devices. The efforts of two medical device manufacturers—Shiley, Inc., and Vitek, Inc.—to notify product recipients of potential product defects highlight the need for effective medical device tracking systems. Shortly after passage of the 1976 Medical Device Amendments, Shiley applied for and received approval to market its Bjork-Shiley artificial heart valve under FDA procedures—which were under development and would not be finalized for another decade. According to a congressional study, between 1979 and 1986, an estimated 40,000 people in the United States received the valve. During this period, the struts that held the mechanical valves in place fractured, leading to death in an estimated two out of three cases where strut failures occurred. Despite a redesign, strut fractures persisted, and Shiley was forced to recall all of its devices and cease production in November 1986. According to the congressional study, Shiley had reported a total of 389 fractures related to one of its valves and 248 deaths, as of January 1990. In December 1990, Shiley voluntarily undertook an effort to locate and inform approximately 23,000 recipients about one of its artificial heart valves that was subject to life-threatening fractures. Although Shiley reported distributing patient registration cards to hospitals to obtain recipients’ names when valves were implanted, less than 50 percent of the cards were reportedly returned. As a result, efforts to locate patients—which also included letters and telephone calls to physicians and announcements in print media—confirmed the locations of only about 14,000 (61 percent) of the 23,000 heart valve recipients, as of November 1991.
There were similar difficulties in locating patients who had received a jaw implant device manufactured by Vitek. The device contained layers of Teflon or Proplast—or various combinations of these materials—which Vitek argued were substantially equivalent to a product on the market prior to the 1976 amendments. Following FDA approval, the devices were reportedly prone to break apart, fragment, and function improperly. FDA estimated that more than 26,000 of the devices were distributed between 1973 and 1988. However, Vitek did not know how many devices were actually implanted because it was not required to and did not maintain records of patients who had received an implant. FDA officials reported that, apart from the devices recovered through seizures executed in October 1990, the agency was unable to determine the number of devices that were in distribution or implanted in patients because the devices were manufactured in sets of two, which could be split between patients. After Vitek declared bankruptcy in June 1990 and the bankruptcy trustee refused to notify recipients of Vitek’s implant device, FDA established and funded a patient notification program, which it estimates cost about $41,000. In September 1991, as part of its patient notification program, FDA notified physicians and hospitals on Vitek’s consignee list and requested that they advise patients about the problems associated with the Vitek implants and treatment options. FDA also conducted an extensive media campaign, which included a video news release and various press releases and forums targeted to health organizations and professionals. In addition, the Medic Alert Foundation, a nonprofit organization, established a registration program for Vitek Proplast implant patients at FDA’s request.
For an enrollment fee, patients in the registry received updated information about Vitek implants, symptoms and treatments available after the implant was removed, and other devices that could possibly serve patients better. Through the program, FDA could, if needed, locate patients and their doctors. However, in mid-1994, Medic Alert informed FDA that it was no longer financially feasible to operate the program. In November 1994, FDA issued letters to about 5,000 patients to inform them that the notification program was closing and that patients could receive updated information through various organizations. The medical device tracking provision, enacted in November 1990 as part of SMDA 90, was intended to improve manufacturers’ ability to track patients with high-risk medical devices and to ensure that FDA and manufacturers could quickly remove dangerous or defective devices from the market. The provision defines high-risk devices as those that are permanently implantable and would likely have serious adverse health consequences were the device to fail or those that are life sustaining or life supporting and are used outside the device-user facility. The Secretary of Health and Human Services may also designate any other device as high risk. All manufacturers of such devices must adopt a method of device tracking. CDRH’s Office of Compliance has primary responsibility for implementing and enforcing the requirements of the device tracking regulation. These responsibilities include providing agency field staff with guidance on inspecting manufacturers for compliance with the regulation during GMP inspections and monitoring corrections and removals of device products from the market. The Office of Compliance is also responsible for reviewing and approving exemptions and variances from one or more parts of the tracking regulation, filed by a manufacturer, importer, or distributor seeking relief. 
In addition, FDA’s Office of Regulatory Affairs and FDA’s 21 district offices within the United States and Puerto Rico are responsible for all FDA inspections, which include inspections of medical devices. District office staff conduct on-site GMP inspections for all FDA-regulated facilities and monitor the efforts of recalling manufacturers to ensure that defective or dangerous devices are corrected or removed from the market. In August 1993, FDA issued a medical device tracking regulation that implemented the new tracking requirement of SMDA 90. Under the regulation, manufacturers must adopt a method of tracking that enables them to quickly provide FDA with information on the locations of devices and the patients using them, to facilitate efficient and effective mandatory recalls or notifications. Manufacturers generally use one of two basic approaches. The first approach registers the patient at the time of implant and uses a periodic follow-up mechanism—such as a postcard, letter, or phone call—to update names and addresses. The second approach uses a health care professional, usually a physician, who stays in contact with the patient to update his or her location. While no specific method of tracking is mandated, the regulation imposes three key tracking requirements: First, manufacturers are required to develop written standard operating procedures for implementing a tracking method to generate the required information on devices and patients. The operating procedures must include data collection and recording procedures, with documentation of the reasons for missing required tracking data; a method for recording all modifications to the tracking system—including changes to the data format, file maintenance, and recording system; and a quality assurance procedure to ensure that the operating procedures are working effectively.
The manufacturer’s quality assurance program must provide for audits of the tracking system based on a statistically relevant sampling of the manufacturer’s tracking data. For the first 3 years of tracking a device, manufacturers are required to perform the audits at 6-month intervals; thereafter, the tracking systems must be audited annually. These audits must check both the functioning of the tracking system and the accuracy of the data in the system. Second, manufacturers are required to establish and maintain certain data in their tracking systems. For device products that have not yet been distributed to a patient, the manufacturer must obtain and keep current the name, address, and telephone number of the distributor holding the device, as well as the location of the device. For tracked devices distributed to a patient, the manufacturer must obtain and maintain information on the identity and current location of the patient and other information on the device product, such as the lot, batch, model, or serial number, and the attending physician’s name, address, and telephone number. Finally, upon request by FDA, manufacturers must be able to quickly report device and patient location information. Within 3 workdays of FDA’s request, manufacturers must report the location of tracked devices that have not yet been distributed to patients as well as provide information about the distributor. For devices distributed to patients, manufacturers must report the locations of devices and patients within 10 workdays of FDA’s request. The tracking regulation also states that manufacturers are responsible for determining which devices are subject to tracking and for initiating tracking. To provide guidance to manufacturers, FDA listed in the regulation 26 categories of devices that it regards as subject to tracking and has focused enforcement efforts on manufacturers of such devices. (See app. II for the 26 device categories.)
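The recordkeeping and reporting rules above amount to a small data model: a tracked device is either still held by a distributor or implanted in a patient, and the reporting deadline depends on which state it is in. The sketch below is illustrative only; the field names are our own, since the regulation prescribes the content of the records, not a format.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record for one tracked device. Distributor contact details
# are kept while the device is undistributed; patient and physician
# details are kept once the device has been distributed to a patient.
@dataclass
class TrackedDevice:
    serial_number: str
    model: str
    distributor: Optional[str] = None  # distributor name/address/phone
    patient: Optional[str] = None      # patient identity and location
    physician: Optional[str] = None    # attending physician contact

    def reporting_deadline_workdays(self) -> int:
        # Per the regulation: 3 workdays to report devices not yet
        # distributed to a patient, 10 workdays once distributed.
        return 10 if self.patient is not None else 3

in_warehouse = TrackedDevice("SN-001", "valve-A", distributor="Acme Medical Supply")
implanted = TrackedDevice("SN-002", "valve-A", patient="patient-17", physician="Dr. Roe")
print(in_warehouse.reporting_deadline_workdays())  # prints 3
print(implanted.reporting_deadline_workdays())     # prints 10
```

Whatever tracking method a manufacturer chooses, it must be able to produce these fields, current and complete, within the applicable deadline when FDA asks.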
Distributors of devices—which may include user facilities, physicians, and pharmacies—also have reporting and recordkeeping responsibilities under the device tracking regulation. Distributors must collect, maintain, and report back to the manufacturer current information on the locations of tracked devices and patients who use them. Following concerns that SMDA 90 provisions did not provide manufacturers with clear guidance on which devices were subject to tracking, the medical device tracking provision was revised in FDAMA. Under FDAMA, FDA—not manufacturers—is required to determine whether certain devices are subject to tracking and to issue orders to manufacturers requiring them to adopt a tracking system for these devices. Devices not identified are exempt from the tracking requirement until FDA issues an order to the contrary. FDA has taken several actions to implement the changes mandated by FDAMA. In February 1998, FDA issued orders to manufacturers of devices in categories designated for tracking under SMDA 90 provisions, requiring them to continue their tracking systems while FDA considered reducing the number of devices subject to tracking. In March 1998, FDA announced in a Federal Register notice the availability of guidance on manufacturers’ responsibilities for medical device tracking under FDAMA and requested comments from the public on factors FDA should consider in deciding whether some device categories should no longer be subject to tracking. FDA is currently reviewing these comments and plans to publish a notice in the Federal Register with a revised list of device categories subject to tracking. If a medical device exhibits a problem after it is marketed, one remedial action available to the manufacturer of the device and FDA is to recall the product.
In these cases, manufacturers must develop a strategy for implementing recalls and take appropriate action to protect the public health, including effectiveness checks to ensure that users of devices have been notified of the recall. FDA reviews and approves recall strategies and assigns one of three recall classifications—class I, II, or III—to indicate the relative degree of health hazard of the product being recalled. For a class I recall, FDA has determined that the use of or exposure to the product could cause serious health consequences or death. Class II recalls are designated for situations where FDA has determined that the use of or exposure to the product could cause temporary or medically reversible adverse health consequences and the probability of serious health consequences is remote. Class III recalls are reserved for situations that involve minor component malfunction repairs and labeling changes where use of or exposure to the product is not believed likely to cause adverse health consequences. FDA also monitors the progress of recalls and audits a sample of completed recalls to verify that the recalling manufacturer has properly removed defective devices from the market. As a goal, FDA expects manufacturers to complete recalls within 6 months of initiation and requires FDA staff to conduct audits and terminate recalls in not more than 90 workdays after the manufacturer reports the recall completed. (For more details of FDA’s recall process, see app. III.) FDA’s approach to ensuring that device manufacturers are operating tracking systems capable of tracing devices from distribution to end users has several limitations, such as a failure to include audits of the tracking systems in its inspections and infrequent inspections. 
To address these problems, CDRH’s Office of Compliance is considering initiatives that are intended to strengthen its oversight of manufacturers’ tracking systems, but the details of most of these initiatives have not yet been developed. Moreover, FDA has not taken steps to ensure that tracking continues when manufacturers responsible for tracking go out of business, merge, or are acquired by others. FDA on-site inspections of device manufacturers’ facilities are intended to ensure compliance with the requirements of the GMP regulation, the medical device reporting regulation, and the medical device tracking regulation. Manufacturers of class II or class III devices, such as those subject to tracking, are to receive inspections at least once every 2 years. Although these inspections represent a key element of FDA’s oversight of device tracking, we found that both the scope and frequency of the inspections are limited. FDA’s Compliance Program Guidance Manual requires investigators to determine whether manufacturers are complying with the device tracking regulation by reviewing their written standard operating procedures for tracking during GMP inspections. If the manufacturer does not have a tracking system, the investigator is required to cite the violation on the agency’s list of inspection observations, commonly referred to as FDA form 483. This form is presented to and discussed with the manufacturer’s management at the conclusion of the inspection. The investigator also prepares an establishment inspection report, which summarizes the manufacturer’s operations and any conditions observed during the inspection that may violate federal statutes. FDA’s guidance manual also requires investigators to document in the report whether the manufacturer makes any of the devices subject to tracking, and if so, whether it is meeting its tracking obligations.
However, in making assessments of manufacturers’ compliance with the tracking regulation, FDA does not require inspectors to conduct independent audits of the tracking systems to ensure that the systems are working. Moreover, inspectors are directed not to review manufacturers’ quality assurance audit reports that evaluate the functioning and accuracy of data in the tracking systems. Reviewing these reports or conducting independent audits of manufacturers’ tracking systems would assist FDA investigators in assessing whether manufacturers are operating tracking systems capable of quickly identifying the locations of devices and the patients who use them. FDA can use its existing regulatory authority to require manufacturers to certify that they are, in fact, conducting required audits. However, a senior FDA official told us that this authority has never been exercised and is not included in FDA’s guidance manual. FDA officials explained that the agency’s current inspection strategy for assessing compliance with the tracking requirements is intended to educate and encourage compliance among manufacturers before actually enforcing the provisions of the regulation. The officials also explained that due to a written policy established as part of the GMP regulation in 1978, FDA inspectors are precluded from reviewing or copying manufacturers’ records and reports that result from audits performed in accordance with a written quality assurance program at any regulated entity. FDA officials also told us that the medical device industry has resisted the agency’s access to the audits because manufacturers fear the audits would yield incriminating information that FDA could use against them. According to FDA officials, this fear may cause manufacturers to be less than candid and thorough in their audits if the audits were subject to FDA inspection. Consequently, FDA adopted the policy to encourage manufacturers to conduct self-audits that are unbiased and meaningful.
Nevertheless, the agency has not conducted its own audits of the tracking systems. In addition, FDA’s inspections of manufacturers’ tracking systems have not been conducted at least once every 2 years, as required by GMP standards. Our analysis of FDA data shows that FDA conducted GMP inspections in 137 (58 percent) of the 238 reportedly active establishments subject to tracking during fiscal years 1996 and 1997. In addition, during fiscal years 1994 and 1995, FDA conducted GMP inspections in only 91 (45 percent) of the 202 reportedly active manufacturers subject to tracking. FDA officials told us that reductions in field staff resources used to conduct inspections have made it difficult for FDA to meet the GMP biennial inspection requirement. According to FDA officials, the number of investigators available to inspect all FDA-regulated facilities—including manufacturers of foods, drugs, biologics, veterinary medicine, and medical devices—has declined since 1993. They noted, for example, that investigators assigned specifically to cover the approximately 4,400 device manufacturing establishments nationwide declined from 34 in fiscal year 1994 to 28 in fiscal year 1997, while the scope of GMP inspections has increased to include compliance programs for the medical device reporting regulation and medical device tracking regulation. FDA officials acknowledged that changes are needed to better assess compliance with the medical device tracking regulation and improve its oversight of manufacturers subject to tracking. For example, FDA has included in its fiscal year 1999 performance plan a risk-based inspection plan that will require FDA to identify and prioritize device areas of concern to focus resources on the highest priorities. Inspection activities would be prioritized based on several factors, including reports of problems with medical devices, earlier inspections, and devices associated with higher risk.
FDA officials told us that many of the devices designated for tracking, such as cardiovascular implants, would likely receive priority attention because of the relative high risk associated with their use. The risk-based plan is expected to be presented to FDA’s Medical Device Field Committee, which is responsible for reviewing and approving significant changes to on-site inspections before they can be included in FDA’s compliance program. To improve FDA’s assessment of manufacturer compliance with tracking requirements, officials of the Office of Compliance told us they are considering separating GMP inspections of manufacturing and distribution processes from records inspections—which typically include reviews of manufacturer compliance with medical device reporting and tracking requirements under SMDA 90—thereby allowing inspectors more time to review manufacturer compliance with recordkeeping requirements. In addition, the Office of Compliance also plans to develop an audit plan that will require FDA inspectors to independently verify the adequacy of manufacturers’ standard operating procedures for tracking and determine whether the procedures are being followed. However, at the time of our review, FDA did not have a draft of the planned changes available for review and had not established a time frame for presenting these changes to FDA’s Medical Device Field Committee for approval. Maintaining accurate and complete tracking records when a manufacturer goes out of business, merges, or is acquired by another manufacturing establishment is critical to ensuring that devices can be traced from distribution to end users if a recall becomes necessary. However, FDA has not acted to ensure that tracking continues in these situations. Several options are available to FDA for covering the costs of operating a tracking system when a manufacturer goes out of business. 
Requiring manufacturers to notify FDA when mergers and acquisitions take place could also help FDA ensure that device tracking systems continue. The device tracking regulation requires a manufacturer that goes out of business to advise FDA at the time it notifies any government agency, court, or supplier and provide FDA with a complete set of its tracking records and information. Our review of FDA’s registry of device manufacturers shows that 47—or 16 percent—of the 285 establishments subject to tracking were classified by the agency as either tentatively (21 establishments) or permanently (26 establishments) out of business, as of May 1997, and that some manufactured high-risk devices, such as heart valves, pacemakers, ventilators, defibrillators, and apnea monitors. However, none of the manufacturers that were reportedly closed for business has provided FDA with tracking records. FDA officials believe it is possible that several of the manufacturers may have merged or been acquired. In such instances, the tracking regulation requires the acquiring establishment to continue the tracking obligations of the failed one. However, the device tracking regulation does not require the establishment acquiring the rights to manufacture the device to notify FDA when these transactions take place. As a result, FDA officials could not determine the number of manufacturers that were involved in mergers or acquisitions or whether any of them had assumed the tracking responsibilities of establishments involved in these transactions. FDA officials told us that the agency has no plans to recover the tracking records of failed establishments or to operate a tracking system itself. In FDA’s view, absent a public health emergency with a tracked device, the agency would not be able to justify cuts in other programs to carry out a tracking program, which is largely the responsibility of a manufacturer.
Further, officials said they have no basis to determine how much it would cost to operate any of the failed establishment’s tracking systems because the variables, such as the number of devices distributed by the manufacturers and used by patients, are unknown. Thus, no valid cost estimates could be made by FDA. While we recognize that FDA would likely incur additional costs to operate the tracking system of a failed establishment, without reliable tracking data, FDA may have serious difficulties promptly recalling and notifying patients if a public health emergency were to occur. To pay the cost of maintaining the tracking systems of failed establishments, FDA could seek legislative authority to require manufacturers of tracked devices to provide some form of financial assurance to FDA that would demonstrate their commitment to meet their tracking obligations. Alternatively, FDA could encourage patients and health providers that use tracked devices of defunct establishments to pay a fee to establish and maintain a registry of the current locations of patients and devices, as was done in the case of the Vitek jaw implants. FDA could also consider shifting resources from other programs or request additional funding from the Congress for the operation of a tracking program. A senior FDA official said the agency could attempt to obtain the tracking records of failed establishments. This, at a minimum, would provide FDA with information on the last known locations of devices and patients in the event a recall and notification became necessary. To locate the records would likely require FDA investigators to visit the last known address of the manufacturer to confirm its closure, the local post office to determine whether a forwarding address was provided, or government agencies or courts that may have received notification of the manufacturer’s closing. 
For manufacturers involved in mergers and acquisitions, FDA could include a requirement in the medical device tracking regulation that an establishment that has acquired the right to manufacture another manufacturer’s tracked device must notify FDA that it has assumed the tracking duties of the former establishment. This would provide FDA with greater assurance that tracking of critical devices is being continued when mergers and acquisitions have taken place. Medical device recalls are an important remedial action that manufacturers and FDA can take to protect the public from unsafe and ineffective device products. According to FDA, delays in the identification and removal of potentially hazardous devices from the market can increase the chances of inadvertent misuse of devices and risk to public health. To encourage expeditious recalls, FDA requires that manufacturers complete recalls within 6 months from the date of initiation. In addition, FDA, through its Regulatory Procedures Manual, requires district offices to review and audit the recall effort and submit a report to headquarters that summarizes the results of the recall and recommends approval to terminate the recall not more than 90 workdays after the recall is completed. These reports provide FDA with assurance that recalling manufacturers have taken prompt and appropriate actions to resolve problems with devices and assists FDA in identifying trends and evaluating new problem areas in manufacturing and processing. (See app. III for additional detail on FDA’s recall process.) However, our review of recalls of defective tracked devices initiated by manufacturers and monitored by FDA shows that most were not completed within the required time frames. 
The number of devices subject to recall and the type of correction or modification required were among possible factors cited by FDA officials as contributing to delays in manufacturers completing recalls; late submissions of summary recall reports due to other work priorities in district offices were believed to have contributed to delays in FDA terminating the recalls. At this writing, FDA has not conducted a comprehensive review of its recall procedures and recall performance to determine how to improve the timeliness of recalls. From FDA’s recall records and computer databases, we identified 54 recalls of tracked devices, all of which were voluntarily initiated by 35 manufacturers, during fiscal years 1994 through 1996. Three of the 54 recalls were designated by FDA as class I recalls—where FDA had determined that use of or exposure to the device could have serious adverse health consequences; 43 were class II recalls—where FDA determined that use of or exposure to the device could cause adverse health consequences that are reversible or the probability of adverse health consequences is remote; and 7 were class III recalls—where FDA determined that use of or exposure to the device would likely not cause adverse health consequences. The remaining recall was a safety alert. As of January 1998, 49 (90 percent) of the 54 recalls of tracked devices had been completed; however, only 15 (31 percent) of these 49 recalls had been completed within FDA’s 6-month guideline. Thirty-four (69 percent) of the 49 completed recalls took longer than 6 months to complete, including 17 (35 percent) that took between 6 months and 1 year to complete. Seventeen other recalls (35 percent) took more than 1 year to complete, including 4 class II recalls that took more than 2 years. For example, a class II recall of ventilators took 919 calendar days to complete.
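The timeliness screens applied above, the 6-month completion goal for manufacturers and the 90-workday termination standard for FDA, can be reproduced with a short calculation. The recall durations below are hypothetical values chosen to span the ranges reported, not figures from FDA's databases.

```python
from statistics import median

# Hypothetical durations: manufacturer completion times in calendar days,
# FDA termination times in workdays. The thresholds are the report's
# actual standards: 6 months (taken here as 182 calendar days) and
# 90 workdays.
completion_days = [12, 150, 226, 400, 919, 1044]
termination_workdays = [1, 45, 88, 120, 390]

SIX_MONTHS_DAYS = 182
on_time_completions = sum(1 for d in completion_days if d <= SIX_MONTHS_DAYS)
on_time_terminations = sum(1 for d in termination_workdays if d <= 90)

print(f"{on_time_completions}/{len(completion_days)} recalls completed within 6 months")
print(f"{on_time_terminations}/{len(termination_workdays)} terminations within 90 workdays")
print(f"median completion: {median(completion_days)} calendar days")
# With the hypothetical data above: 2/6 on-time completions,
# 3/5 on-time terminations, median completion 313.0 days.
```

The same screen, applied to the actual recall records, produced the percentages reported in this section and in tables 1 and 2.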
In addition, we found that five class II recalls were still ongoing as of January 16, 1998, even though four of the recalls were started in 1996 and one started in 1995. Recall completions ranged from 12 calendar days to 1,044 calendar days, with a median of 226 calendar days. (See table 1.) FDA also did not terminate recalls of tracked devices in a timely manner. Of the 49 recalls that were reported completed by manufacturers, less than one-half were terminated by FDA within its 90-workday standard. As of January 16, 1998, FDA had approved 36 (73 percent) of the 49 recalls for termination; 13 recalls (27 percent) were still awaiting termination by FDA, which included two class I recalls of a ventilator and a pacemaker that were completed by the manufacturers in August 1995 and August 1997, respectively. For the 36 recalls terminated by FDA, 24 (67 percent) were reviewed and terminated within 90 workdays. For 12 completed recalls (33 percent), FDA took more than 90 workdays, including 2 class II recalls of ventilators and defibrillators that required about 1 year or more for termination. (See table 2.) FDA’s time to terminate the 36 recalls ranged from 1 day to as many as 390 workdays, with a median of 45 workdays. Total recall time for device manufacturers and FDA ranged from 20 days to 786 days, with a median of 333 days (calendar days and workdays combined). Total recall time for one class I recall of a defibrillator was 529 days. (App. IV shows, for each recall, the number of calendar days elapsed for manufacturers to complete the recall, the number of workdays elapsed for FDA to terminate the recall, and the total time elapsed.) It was beyond the scope of this study to identify the underlying reasons for delays in completing recalls of tracked devices. However, we discussed our analysis of the recalls with officials in FDA’s Office of Regulatory Affairs who are responsible for recalls.
Although these officials did not conduct an in-depth review of the recalls, they told us that the length of time manufacturers generally took to complete recalls can vary due to factors such as the amount and type of recall correction or modification required, the amount of product subject to recall or correction, the number of notifications needed to obtain responses from consignees, and the inadequate attention some manufacturers give to recalls. The officials also explained that while district offices are often slow in submitting the summary reports to FDA for closure due to higher-priority work, such as conducting comprehensive GMP inspections, district offices are monitoring the progress of recalls and completing audit checks to verify the effectiveness of recalls. FDA reported that it is taking a number of actions to complete and close all outstanding recalls initiated during 1994 through 1996 as quickly as possible. These include instructing district offices to complete and close out all outstanding recalls by the end of fiscal year 1998 and updating the recall database with current information on the status of all recalls. FDA’s approach to inspecting medical device tracking systems provides little assurance that manufacturers can track devices through the chain of distribution to patients within a short period of time. Without conducting inspections once every 2 years that include audits of the tracking systems to verify the reliability of data in them, FDA cannot be certain manufacturers are operating systems that will work when recalls are necessary. FDA’s initiatives for improving oversight of high-risk manufacturers appear to be a step in the right direction. However, it is unclear whether these initiatives will provide FDA with adequate oversight of manufacturers subject to tracking, given the workload and the size of FDA’s inspection force available to conduct inspections in device establishments.
Still, FDA may be able to increase its oversight presence if, as expected, FDA reduces the number of device categories subject to tracking under FDAMA, focuses its inspection priorities on high-risk devices, and separates GMP inspections from record inspections. In addition, because FDA has not acted to recover the tracking records of failed establishments and is unaware of manufacturers involved in mergers or acquisitions, the agency has no assurance that the tracking obligations of such manufacturers are being continued. While we recognize that device tracking is the responsibility of manufacturers, FDA, as the protector of public health, must have a method of continuing the tracking obligations of manufacturers when they go out of business, or the agency will likely have serious problems executing prompt and effective recalls—as was the case with Vitek’s jaw implants. FDA can explore a number of options for funding tracking systems for failed establishments to ensure public safety, including seeking necessary legislation. Requiring manufacturers to notify FDA when they merge with another establishment or acquire the rights to manufacture its product would also better fulfill the goals of tracking and protect public health. Finally, because recalls of tracked devices have been executed slowly by manufacturers and FDA, the agency has less assurance that dangerous or defective devices under recall are promptly and appropriately removed from the market and less information available to analyze trends in device problems. Timely completion and termination of recalls provide FDA with greater assurance that defective devices are corrected and removed promptly and effectively and with more information to analyze and resolve device problems. FDA actions to address these problems are encouraging.
Nevertheless, FDA’s efforts to improve the timeliness of recalls could benefit from evaluating the reasons why manufacturers and FDA frequently require extensive amounts of time to complete and terminate device recalls. To improve FDA’s ability to monitor manufacturer compliance with the medical device tracking regulation and conduct recalls of tracked devices in a timely manner, we recommend that the Commissioner of FDA: (1) develop and implement a plan to verify the completeness and accuracy of data in the tracking systems of device manufacturers to ensure that the systems can trace devices through the chain of distribution to end users; (2) take steps to recover the medical device tracking records of manufacturers that have failed and have not provided such information to FDA, and report to the Congress on the results of its assessment of options for covering the costs of operating a device tracking system for failed establishments; (3) revise the medical device tracking regulation to require an establishment that acquires the right to manufacture another establishment’s tracked device, whether through merger or acquisition, to immediately notify FDA that it has assumed the tracking obligations of the former establishment; and (4) examine the reasons for delays in completing recalls of tracked devices and develop and implement strategies for improving the timeliness of recalls. We obtained comments on a draft of this report from FDA and three trade associations that represent the medical device industry—the Health Industry Manufacturers Association (HIMA), the Medical Device Manufacturers Association (MDMA), and the National Electrical Manufacturers Association (NEMA). In general, FDA agreed with the findings and some of the recommendations in our report and said it had begun acting on them.
While NEMA agreed with our findings and recommendations, HIMA and MDMA were concerned that implementing our recommendations would place undue burdens on manufacturers without improving the agency’s oversight of device tracking or benefiting public health. FDA agreed that manufacturers should maintain accurate and complete data in tracking systems and indicated that it was revising its inspection approach to verify that manufacturers have adequate procedures in place to comply with medical device tracking requirements. However, FDA does not plan to verify the data in the tracking systems as we recommended because it believes the agency should focus on ensuring that manufacturers have tracking systems in place. While we agree that determining whether manufacturers have appropriate tracking procedures in place is an important element of an FDA inspection, limiting inspections in this way does not provide FDA with assurance that the data in the tracking systems are accurate and complete. HIMA and MDMA were concerned about the methods FDA may use to implement our recommendation to verify tracking system data and the cost of these efforts. HIMA believes it is unnecessary for FDA to conduct independent audits of tracking systems because such audits would duplicate those already performed by the manufacturers. It noted that, if necessary, FDA is authorized to require manufacturers to certify that they are, in fact, conducting required audits. MDMA indicated that we had not linked weaknesses in FDA’s oversight of tracking systems with an adverse impact on public health. It also believed that, in order to independently verify tracking system data, FDA would need to conduct “mock recalls” that would require manufacturers to generate reports of the location of all devices in distribution with hypothetical malfunctions. MDMA believes such an effort would be costly and would provide little benefit to FDA and the public.
In our view, FDA needs to develop an approach to verify the data in manufacturers’ tracking systems. However, the industry’s comments are not without merit and should be considered by FDA as it weighs the costs and benefits of different approaches for addressing this issue. We agree with HIMA that requiring FDA inspectors to conduct independent audits of tracking systems could duplicate manufacturers’ own internal audits and would likely result in some additional costs to FDA and manufacturers. Nevertheless, it may be one way FDA can verify the tracking data without having access to the internal audit reports. HIMA also correctly notes that FDA is authorized to require manufacturers to certify that they are conducting required audits of tracking systems. We have revised our report to include certification as an option FDA could use to verify tracking system data. While our report does not link weaknesses in FDA’s oversight of device tracking with adverse health events, as noted by MDMA, we do not believe FDA should await a public health emergency before taking action to ensure that device tracking systems are capable of tracing devices to end users. We disagree with MDMA that FDA would need to conduct mock recalls to independently verify the tracking data. Indeed, such an approach would be costly and inefficient and may be outside of FDA’s authority. Although we are not prescribing a specific method of verification for FDA, the agency could select a random sample of tracking system data and contact end users of devices to confirm their locations. Such an approach would likely take less time and fewer resources to complete than the mock recalls suggested by MDMA. HIMA and MDMA also disagreed with our recommendation that FDA examine options for requiring manufacturers of tracked devices to provide financial assurance to offset the costs FDA may incur in maintaining a tracking program for failed establishments.
HIMA said that such a requirement would be costly and difficult to implement. MDMA indicated that while it supports FDA’s efforts to ensure compliance with these regulations, the cost should be borne by the taxpayers who benefit from the tracking of medical devices, rather than the manufacturers. While it may be costly to operate the tracking systems of failed establishments, these systems must be maintained so that end users can be promptly notified of serious device problems that warrant corrective action. However, as HIMA and MDMA point out, there may be ways to cover the cost of this activity other than requiring manufacturers to provide financial assurance. In our view, FDA needs to evaluate approaches for resolving this problem, including who should be responsible for maintaining the tracking records and who should pay the cost of this activity. FDA should report its findings to the Congress, and, if necessary, seek authority from the Congress to implement a solution. We have modified the report and recommendation to address this issue. HIMA believes it would be better to measure the effectiveness of a recall based on a manufacturer’s success in identifying and contacting a device’s end users or patients where notification was required. While we agree with HIMA that the ability of a tracking system to locate end users provides a valuable measure of recall success, we believe that the timeliness of recalls is also important. A key goal of tracking is to ensure that defective or dangerous devices can be corrected or removed from the market within a short period of time. Thus, in our view, FDA’s timeliness guidelines for completing and terminating recalls are valuable indicators for measuring recall performance. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. 
At that time, we will send copies of this report to other congressional committees and members with an interest in this matter, and we will make this report available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-7119 or John Hansen at (202) 512-7105. Other major contributors to this report are Darryl Joyce, Julian Klazkin, and Claude Hayeck. We conducted our study of FDA’s implementation of the medical device tracking requirements of the Safe Medical Devices Act of 1990 (SMDA 90) at FDA’s Office of Compliance within the Center for Devices and Radiological Health (CDRH) and the Office of Regulatory Affairs. In addition to reviewing the laws, regulations, and literature relevant to medical device tracking, we met with officials of CDRH and the Office of Regulatory Affairs to discuss the agency’s efforts to implement the tracking regulation and the policies and procedures used to inspect manufacturers’ tracking systems and recall tracked devices. From a list of about 150 manufacturers subject to tracking provided to us by the Emergency Care Research Institute (ECRI), a nonprofit health research agency, we judgmentally selected and interviewed representatives of 10 manufacturers to discuss the methods they used to track devices, which included pacemakers, heart valves, defibrillators, ventilators, apnea monitors, and jaw implants. We also analyzed FDA statistics on the number of GMP inspections conducted in establishments that were subject to tracking during fiscal years 1994 through 1997. We reviewed a list of manufacturers registered with FDA to identify the number of active and inactive establishments that were subject to tracking as of May 1997. To identify recalls of medical devices that were subject to tracking during fiscal years 1994 through 1996, we reviewed a list of recalls of tracked devices known to FDA.
To supplement this list, we obtained a list of recalls of tracked devices covering the same period from ECRI because it maintains an automated database that collects information from manufacturers, FDA weekly enforcement reports, and scientific literature on devices subject to recalls. With CDRH staff, we reviewed a total of 135 recalls from these two sources and identified 54 recalls of devices that were subject to tracking. Included in our analysis were recalls of devices distributed on or after August 29, 1993, the effective date of the medical device tracking regulation. We excluded recalls for at least one of the following reasons: the date of distribution of the device could not be determined; the device was not subject to the tracking regulation; the date of distribution of the device occurred prior to the effective date of the regulation; the device was granted an exemption from tracking by FDA; the recall commenced in 1997, which was beyond the scope of our study; or the recall involved devices distributed outside the United States, which are not subject to tracking. To determine the amount of time manufacturers and FDA took to complete recalls of tracked devices, we analyzed data in recall records and FDA’s recall databases maintained by CDRH and the Office of Regulatory Affairs on the dispositions of 54 recalls of tracked devices that were initiated by manufacturers during fiscal years 1994 through 1996. From documentation in the recall records and databases, we calculated the number of calendar days manufacturers took to complete each of the 54 recalls and compared the results against FDA’s instructions to manufacturers to complete recalls within 6 months of initiation. Calendar days were used because we wanted to measure the time elapsed for manufacturers to remove devices subject to recall from the market.
The number of workdays FDA took to review and approve recalls for termination was compared to FDA’s requirement that recalls be terminated in not more than 90 workdays after manufacturers reported recalls completed. Next, to measure total recall time, we added the number of calendar days spent by manufacturers to complete recalls to the number of workdays FDA took to terminate the recalls. We did not independently verify the information contained in the recall databases or evaluate the internal controls of the computer systems. FDA’s medical device tracking regulation, effective August 29, 1993, required manufacturers to use the criteria established in SMDA 90 to determine which devices are subject to tracking and to initiate tracking for those devices. To illustrate and provide guidance to manufacturers, FDA listed 26 categories of devices in the regulation that it regarded as subject to tracking. A recall is a voluntary action by a manufacturer to remove a medical device from the market or to correct a problem with a medical device in order to protect the public from products that present a risk of injury or gross deception or that are otherwise defective. Under SMDA 90, FDA can require manufacturers to report to the agency any corrections and removals of problem devices from the market and can order recalls of defective and dangerous devices. However, in practice, with the exception of urgent situations, the majority of recalls are initiated voluntarily by manufacturers with FDA oversight. FDA assigns one of three classifications—class I, class II, and class III—to indicate the relative degree of risk the recalled product presents to public health. For a class I recall, FDA has determined that the use of or exposure to the product could cause serious health consequences or death.
Class II is designated for situations where FDA has determined that the use of or exposure to the product could cause temporary or medically reversible adverse health consequences or where the probability of serious health consequences is remote. A class III recall is reserved for situations where use of or exposure to the product is not likely to cause adverse health consequences. To initiate a recall of a product from the market, manufacturers develop and submit a recall strategy report to FDA that includes information on the reason for the correction or removal of the device, an assessment of the health hazard associated with the device, and the volume of product in distribution. The recall strategy also includes provisions for effectiveness checks to ensure that users of devices have been notified of the recall and have taken appropriate action to protect the public health. In general, class I recalls require a check with 100 percent of the device users that received notice of the recall, and class II recalls require checks of 80 percent of device users. No checks are required for class III recalls because there is no public health risk involved. FDA guidelines instruct manufacturers to complete recalls within 6 months from the date of initiation of the recall. FDA reviews and recommends changes, if any, to the proposed recall strategy; advises the manufacturer of the assigned recall classification; and places the recall in its weekly enforcement report. At least once a month, FDA district offices monitoring recalls receive recall status reports from manufacturers that provide updates on the progress of recalls. Status reports on class I and some class II recalls are forwarded to FDA headquarters for review. Upon completion of the recall, the district offices conduct audit checks to confirm that the recalling manufacturer has properly corrected or removed devices from the market, in accordance with the recall strategy.
Audit checks, which generally range from 2 to 10 percent of the total number of device users notified of the recall, are always performed on class I recalls and are usually conducted on class II recalls. After the monitoring district has determined that the recall was effective at notifying device users and that appropriate action has been taken, a recall termination recommendation and summary of recall report is prepared by the district and forwarded to FDA headquarters for termination approval. This report provides FDA headquarters with documentation that reasonable and appropriate actions have been taken by the manufacturer to correct or remove the defective device from the market. FDA requires that the time from when a manufacturer considers the recall completed to FDA’s recall termination approval not exceed 90 workdays. Both the Office of Compliance and the Office of Regulatory Affairs maintain separate automated computer databases that track the processing of recalls. Table IV.1 shows our analysis of the days elapsed for manufacturers and FDA to complete and terminate recalls of tracked devices that were initiated during fiscal years 1994 through 1996. [Table IV.1 is not reproduced here. The recalled device categories included monitors (apnea detector, ventilatory effort); low-energy DC-defibrillators (including paddles); automatic implantable cardioverter defibrillators; implanted programmable infusion pumps; and implantable pacemaker pulse generators. Table notes: For one recall, as of January 16, 1998, FDA had not received a summary recall recommendation report from the district office and, therefore, had not approved the recall for termination. One recall was classified as a safety alert.] The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard are also accepted.
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on whether the Food and Drug Administration (FDA) is providing adequate oversight of the tracking systems of high-risk device manufacturers and whether recalls of devices are executed promptly, focusing on whether: (1) FDA ensures that manufacturers operate tracking systems that are capable of tracking devices through the distribution chain to end users; and (2) device manufacturers and FDA are executing recalls of tracked devices in a timely manner. GAO noted that: (1) there are several weaknesses in FDA's approach for determining whether device manufacturers are operating tracking systems capable of quickly locating and removing defective devices from the market and notifying patients who use them; (2) FDA's inspections of the tracking systems do not include independent audits that could verify the completeness and accuracy of data in the systems; (3) instead, the inspections focus on reviews of the manufacturers' written standard operating procedures for tracking; (4) further, although the good manufacturing practice (GMP) regulations require FDA to inspect manufacturers of tracked devices at least once every 2 years, only about one-half of the 238 manufacturers subject to tracking were inspected during fiscal year (FY) 1996 and FY 1997; (5) FDA attributed its limited inspection activity to a reduction in field resources; (6) FDA has also not acted to ensure that device tracking continues when establishments go out of business, merge, or are acquired by other entities; (7) FDA officials told GAO they are planning to revise their inspection program to include an audit plan to better assess manufacturers' compliance with the tracking requirements and redirect FDA's compliance priorities toward high-risk devices, such as implant devices; (8) the details for most of these plans, however, have not yet been determined; (9) in GAO's analysis of FDA's recall data, manufacturers and FDA have not acted in a timely manner
to correct and remove defective devices from the market; (10) less than one-third of the 54 recalls initiated from FY 1994 through FY 1996 were completed by manufacturers within 6 months, as specified in FDA guidelines; (11) FDA has also had problems terminating device recalls in a timely manner; (12) less than one-half of the 49 recalls reported completed by manufacturers were reviewed and terminated by FDA within the 90-workday standard established by the agency; and (13) FDA officials have identified several factors that may contribute to delays in completing recalls, but an in-depth review of the recall procedures used by manufacturers and FDA has not been conducted.
We interviewed staff from USAID and its implementing partners in Nairobi who had responsibility for oversight of the EFSP-funded operations in both Kenya and Somalia, including some areas considered high security risk. Obligations for cash-based EFSP projects grew from $75.8 million in fiscal year 2010 to $409.5 million in fiscal year 2014—an increase of 440 percent over the 5-year period, the majority of which was in response to a large and sustained humanitarian crisis in Syria, including cash-based food assistance to Syrian refugees in the Syria region. Of the $991 million in total grant funding obligated in fiscal years 2010 to 2014, $330.6 million was for cash interventions and $660.3 million was for voucher interventions. The majority of the funding—$621.7 million (or 63 percent)—was awarded to WFP, and $369.3 million (or 37 percent) was awarded to other implementing partners. To deliver cash-based food assistance, USAID’s implementing partners employ a variety of mechanisms, ranging from direct distribution of cash in envelopes to the use of information technologies such as cell phones and smart cards to redeem electronic vouchers or access accounts established at banks or other financial institutions (see fig. 1). The value of cash and voucher transfers is generally based on a formula that attempts to bridge the gap between people’s food needs and their capacity to cover them. USAID’s oversight of these projects draws on federal internal control standards and the COSO internal control framework, which, according to COSO, has gained broad acceptance and is widely used around the world. Both frameworks include the five components of internal control: control environment, risk assessment, control activities, information and communication, and monitoring. Internal control generally serves as a first line of defense in safeguarding assets, such as cash and vouchers.
In implementing internal control standards, management is responsible for developing the detailed policies, procedures, and practices to fit the entity’s operations and to ensure they are built into and are an integral part of operations. In our March 2015 report, we found that USAID had developed processes for awarding cash-based food assistance grants; however, it lacked formal internal guidance for its process to approve award modifications and provided no guidance for partners on responding to changing market conditions that might warrant an award modification. USAID’s process for awarding EFSP funds. USAID outlined its process for reviewing and deciding to fund proposals for cash-based food assistance projects in the Annual Program Statement (APS) for International Emergency Food Assistance. According to USAID, the APS functions as guidance on cash-based programming by describing design and evaluation criteria for selecting project proposals and explaining the basic steps in the proposal review process. The APS also serves as a primary source of information for prospective applicants that apply for emergency food assistance awards using EFSP resources. Under the terms of the APS, USAID awards new cash-based food assistance grants through either a competitive proposal review or an expedited noncompetitive process. For our March 2015 report, we reviewed 22 proposals for new cash-based food assistance projects that were awarded and active as of June 1, 2014; we found that USAID made 13 of these awards through its competitive process, 7 through an abbreviated noncompetitive review, and 2 under authorities allowing an expedited emergency response. USAID lacked guidance for staff on modifying awards. 
In our March 2015 report, we found that although the APS outlined the review process for new award proposals, neither the current 2013 APS nor the two previous versions provided clear guidance on the process for submission, review, and approval of modifications to existing awards. According to USAID officials, USAID follows a similar process in reviewing requests to modify ongoing awards, which implementing partners may propose for a variety of reasons, such as an increase in the number of beneficiaries within areas covered by an award or a delay in completing cash distributions. Two main types of modifications may be made to a grant agreement—no-cost modifications and cost modifications. For the four case study countries, in our March 2015 report, we reviewed 13 grant agreements made from January 2012 to June 2014 that had 41 modifications during that period. Twenty of these modifications were cost modifications, which resulted in an increase in total funding for the 13 grants from about $91 million to about $626 million, a 591 percent increase. Ten of these cost modifications were made to 1 award, the Syria regional award, whose funding increased from $8 million to $449 million (see fig. 2). The Syria regional award modifications amounted to about 82 percent of the total increase in funding for the cost modifications we reviewed. We concluded that without formal guidance, USAID cannot hold its staff and its partners accountable for taking all necessary steps to justify and document the modification of awards. At the time of our study, USAID noted that its draft internal guidance for modifying awards was under review. In our March 2015 report, we recommended that USAID expedite its efforts to establish formal guidance for staff reviewing modifications of cash-based food assistance grant awards. USAID concurred with our recommendation. In June 2015, USAID reported that it issued written guidance that addresses the review and approval of grant modifications.
We have yet to verify this information to determine whether it addresses the issues we identified. USAID lacked guidance for implementing partners. Additionally, in our March 2015 report we found that, although USAID required partners implementing cash-based food assistance to monitor market conditions, USAID did not provide clear guidance about how to respond when market conditions change—for example, when and how partners might adjust levels of assistance that beneficiaries receive. We analyzed data on the prices of key staple commodities in selected markets for our case study countries from fiscal years 2010 through 2014. We found that the prices of key cereal commodities in Niger and Somalia changed significantly without corresponding adjustments to all implementing partners’ cash-based projects. We did not find similar food price changes in Jordan and Kenya. According to USAID officials, USAID does not have a standard for identifying significant price changes, since the definition of significance is specific to each country and region. In addition, we did not find guidance addressing modifications in response to changing market conditions in the APS. We found that this lack of guidance had resulted in inconsistent responses to changing market conditions among different cash and voucher projects funded by USAID. For example, an implementing partner whose project we reviewed in Kenya predetermined, as part of its project design, when adjustments to cash transfer amounts would be triggered by food price changes, while an implementing partner whose project we reviewed in Niger relied on an ad hoc response. The implementing partner in Kenya established the cash and voucher transfer rate based on the value of the standard food basket; it reviewed prices every month but would change cash and voucher transfer amounts only in response to price fluctuations, in either direction, of more than 10 percent.
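The Kenya partner's predetermined trigger rule can be illustrated with a small sketch: the transfer is re-pegged to the cost of the standard food basket only when a monthly price review finds a change of more than 10 percent in either direction. The function name and currency amounts below are assumptions for illustration, not figures from the project we reviewed.

```python
# Sketch of a price-triggered transfer adjustment, assuming the transfer
# amount equals the cost of a standard food basket and is changed only
# when prices move more than 10 percent in either direction.
TRIGGER = 0.10  # 10 percent threshold, per the project design described above

def adjusted_transfer(current_transfer: float, new_basket_cost: float) -> float:
    """Return the transfer amount after a monthly price review."""
    change = (new_basket_cost - current_transfer) / current_transfer
    if abs(change) > TRIGGER:
        return new_basket_cost  # re-peg the transfer to the basket cost
    return current_transfer     # within threshold: leave transfer unchanged

print(adjusted_transfer(100.0, 105.0))  # 5 percent rise: unchanged at 100.0
print(adjusted_transfer(100.0, 115.0))  # 15 percent rise: adjusted to 115.0
print(adjusted_transfer(100.0, 88.0))   # 12 percent fall: adjusted to 88.0
```

A predetermined rule of this kind trades small, frequent adjustments for administrative stability, which is why the ad hoc approach observed in Niger could leave transfers out of step with prices for longer periods.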
We concluded that without clear guidance about when and how implementing partners should modify cash-based food assistance projects in response to changing market conditions, USAID ran the risk of beneficiaries’ benefits eroding through price increases or inefficient use of scarce project funding when prices decrease. We recommended in our March 2015 report that USAID develop formal guidance to implementing partners for modifying cash-based food assistance projects in response to changes in market conditions. USAID concurred with this recommendation. In June 2015, USAID reported entering into an agreement with the Cash Learning Partnership (CaLP), an organization that is working to improve the use of cash and vouchers, to help develop guidance to implementing partners on adapting programs to changing market conditions. USAID plans to complete this guidance by April 2016. We have yet to verify this information to determine whether it addresses the issues we identified. In our March 2015 report, we found that USAID relied on its implementing partners to implement financial oversight of EFSP projects, but it did not require them to conduct comprehensive risk assessments to plan financial oversight activities—two key components of an internal control framework. In addition, we found that USAID provided little or no guidance to partners and its own staff on carrying out these components. Risk assessments were lacking. Our March 2015 report found that for case study projects we reviewed in four countries, neither USAID nor its implementing partners conducted comprehensive risk assessments that address financial vulnerabilities that may affect cash-based food assistance projects, such as counterfeiting, diversion, and losses. USAID officials told us that they conduct a risk assessment for all USAID’s programs within a country rather than separate risk assessments for cash-based food assistance projects. 
According to USAID, its country-based risk assessments focus primarily on the risks that U.S. government funds may be used for terrorist activities and on the security threat levels that could affect aid workers and beneficiaries; these risk assessments do not address financial vulnerabilities that may affect cash-based food assistance projects, such as counterfeiting, diversion, and losses. A USAID official provided us with internal EFSP guidance to staff on the grant proposal and award process stating that an award would not be delayed if a risk-based assessment has not been conducted. According to USAID officials, its partners have established records of effective performance in implementing cash and voucher projects and they understand the context of operating in these high-risk environments. As a result, USAID expects that its partners will conduct comprehensive risk assessments, including financial risk assessments, and develop appropriate risk mitigation measures for their cash-based food assistance projects. However, none of the partners implementing EFSP-funded projects in our four case study countries had conducted a comprehensive risk assessment based on their guidance or widely accepted standards during the period covered by our March 2015 review. We found that USAID did not require its implementing partners to develop and submit comprehensive risk assessments with mitigation plans as part of the initial grant proposals and award process or as periodic updates, including when grants are modified. USAID officials stated that most EFSP grant proposals and agreements do not contain risk assessments and mitigation plans. In addition, the implementing partners we reviewed had not consistently prioritized the identification or mitigation of financial risks such as counterfeiting, diversion, and losses.
We concluded that without comprehensive risk assessments of its projects, USAID staff would be hampered in developing financial oversight plans to help ensure that partners are implementing the appropriate controls, including financial controls over cash and vouchers to mitigate fraud and misuse of EFSP funds. In our March 2015 report, we recommended that USAID require implementing partners of cash-based food assistance projects to conduct comprehensive risk assessments and submit the results to USAID along with mitigation plans that address financial vulnerabilities such as counterfeiting, diversion, and losses. USAID concurred with our recommendation. In June 2015, USAID noted that the Fiscal Year 2015 APS includes a requirement for applicants to provide an assessment of risk of fraud or diversion and controls in place to prevent any diversion or counterfeiting. We have yet to verify this information to determine whether it addresses the issues we identified. Control activities had weaknesses. In our March 2015 report, we found that USAID's partners had generally implemented financial controls over cash and voucher distributions but the partners' financial oversight guidance had weaknesses. We reviewed selected distribution documents for three implementing partners with projects that began around 2012 in our four case study countries (Jordan, Kenya, Niger, and Somalia). Our review found that the three implementing partners had generally implemented financial controls over their cash and voucher distribution processes. For example, in Niger, we verified that there were completed and signed beneficiary payment distribution lists with thumb prints; field cash payment reconciliation reports that were signed by the partner, the financial service provider, and the village chief; and payment reconciliation reports prepared, signed, and stamped by the financial service provider.
Additionally, we determined that these three implementing partners generally had proper segregation of financial activities between their finance and program teams. Nonetheless, in Kenya, our review showed that in some instances, significant events affecting the cash distribution process were not explained in the supporting documentation. Our review also found that in most instances the implementing partners had submitted reports required by their grant awards, and generally within the required time frames; in addition, we found that these reports contained the key reporting elements required by the grant award. However, in some instances, we were unable to determine whether quarterly reports were submitted on time because USAID was unable to provide us with the dates when it received these reports from the implementing partner. According to USAID officials, USAID does not have a uniform system for recording the date of receipt for quarterly progress reports and relies on FFP officers to provide this information; however, individual FFP officers have different methods for keeping track of the reports and the dates on which they were received. Financial oversight guidance had gaps. In our March 2015 report, we found that implementing partners in the four case study countries we reviewed had developed some financial oversight guidance for their cash and voucher projects, but we found gaps in the guidance that could hinder effective implementation of financial control activities. For example, one implementing partner developed a financial procedures directive in 2013 that requires, among other things, risk assessments, reconciliations, and disbursement controls. However, the directive lacked guidance on how to estimate and report losses. Another implementing partner had developed field financial guidance in 2013 that provides standardized policies and procedures for financial management and accounting in the partner’s field offices. 
However, the implementing partner acknowledged that the field manual does not address financial procedures specifically for voucher projects. In addition, we found that USAID's guidance to partners on financial control activities is limited. For example, USAID lacked guidance to aid implementing partners in estimating and reporting losses. We concluded that when implementing partners for EFSP projects have gaps in financial guidance and limitations with regard to oversight of cash-based food assistance projects, the partners may not put in place appropriate controls for areas that are most vulnerable to fraud, diversion, and misuse of EFSP funding. In our March 2015 report, we recommended that USAID develop a policy and comprehensive guidance for USAID staff and implementing partners for financial oversight of cash-based food assistance projects. USAID concurred with our recommendation and in June 2015 reported that CaLP is expected, as part of its award, to work on the development and dissemination of policy and guidance related to cash-based food assistance. USAID plans to complete this effort by April 2016. We have not yet verified this information to determine whether it addresses the issues we identified. Limitations in USAID's field financial oversight. As we reported in March 2015, according to USAID officials, Washington-based country backstop officers (CBOs) perform desk reviews of implementing partners' financial reports and quarterly and final program reports and share this information with FFP officers in the field; in addition, both the Washington-based CBOs and FFP officers in-country conduct field visits. However, we found that the ability of the CBOs and FFP officers to consistently perform financial oversight in the field may be constrained by limited staff resources, security-related travel restrictions and requirements, and a lack of specific guidance on conducting oversight of cash transfer and food voucher programs.
Field visits are an integral part of financial oversight and a key control to help ensure management’s objectives are carried out. They allow CBOs and FFP officers to physically verify the project’s implementation, observe cash disbursements, and conduct meetings with beneficiaries and implementing partners to determine whether the project is being implemented in accordance with the grant award. According to the CBOs and FFP officers, the frequency of field visits for financial oversight depends on staff availability and security access. In our four case study countries, the FFP officers told us that because of their large portfolios and conflicting priorities, they performed limited site visits for the projects that we reviewed. In Kenya, the FFP officer told us that her portfolio covered 14 counties, and the cash-based food assistance project we reviewed was just one component. Owing to the demands of all her projects, she had been able to perform limited site visits for the projects we reviewed. We also found that USAID had two staff members in the field to oversee its Syria regional cash-based projects spread over five countries that had received approximately $450 million in EFSP funding from July 2012 through December 2014. Because of staff limitations, FFP officers primarily rely on implementing partners’ reports from the field and regular meetings with them to determine whether a project is being executed as intended. However, USAID’s guidance to its FFP officers and its implementing partners on financial oversight and reporting is limited. For example, FFP staff in Niger stated that they have had insufficient guidance and training on financial oversight of cash-based food assistance projects. Furthermore, the FFP officers told us that USAID is not prescriptive in the financial oversight procedures it expects from its implementing partners. Additionally, they noted that USAID has not set a quantitative target for site visits by FFP officers. 
FFP officers in our four case study countries told us that they use a risk-based approach to select which sites to visit. We concluded that without systematic financial oversight of the distribution of cash and voucher activities in the field, USAID is hampered in providing reasonable assurance that EFSP funds are being used for their intended purposes. In our March 2015 report, we recommended that USAID require its staff to conduct systematic financial oversight of USAID's cash-based food assistance projects in the field. USAID concurred with this recommendation. As of June 2015, USAID reported that it is working to develop training for its staff and will continue to explore using third-party monitors where security constraints may be an issue. USAID plans to complete these actions by April 2016. We have not yet verified this information to determine whether it addresses the issues we identified. Chairman Rouzer, Ranking Member Costa, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have questions about this testimony, please contact Thomas Melito, Director, International Affairs and Trade, at (202) 512-9601 or melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Joy Labez (Assistant Director), Rathi Bose, Ming Chen, Beryl H. Davis, David Dayton, Martin De Alteriis, Fang He, Teresa Abruzzo Heger, Dainia Lawes, Kimberly McGatlin, Diane Morris, Shannon Roe, Barbara Shields, Sushmita Srikanth, and Dan Will.
For over 60 years, the United States has provided assistance to food-insecure countries primarily in the form of food commodities procured in the United States and transported overseas. In recent years, the United States has joined other major donors in increasingly providing food assistance in the form of cash or vouchers. In fiscal year 2014, U.S.-funded cash and voucher projects in 28 countries totaled about $410 million, the majority of which was for the Syria crisis, making the United States the largest single donor of cash-based food assistance. This testimony summarizes GAO's March 2015 report (GAO-15-328) that (1) reviewed USAID's processes for awarding and modifying cash-based food assistance projects and (2) assessed the extent to which USAID and its implementing partners have implemented financial controls to help ensure appropriate oversight of such projects. GAO analyzed program data and documents for selected projects in Jordan, Kenya, Niger, and Somalia; interviewed relevant officials; and conducted fieldwork in Jordan, Kenya, and Niger. The U.S. Agency for International Development (USAID) awards new cash-based food assistance grants under its Emergency Food Security Program (EFSP) through a competitive proposal review or an expedited noncompetitive process; however, USAID lacks formal internal guidance for modifying awards. In its March 2015 review of 22 grant awards, GAO found that USAID made 13 through its competitive process, 7 through an abbreviated noncompetitive review, and 2 under authorities allowing an expedited emergency response. According to USAID, the agency follows a similar process for modification requests. Partners may propose cost or no-cost modifications for a variety of reasons, such as an increase in the number of beneficiaries or changing market conditions affecting food prices. 
In its review of 13 grant awards that had been modified, GAO found that cost modifications for 8 awards resulted in an increase in funding for the 13 awards from about $91 million to $626 million. According to USAID, procedures for modifying awards have been updated but GAO has yet to verify this information. GAO also found that though USAID requires partners to monitor market conditions—a key factor that may trigger an award modification—it did not provide guidance on when and how to respond to changing market conditions. GAO concluded that, until USAID institutes formal guidance, it cannot hold its staff and implementing partners accountable for taking all necessary steps to justify and document the modification of awards. USAID relies on implementing partners for financial oversight of EFSP projects but did not require them to conduct comprehensive risk assessments to plan financial oversight activities, and it provided little related procedural guidance to partners and its own staff. For projects in four case study countries reviewed in its March 2015 report, GAO found that neither USAID nor its implementing partners conducted comprehensive risk assessments to identify and mitigate financial vulnerabilities. Additionally, although USAID's partners had generally implemented financial controls over cash and voucher distributions that GAO reviewed, some partners' guidance for financial oversight had weaknesses, such as a lack of information on how to estimate and report losses. In addition, GAO found that USAID had limited guidance on financial control activities and provided no information to aid partners in estimating and reporting losses. As a result, partners may neglect to implement appropriate financial controls in areas that are most vulnerable to fraud, diversion, and misuse of EFSP funding.
GAO's March 2015 report included recommendations to strengthen USAID's guidance for staff on approving award modifications and guidance for partners on responding to changing market conditions. GAO also made recommendations to strengthen financial oversight of cash-based food assistance projects by addressing gaps in USAID's guidance on risk assessments and mitigation plans and on financial control activities. USAID concurred with the recommendations.
In 1990, as part of its effort to meet federal clean air standards, California adopted a mandate that effectively requires automobile companies to offer electric vehicles (EVs) for sale there beginning in 1998. Subsequently, similar legislation was passed in several northeastern states. However, the automobile companies believed that without suitable advanced batteries, EVs would be expensive and limited in performance and therefore difficult to sell in the quantities mandated by the states. To address this need for advanced batteries through jointly sponsored research, Chrysler, Ford, and General Motors established the United States Advanced Battery Consortium (USABC) in early 1991. Because EVs could help reduce mobile-source air pollution while allowing electric utilities to use excess capacity during off-peak hours, the Electric Power Research Institute (EPRI), along with several individual utility companies, agreed to participate in the consortium in mid-1991. Then, responding to a legislative mandate to pursue the benefits of EVs, DOE agreed to cooperate with the consortium's research effort in late 1991. The relationship between DOE and the three automotive partners in the USABC is governed by a cooperative agreement. This agreement requires DOE to be substantially involved in managing the program and explains how it should be involved. (App. I provides information on DOE's role in managing the consortium.) In addition, the cooperative agreement spells out the details of other important issues, such as the ownership of new technology developed under the program and the potential recoupment, or repayment, of DOE's investment in the program. (App. II provides information on repayment provisions applicable to both DOE and the three USABC partners.) The consortium carries out its work through contracts with seven battery firms and through cooperative research and development agreements (CRADA) with five of DOE's national laboratories.
In some instances, the consortium has selected two battery developers to work on the same technology to encourage competition, enhance the chances for success, and potentially provide the automobile companies with multiple battery suppliers. The five DOE national laboratories were generally selected on the basis of past experience with promising technologies and/or their ability to objectively test battery hardware. (App. IV provides details on the consortium’s contracts and CRADAs.) According to USABC’s original budget proposal, the consortium hoped to obtain about 28 percent of the total program budget, or about $74 million, from the battery developers through cost-sharing provisions in their contracts. However, the cost-sharing percentages vary for each developer, and some developers may eventually join or leave the program. Therefore, the exact cost-sharing percentages for the industry participants will not be known until the end of the program. DOE’s national laboratories did not provide any more funding beyond the 50-percent share committed by DOE. On the one hand, advanced batteries meeting USABC’s long-term goals have not yet been proven technically feasible. On the other hand, batteries meeting the consortium’s mid-term goals, while potentially achievable, will not enable EVs to offer performance or costs comparable to those of gasoline-powered vehicles and therefore offer limited market potential. (Information on the consortium’s long-term and mid-term goals can be found in app. V.) The automobile companies established USABC because they believed that existing battery technologies would result in EVs with limited driving range—generally well under 100 miles and sometimes as short as 30 miles, depending upon the terrain and weather conditions. Moreover, existing batteries would have to be replaced frequently, greatly increasing the operating cost of EVs. 
According to DOE and consortium officials, the automakers did not believe such vehicles would be acceptable to consumers and therefore originally proposed that the consortium set its sights strictly on long-term goals. Batteries meeting these goals would store enough energy and power to give EVs the driving range and acceleration of gasoline-powered vehicles at approximately the same lifetime costs. Such EVs would be fully competitive with conventional vehicles. However, during early discussions with DOE officials in charge of this program, it became clear to the automobile companies that long-term batteries were unlikely to become practical in time to help address the states’ EV mandates. In fact, DOE officials who had considerable experience with advanced battery research convinced the automobile companies that considerable uncertainty existed as to whether long-term batteries could be successfully developed. DOE suggested that it would be prudent to also pursue a second set of more readily achievable mid-term goals. DOE stated that (1) mid-term batteries were worth pursuing in their own right because they are significantly better than current technology; (2) developing a successful mid-term battery could help reestablish a strong domestic battery industry; and (3) mid-term batteries could enable the automakers to gather data on the performance of EVs that would apply to long-term batteries if and when they are developed. As a consequence, the consortium adopted both the long-term goals originally championed by the automobile companies and a more readily achievable set of mid-term goals recommended by DOE. At that time, consortium officials believed that the mid-term goals could be reached within the 1990s, making mid-term batteries available within the approximate time frame when the states’ EV mandates would take effect. EVs with long-term batteries are expected to be competitive with gasoline-powered vehicles in terms of performance and cost. 
If that goal is achieved, such EVs could significantly penetrate the consumer vehicle market. Significant EV sales could reduce petroleum use and increase energy security by replacing imported and domestic petroleum fuels used by conventional vehicles with electricity, which is generated mostly with domestically produced fuels. In addition, the air quality benefits that could follow from replacing petroleum would support important national environmental objectives. The emissions from a relatively small number of stationary electricity generating plants can be more easily controlled than the emissions from a large number of conventional vehicles. However, it remains unclear whether the feasibility of a long-term battery will be demonstrated. The consortium’s original goal was to demonstrate the design feasibility of a long-term battery pack by 1994. Had this goal been achieved, pilot-plant production could potentially have begun several years later, leading to full-scale production early in the next decade. As of mid-1995, long-term battery research still involved small cells with about 1,000 times less energy than a vehicle-size battery pack. However, under the best-case scenario, if breakthroughs are achieved, a battery pack for one type of long-term battery could be proven feasible by 1998. This scenario could lead to pilot-plant production by 2000 and to full production by the middle of the next decade. According to USABC officials, batteries that achieve the mid-term goals are likely to be feasible but vehicles with them will have a much shorter driving range than gasoline-powered vehicles and are likely to cost more. The officials added that while such performance may help in meeting the states’ mandates, they do not believe these vehicles will perform well enough to have wide appeal to large numbers of consumers. Consequently, they do not believe that sales in excess of the state-mandated quantities are likely. 
The driving range of EVs with mid-term batteries is expected to be about 100 miles under realistic conditions that require extra power for such things as heating, cooling, and climbing hills. According to officials of the consortium’s Management Committee, this range will be acceptable only to a limited number of consumers. Also, automobile companies’ market research indicates that consumers will be unwilling to pay a high premium for EVs with limited range. According to the consortium’s cost estimates, during the early years of commercialization, before full-scale production is achieved, a mid-term battery pack alone would cost from $9,000 to $15,000. Even after full-scale production is achieved, the consortium’s latest estimate is that such batteries would cost about $7,000. This figure exceeds the mid-term cost goal by about $2,500. Therefore, particularly during these early years, the automobile companies believe that large subsidies will be needed to sell even the mandated quantities of EVs. Furthermore, according to consortium officials, government and industry have so far been unable or unwilling to offer adequate subsidies. The only currently available federal subsidy, provided under the Energy Policy Act of 1992, is limited to a $4,000 tax credit per vehicle and may be reduced below that amount for several reasons. Consortium officials also contend that state and local governments have not stepped forward to offer significant subsidies for EVs. DOE officials in charge of the program are more optimistic than consortium officials about the prospects for EVs with mid-term batteries. In commenting on our draft report, DOE stated that light-duty vans and passenger cars with mid-term batteries can achieve a reliable range from 70 to over 100 miles, respectively, on a single charge. They believe these vehicles will satisfy the needs of many fleet operators as well as private consumers whose daily driving distances are relatively short. 
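The mid-term cost goal itself is not stated directly above, but it follows from the figures given: the consortium's latest full-production estimate of about $7,000 exceeds the goal by about $2,500. A sketch of that arithmetic (dollar amounts from the text; the variable names are ours):

```python
# Consortium estimates cited in the report (dollars per battery pack).
full_production_estimate = 7_000  # latest estimate after full-scale production
overage = 2_500                   # amount by which the estimate exceeds the goal

# The implied mid-term battery cost goal.
implied_cost_goal = full_production_estimate - overage
print(implied_cost_goal)  # 4500
```

The early-commercialization estimates of $9,000 to $15,000 per pack thus run roughly two to three times the implied goal, which is why the automobile companies expected large subsidies to be necessary in those years.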
For example, DOE officials believe that electric vehicles with mid-term batteries might be successful in niche markets, such as electric utility fleets. They also believe that full-scale production and lower cost can be achieved relatively quickly through such strategies as developing other customers for mid-term batteries in the recreational vehicle and foreign EV markets. Using this approach, DOE officials have estimated that with the federal incentive, an EV with a mid-term battery will exceed the cost of a gasoline-powered vehicle by only about $2,500. They believe sales incentives from the automobile companies themselves would be sufficient to address this remaining cost increment. Despite DOE’s estimates, consortium officials doubt that EVs with mid-term batteries can achieve any significant market penetration. While mid-term batteries would extend the range of EVs beyond that provided by using existing batteries, the consortium and DOE both agree that these EVs would not be comparable in performance to gasoline-powered vehicles and would cost more. Both the consortium and DOE also believe that significant market penetration would be required to achieve any widespread benefits in terms of energy security or environmental improvement. Despite their concerns about the market’s acceptance of mid-term batteries, USABC officials concluded that they are a necessary step toward the ultimate goal of commercializing long-term batteries. Therefore, the consortium has continued to pursue the mid-term goals and has made progress toward developing a workable mid-term battery. The consortium originally planned to demonstrate the feasibility and begin the pilot-plant production of mid-term batteries by 1994; full commercial production (over 10,000 battery packs a year) could then have begun by approximately 1998. As of mid-1995, most aspects of technical feasibility had been demonstrated for one mid-term battery technology. 
However, the developers needed more time to demonstrate that the battery could meet the goal of lasting 5 years. The consortium now expects that the pilot-plant production of this battery will begin in 1996 and that full production will begin in 2000 or 2001. Hence, existing lead-acid batteries are likely to be used during the first few years of the states’ EV mandates that will begin in 1998. DOE and USABC officials attributed their inability to meet the original target dates for both mid-term and long-term batteries to two factors. First, lengthy contract and CRADA negotiations delayed the start of some work for as much as a year. For example, negotiations with battery companies were delayed because of their reluctance to agree to consortium-required cost-sharing provisions and DOE-required patent provisions that threatened companies’ ownership of previously developed background technology. Second, technical challenges proved to be more difficult than anticipated, causing delays of a year or more. Each of the technologies under development has presented significant technical barriers, including the high cost of certain materials, the difficulty of fabricating battery components to meet demanding specifications, and a shortened battery life caused by corrosive materials. Phase I of the USABC program began in 1991 and has included research on both mid-term and long-term batteries. Consortium officials believe that the budget originally planned for Phase I can carry the program through 1997, during which time most of the work on mid-term batteries would be completed. The consortium has also formulated a Phase II plan that would focus primarily on continuing research on long-term batteries through 1999. The consortium’s original budget was $262 million for 1991 through 1995. However, as explained above, progress has been delayed because of difficulties in negotiating agreements and contracts and greater-than-expected technical barriers. 
Consequently, spending has been slower than anticipated. As of March 1995, about $123 million, or less than half the total budget, had been paid out by the consortium for all expenses. Meanwhile, through June 1995, planned expenditures included a total of about $181 million of Phase I funds that have been obligated through contracts and CRADAs, of which about 45 percent was allocated to mid-term projects and 55 percent to long-term projects. Additionally, in-kind contributions and general and administrative expenses in connection with these obligations were expected to total about $11 million by the end of Phase I. (App. VI contains information on appropriations received by DOE to cover its 50-percent share of these expenditures.) USABC officials hope to extend the cooperative agreement beyond 1995 to complete the development effort. The consortium believes the original $262 million will be sufficient to continue the work through 1997. During this period, Phase I, including most of the work on mid-term battery development, would be completed, and Phase II would begin, focusing primarily on the continued development of long-term batteries. During 1998 and 1999, the consortium hopes to continue Phase II to complete the work on long-term batteries. Phase II would require a total of about $81 million in additional funds for those 2 years. The consortium plans to ask DOE to provide almost half of that amount—about $38 million—or about $19 million a year. DOE is aware of this plan but has not yet endorsed it, pending a more detailed explanation from consortium officials of how these funds would be used by the battery developers. According to USABC officials, Phase II of the program depends upon fully funding Phase I, which is now somewhat in jeopardy. The consortium’s original budget called for the battery development companies to share a portion of the program’s total costs. 
However, early in the program, the consortium decided to significantly increase the amount of testing and battery development work to be performed by DOE’s national laboratories, thereby reducing the amount of work to be done by the battery companies. Since the laboratories do not contribute any share of the costs beyond DOE’s 50-percent share, this change has resulted in a shortfall of about $19.6 million that would have been provided by battery companies, had they done the work now being performed by the laboratories. When combined with lost matching funds that would have come from DOE, the $19.6 million shortfall becomes about $39 million, dropping the program’s total funding from the planned $262 million to only $223 million. Consortium officials stated that the automobile companies are willing to increase their contribution enough to restore full funding for Phase I if the government makes a commitment to Phase II. However, they indicated that the automobile companies may be unwilling to commit these extra funds if Phase II is not approved. Consortium officials also stated that they will need a clear signal by approximately October 1995 as to whether the Congress intends to provide funding for Phase II, or they will have to begin cutting back on existing contracts to avoid jeopardizing their ability to complete any of them. However, they believe that cutting back on work by specific battery developers this year would require decisions based on expert opinion rather than on actual test results, which are not yet available. As a result, companies that might develop a viable advanced battery, given more time, might be prematurely eliminated from the program. Without funding for Phase II, consortium officials expect that, at best, just one of the three current long-term contracts could be continued. They believe such a cutback would seriously diminish the chances for success in the long-term program. 
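Because DOE matches the non-federal contributions dollar for dollar under its 50-percent share, the battery companies' lost cost share roughly doubles once the forgone federal match is counted. A sketch of the arithmetic reported above (amounts in millions of dollars; the variable names are ours):

```python
# Cost share lost when development work shifted from battery companies
# to the national laboratories, which do not cost-share.
developer_shortfall = 19.6

# DOE's 50-percent match is forgone on that same amount.
doe_lost_match = developer_shortfall

total_shortfall = developer_shortfall + doe_lost_match  # about 39
planned_budget = 262
reduced_budget = planned_budget - round(total_shortfall)  # about 223
print(round(total_shortfall), reduced_budget)
```

This doubling effect is why a seemingly modest reallocation of work to the laboratories reduced the program's total funding by roughly $39 million, from the planned $262 million to about $223 million.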
According to DOE and industry officials, DOE program officials have made valuable contributions to the management of the consortium. In addition, DOE contracting officials plan audits to ensure that the program’s costs are adequately accounted for. Nonetheless, greater attention by DOE to lessons learned during the program could improve the efficiency of both USABC and similar cooperative efforts in the future. Both the members of the USABC Management Committee and the DOE officials in charge of the program stated that DOE has been actively involved in managing and overseeing the consortium. DOE helps manage the consortium through participation in the Management Committee, which is responsible for key decisions such as selecting the technology to be developed and contractors. This committee includes executives from each of the three automobile companies and representatives of DOE and EPRI. Technically, DOE does not have voting authority on the committee. However, an automobile industry committee member stated that no important decisions are made without the concurrence of DOE’s representative. Moreover, DOE officials said that their control of half the program’s funding gives them de facto veto power over key management decisions. On a more technical level, DOE personnel provide oversight and guidance to the consortium through representation on the Technical Advisory Committee, which supports the Management Committee. Besides actively participating in meetings of the Technical Advisory Committee, DOE experts also serve as members of individual working groups that oversee the work of each battery developer and national laboratory involved in the program. Ensuring the allowability of costs is a concern because the costs submitted by the consortium and the developers are used as the basis to compute DOE’s 50-percent share of the program’s costs. The consortium partners and developers are required to make significant cost-sharing contributions to the program. 
Ensuring that only allowable costs are submitted is important in order to avoid reimbursing the USABC partners or the developers for costs that should be covered by their own contributions. During the course of our review, we raised a potential concern about the adequacy of the audit coverage of the costs claimed by the consortium and the battery developers. DOE contracting officials told us that an audit to determine the allowability of costs claimed by the consortium is not required but that one might be done at the conclusion of the program. The DOE officials also said that USABC is responsible for determining the allowability of costs claimed by the battery developers, with technical input from DOE’s program manager. USABC Management Committee officials said that the primary responsibility for ensuring the allowability of the developers’ costs lies with the individual USABC program managers, who are primarily automobile company employees. They also said they were not certain whether the consortium would conduct formal close-out audits of the battery developers. At the conclusion of our review, the DOE contracting officer responsible for the USABC program told us that there would definitely be a close-out audit of the consortium and that this audit would include a review of the costs claimed by the battery developers and reported to the consortium. The contracting officer also said that DOE would initiate individual audits of the developers’ books, if necessary. In addition, the contracting officer said that the consortium is required by the regulations governing the cooperative agreement to conduct close-out audits of battery developers’ costs and, if necessary, to request DOE’s assistance in conducting such audits. Greater attention to lessons learned during the USABC program could improve cooperative efforts between DOE and the industry. 
After a meeting of industry and government officials initiated by DOE in July 1993, the problems encountered and recommended actions to address them were compiled by a consultant in a “lessons learned” document. Most of the problems identified involved industry’s perceptions of barriers created by DOE or overall government policies and procedures. Many of the 17 items discussed in the document concerned delays. For example, the document pointed out that the program was delayed by the lengthy negotiation of contracts. Contributing to these delays was the reluctance on the part of battery developers to agree to certain DOE-required provisions, such as “march-in” rights that enable DOE to take ownership of technology if it is not commercialized by the developer within a suitable time period. In addition, industry officials attributed delays to DOE’s policy of avoiding direct involvement in contract negotiations but subsequently insisting on reviewing and approving each contract provision and modification. In the case of CRADAs, industry officials contended that multiple levels of review and approval by several DOE field offices and by DOE headquarters resulted in additional delays in the program. The lessons learned document included several industry-recommended actions to be taken by DOE to eliminate or minimize these causes of delays, including (1) developing a new approach to issues involving the ownership and use of technology that would be specifically applicable to cooperative agreements; (2) allowing DOE contracting officers to actively participate in negotiations with contractors; and (3) streamlining the process of reviewing and approving contracts and agreements. DOE officials who participated in a meeting devoted to the lessons learned agreed with some of the recommendations, such as one that called for DOE to develop a model CRADA to standardize the process of negotiating CRADAs with the various DOE national laboratories. 
Subsequently, a model CRADA was developed. However, these DOE officials also argued that some of the recommended actions were unnecessary or inappropriate. For example, they did not agree that provisions about the government’s ownership and use of technology were the main reason for delays in negotiating contracts, nor did they believe that a new approach to these provisions was imperative. Also, the DOE officials argued that it would be a conflict of interest for contracting officers, who are responsible for approving contracts, to also take part in negotiating those contracts. In subsequent discussions with us, however, several DOE officials endorsed certain of the recommendations in the lessons learned document. For example, the patent attorney overseeing USABC’s affairs in DOE’s Chicago Operations Office agreed with the assertion that there is a need for new rules governing the government’s ownership and use of technology for programs like the USABC program. Also, the director of DOE’s Office of Procurement, Assistance, and Property stated that industry’s call for greater involvement in negotiations by DOE contracting officers contained some logic and was worth considering. In addition, the director of DOE’s Electric and Hybrid Propulsion Division told us that new procurement rules, such as those recommended in the lessons learned document, are needed to provide greater flexibility for cooperative agreements. According to DOE program officials, implementing many of the recommendations would require action by other DOE offices, such as those responsible for procurement and patent rights. They said they had sent copies of the lessons learned document to these offices and encouraged them to implement changes where feasible. However, when we contacted officials of these other offices, they indicated that the document’s recommendations had not received serious consideration. 
The procurement official cited above told us that program officials had not built a convincing enough case for the changes they sought by merely distributing copies of the lessons learned document. Thus, it appears that uncertainties and/or disagreements within DOE about the utility or appropriateness of some recommendations in the lessons learned document were not addressed, and others were not carefully evaluated by DOE officials in a position to effect changes. Consequently, it is uncertain which, if any, of the recommendations would have been practical to implement or what improvements to the USABC program might have resulted from their implementation. Moreover, by not following through on the lessons learned in this program, DOE may have missed an opportunity to improve the efficiency of future cooperative efforts with industry. Advanced batteries that would make electric vehicles fully competitive with gasoline-powered vehicles have not yet been proven to be feasible, although DOE and United States Advanced Battery Consortium officials believe that continued research on these batteries is justified. Progress has been made toward developing mid-term battery technologies, but these batteries will probably not be available until several years after the states’ mandates for the sale of electric vehicles begin in 1998. Because the batteries will not make electric vehicles fully competitive with gasoline-powered vehicles, the energy security and environmental benefits of mid-term batteries appear limited. To reach the goal of developing a long-term advanced battery, consortium officials believe that approximately $38 million in additional federal appropriations will be needed. If these funds are received, the consortium hopes that pilot-plant production can begin by 2000. 
But if there is no indication that the extra funds will be available, consortium officials believe that some contracts may have to be terminated before sufficient data are available to aid in decision-making. They believe that such action would significantly reduce the chances for the successful development of a long-term battery. DOE did not follow up on several lessons learned during this program that could benefit future efforts based on similar cooperative agreements. Industry officials believe that certain actions, such as streamlining DOE’s contract review procedures, could help prevent programs like the United States Advanced Battery Consortium program from falling behind schedule. GAO recommends that the Secretary of Energy give more careful consideration to the document entitled Lessons Learned Under the United States Advanced Battery Consortium to determine whether any of its recommendations should be implemented and develop an action plan for implementing those that are warranted. We provided a draft of this report to DOE for written comments. (These comments are contained in app. VII.) While agreeing with our characterization of the feasibility of long-term batteries, DOE said that our draft report underestimated the potential prospects for the technology used for mid-term batteries. We had stated in the draft report that DOE officials are more optimistic about mid-term batteries than consortium officials and had summarized their reasons for being optimistic. DOE’s written comments provide more detailed information on this point, some of which we have added to our final report. With respect to information in the draft report summarizing the consortium’s plan to seek additional funds for the program, DOE stated that it views the plan as prudent and well considered. DOE accepted our recommendation that it give more careful consideration to implementing changes called for in the document identifying lessons learned under the program. 
DOE also provided suggested editorial changes, which we have made where appropriate. To respond to your request, we met with officials of DOE, USABC, the electric utility industry, battery development contractors, and national laboratories. We also had discussions with representatives of an independent EV manufacturer and a producer of currently available EV batteries. We also obtained and reviewed pertinent documentation from these sources. We conducted our review between September 1994 and June 1995 in accordance with generally accepted government auditing standards. (App. VIII provides a more detailed discussion of our objectives, scope, and methodology, including a complete listing of the persons contacted during our review.) Unless you publicly announce its content earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees, the Secretary of Energy, and the USABC Management Committee. We will make copies available to others upon request. Please call me at (202) 512-3841 if you have any questions. Major contributors to this report are listed in appendix IX. A list of GAO products related to this issue appears on the last page of this report. The U.S. Advanced Battery Consortium (USABC) established a framework for cooperation among various organizations seeking to develop advanced batteries for electric vehicles (EV) in the United States. This framework was established by a series of agreements among the stakeholders—automobile companies, the Department of Energy (DOE), electric utilities, battery developers, and DOE’s national laboratories. This appendix summarizes these agreements, the management structure resulting from them, and DOE’s role in that structure. The relationships between the parties in USABC are governed by a partnership agreement, a participation agreement, and a cooperative agreement, all signed during 1991. 
Subsequently, USABC signed development contracts with battery companies and cooperative research and development agreements (CRADA) with DOE’s national laboratories. Figure I.1 illustrates the relationships established by these agreements. USABC’s organizational structure includes four committees—the Partners Committee, the Management Committee, the Project Committee, and the Technical Advisory Committee, each responsible for different aspects of USABC’s decision-making process. In addition, each battery development contract and laboratory CRADA is assigned a battery technology work group, headed by a program manager. Figure I.2 illustrates how these structures contribute to accomplishing USABC’s work. The Partners Committee, consisting of one senior executive from each of Chrysler, Ford, and General Motors, sets high-level objectives and policy. The Management Committee, consisting of managers from Chrysler, Ford, and General Motors with voting rights and nonvoting participants from DOE and EPRI, sets management policies, selects technologies and contractors, and allocates funds, among other responsibilities. DOE’s cooperative agreement with USABC requires DOE to be actively involved in the management of the consortium. To fulfill this requirement, DOE headquarters staff provide management and technical input into USABC’s battery development efforts. Other DOE headquarters staff deal with legal and contracting issues pertaining to the consortium. DOE’s Chicago Operations Office and various area offices are also involved in supporting DOE activities with USABC. Figure I.3 illustrates the various roles played by DOE staff in the management and oversight of USABC. Both DOE and the USABC partners are, under certain conditions, allowed repayment of their financial contributions to the consortium. Repayment is to be made by battery producers after batteries developed by USABC are commercialized.
Provisions for repayment were negotiated between DOE and USABC in the cooperative agreement and subsequently between USABC and the battery development contractors. As required by the agreement, USABC included a provision for repaying DOE in all the battery development contracts. Repayment provisions outlined in the USABC cooperative agreement and in the battery development contracts stipulate that DOE’s repayment is based upon (1) revenue received by USABC or its battery developers from the licensing of patents to third-party battery manufacturers and (2) any payments to USABC or its contractors upon the liquidation or winding up of USABC’s business. Exempt from the repayment provisions are USABC, the USABC partners, certain companies associated with the partners, EPRI, and EPRI participants, who all can acquire a license to use the patents without paying licensing fees to DOE. In addition, a subsequent amendment to the cooperative agreement allows repayment to DOE on the basis of revenues from battery sales in addition to licensing fees. However, such a provision has been included in only one battery development contract. DOE is to be repaid an amount no greater than the total amount of funding it provides to the program. The repayment obligation ends either after 20 years or when the entire DOE contribution has been repaid, whichever occurs first. The repayment obligation can be waived, in whole or in part, if DOE determines that repayment places USABC or its battery developers at a competitive disadvantage. Three of the battery development contracts place an additional stipulation on DOE’s ability to obtain repayment on the basis of licensing fees. That is, repayment does not begin until battery sales by the developer and/or licensee reach a specified level. As noted earlier, one contract does contain a provision granting DOE an opportunity to obtain repayment on the basis of revenues from the sale of batteries by the developer. 
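These repayment terms amount to a simple cap-and-cutoff rule, sketched below. This is an illustration, not the contracts' actual language; in particular, the royalty_rate parameter (the share of licensing revenue flowing back to DOE) is a hypothetical value, since the report does not specify one.

```python
# Illustrative sketch of DOE's repayment terms as described above; the
# royalty_rate parameter is hypothetical (the report does not give one).
def doe_repayment_due(licensing_revenue, repaid_so_far,
                      doe_contribution, years_elapsed, royalty_rate=0.10):
    """Return the additional repayment DOE could collect, if any."""
    # The obligation ends after 20 years or once DOE is fully repaid,
    # whichever occurs first.
    if years_elapsed > 20 or repaid_so_far >= doe_contribution:
        return 0.0
    owed = licensing_revenue * royalty_rate - repaid_so_far
    remaining_cap = doe_contribution - repaid_so_far  # never exceed DOE's total
    return max(0.0, min(owed, remaining_cap))

# Example: $96 million contributed, $500 million in licensing revenue, year 5.
print(doe_repayment_due(500.0, 0.0, 96.0, years_elapsed=5))
# After year 20, the obligation lapses regardless of revenue.
print(doe_repayment_due(500.0, 0.0, 96.0, years_elapsed=25))
```

Note that the waiver authority described above (if repayment would place USABC or its developers at a competitive disadvantage) sits outside this mechanical rule and is omitted from the sketch.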
The USABC partners are also entitled to obtain repayment of their financial contributions to the consortium. In most instances, the partners’ ability to receive repayment depends upon two sources—battery sales revenues and license fees. In addition, most contract repayment provisions allow the USABC partners to receive up to 20 percent more than they contributed to the battery developer. According to consortium officials, this extra repayment will compensate the USABC partners for the financial risks of supporting the battery developers. At the same time, it will enable them to compete favorably in the EV market with other automobile companies that did not support the research effort but may be able to purchase advanced batteries developed under the program at the same price as the USABC partners. Figure II.1 shows some of the types of repayment provisions for DOE and the USABC partners and the number of battery development contracts containing each provision. In addition to repayment provisions based on revenues from EV battery sales and license fees, some contracts contain provisions that allow repayment on the basis of revenues from the sale of batteries for non-EV automobile applications and for use by electric utilities. As shown in the figure, five of the eight battery development contracts allow the USABC partners to receive repayment from revenues generated by battery sales. In addition, the repayment provisions allow the USABC partners to be repaid more than they contributed. Therefore, if the batteries developed by the USABC are commercially produced for sale, the USABC partners are likely to be repaid for their financial investment in the consortium. The provisions for repaying DOE that are based upon licensing fees are contained in all eight battery development contracts, but only one contract contains an additional repayment provision that is based upon the sale of batteries. 
The extent to which the USABC or its battery development contractors will choose to grant licenses to third-party manufacturers is uncertain. Consequently, any potential revenue from license fees and the corresponding amount of repayment DOE would receive could be fairly limited. A DOE official explained that DOE’s repayment terms were negotiated with USABC early in the program as part of the cooperative agreement. At that time, there were no formal requirements in place concerning what type of repayment DOE was expected to obtain. In lieu of formal guidance, the repayment provisions in the cooperative agreement were modeled upon similar provisions developed by DOE’s Clean Coal program, an earlier cooperative effort between DOE and industry. The program official also explained that the USABC partners’ repayment terms were negotiated later by the consortium during contract negotiations with battery developers. USABC negotiators were free to negotiate different terms from those that applied to DOE if they could convince the battery developers to agree. During our discussions with USABC’s management and contractors, we became aware of a potential for future litigation over patent rights between two USABC contractors. At issue is the interpretation of patents for a promising mid-term technology. The two contractors, Ovonic Battery Company and Saft America, Inc., are both working on nickel metal hydride batteries. Ovonic Battery Company, a small U.S.-based firm, holds a number of background patents for this technology that are based on work conducted before it contracted with USABC. Ovonic has sold licenses to other companies to use its technology in small consumer batteries. Earlier, Ovonic had charged that several Japanese electronics firms had violated its patents, and Ovonic filed patent infringement claims with the International Trade Commission. However, in December 1994, the dispute was amicably resolved, and Ovonic signed licensing agreements with these firms. 
In the case of the USABC program, Ovonic’s officials are concerned that if Saft America eventually produces a commercial nickel metal hydride battery, some of Ovonic’s patented technology may be used in this battery. They have stated that they might file suit against Saft America if this occurs and a satisfactory licensing agreement cannot be worked out. Saft America, Inc., is a U.S.-based subsidiary of a large French battery manufacturer and the largest manufacturer of nickel cadmium batteries in the United States. Saft officials maintain that the approach they are taking to nickel metal hydride technology is significantly different from that taken by Ovonic. Therefore, they believe it is possible that any battery they ultimately produce may not use technology patented by Ovonic. However, they also stated that if it does turn out that they use Ovonic’s technology, they are willing to pay Ovonic a reasonable licensing fee for such use. They believe that Ovonic’s existing licensing agreements will provide precedents for determining appropriate licensing fees. Thus they believe litigation will not be necessary. In the meantime, they point out that before commercial production begins, there are no restrictions on conducting research on already patented technologies. Therefore, they are free to use Ovonic’s technology in their experiments if they wish to do so. DOE and USABC officials are aware of this potential problem, but they do not expect a real problem to develop because of the differences in the technological approaches being taken by the two firms. Moreover, they believe that any dispute that may occur down the road can be resolved by negotiating a licensing agreement, thereby avoiding litigation. However, if litigation does occur, they believe the terms of the cooperative agreement and USABC’s contracts protect DOE and the consortium from any liability. 
Overall, they believe that this situation is unlikely to cause a delay in the availability of nickel metal hydride batteries in the United States. Both DOE and USABC believe that any risk involved in sponsoring both of these battery developers is outweighed by the increased chance of achieving a technological breakthrough. USABC has entered into eight contracts for the development of advanced battery technologies. Five of those involve mid-term technologies, and three involve long-term technologies. In addition to basic agreements on the dollar amount of the contract, the scope of work, schedules, and deliverables, the contracts generally include provisions on cost-sharing, ownership of intellectual property, the repayment of DOE and USABC funds, and domestic production. The contractors generally agreed to share a portion of the costs of conducting the research and development work. While the exact percentage was determined during negotiations and varies from one company to another, the average share was expected to be about 28 percent. Repayment provisions, under which developers are required to repay some or all of the money invested by USABC and DOE, are discussed in appendix II. Table IV.1 summarizes USABC’s mid-term contracts. Of the five mid-term contractors, Ovonic Battery Company and Saft America, Inc., were among the earliest to sign contracts and continue to develop nickel metal hydride batteries. These contracts were signed during 1992. USABC believes both the competition and the varying approaches that result from having two contractors work on the same technology will increase the chances of success. In support of those two programs, Yardney is working on ways to develop a low-cost nickel electrode. In 1993, Silent Power received a contract to work on sodium sulfur batteries. In 1994, USABC announced that it had awarded a contract to partners Duracell, Inc., and Varta Batterie AG to develop lithium ion technology. 
This battery has the potential to eventually exceed the mid-term goals by a substantial margin, but because it does not have the potential to reach the long-term criteria, it is classified as a mid-term battery. Table IV.2 summarizes USABC’s long-term contracts. Of USABC’s three long-term contracts, the two with W.R. Grace and 3M involve lithium polymer technology. The two firms are taking somewhat different approaches to the same technology, and USABC hopes the competition between them will bring about rapid results. Both companies are working with partners. Grace heads a team that also includes Johnson Controls and a number of smaller participants. 3M is teamed with Hydro-Quebec, a Canadian utility that has worked extensively on lithium polymer technology. Meanwhile, Saft America, the only company with both a mid-term and long-term contract, is working on lithium iron disulfide (or, more specifically, lithium-aluminum iron disulfide) batteries. These batteries have very high energy potential but also present serious corrosion and life expectancy challenges because they operate at extremely high temperatures. USABC has entered into a series of cooperative research and development agreements with five of DOE’s national laboratories. In some cases, because the laboratories had experience in research and development of some of the technologies of interest to the consortium, it made sense to take advantage of their experience. In other cases, the laboratories had the equipment and expertise needed to conduct independent testing of battery hardware and give USABC consistent and objective test results on deliverables provided by a variety of battery developers. This testing capability has been useful in screening and selecting contractors for the program. It has also been valuable in assessing progress by the developers once they have been awarded contracts and have begun producing prototype hardware. Table IV.3 summarizes USABC’s CRADAs with the national laboratories. 
As the table shows, some laboratories do only testing or development work, but two (Argonne and Sandia) are involved in both types of activity. USABC established separate long-term and mid-term goals for advanced batteries, defined by a variety of criteria that measure critical battery characteristics such as power, durability, and cost. While all of the criteria are important for achieving viable advanced batteries, this appendix discusses five key criteria that the automobile companies believe are essential to offering EVs that will meet consumers’ needs. Table V.1 identifies the mid-term and long-term goals for each criterion. An explanation of the five criteria follows the table.

Specific power is a measure of the amount of power provided by a given battery mass. This goal is related to EV performance characteristics such as acceleration and hill-climbing ability.

Specific energy is a measure of the amount of total energy contained in a given battery mass. This goal is related to the crucial EV characteristic of driving range. Generally, the higher the specific energy of the battery, the more miles the vehicle will be able to travel between recharges.

Calendar life refers to the number of years a battery will last, irrespective of the number of times it is charged and recharged. This measure is important because a battery’s performance can deteriorate over time because of factors other than use. For example, the performance of batteries that operate at very high temperatures can be reduced by the corrosion that takes place as time passes.

Cycle life is a measure of the number of times a battery can be discharged and recharged before its performance deteriorates to unacceptable levels. This characteristic will determine how much an EV can be used before its battery pack needs replacement, and its impact on a battery’s life expectancy depends upon daily usage patterns.
For example, in a heavily used EV, a battery just meeting the mid-term goal of 600 cycles would last fewer than 2 years if it were discharged and recharged each day. On the other hand, DOE officials believe that actual EV usage patterns will require recharging only every 2 to 4 days, so that mid-term batteries would last much longer than 2 years.

Ultimate price is a measure of the cost EV manufacturers would pay per unit of energy once vehicle-sized battery packs are in large-scale production—at least 10,000 units annually. This criterion is critical to the automakers’ ability to offer EVs at prices that will make them competitive with conventional vehicles.

Table VI.1 shows the amounts of actual or anticipated appropriations since 1991 for DOE’s battery development programs. The portions of the total appropriations not allocated to USABC are used for several other purposes, including overhead expenses, preparation of reports, and research on critical battery technologies and other high-power storage devices in support of USABC contracts and/or the Partnership for a New Generation of Vehicles. The amounts in the table are based on information provided by DOE’s manager of the Electric and Hybrid Propulsion Division. The total amount that DOE plans to request for battery development in 1997 is unknown at this time. As table VI.1 shows, as of fiscal year 1995, DOE had received appropriations of nearly $96 million for USABC. Meanwhile, DOE’s share of the funds expended through March 1995 was approximately $61 million. Spending was heavier on mid-term contracts early in the program. However, the portion spent on long-term work has gradually increased as the long-term contracts get up to speed while the mid-term contracts near completion. Overall, USABC expects the pace of spending to accelerate this year, and therefore most of DOE’s accumulated appropriations will be paid out under the existing contracts and CRADAs during 1995.
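The cycle-life estimates quoted earlier for the 600-cycle mid-term goal follow from simple arithmetic. The sketch below assumes a battery is retired exactly when it reaches its rated cycle count, a simplification of real-world gradual capacity fade.

```python
# Worked version of the cycle-life estimates discussed above, assuming a
# battery is retired exactly when it exhausts its rated number of cycles.
CYCLE_LIFE = 600  # mid-term goal: full discharge/recharge cycles

def years_of_service(days_between_recharges):
    """Years until the rated cycle count is used up."""
    return CYCLE_LIFE * days_between_recharges / 365.0

heavy_use = years_of_service(1)   # one full cycle per day: under 2 years
light_lo = years_of_service(2)    # DOE's expected pattern: every 2 days
light_hi = years_of_service(4)    # ...or every 4 days
print(round(heavy_use, 1), round(light_lo, 1), round(light_hi, 1))
```

Under DOE's assumed usage pattern, the same 600-cycle battery lasts roughly two to four times as long as under daily cycling, which is the basis for the officials' more optimistic life expectancy.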
The objectives of the review were to determine (1) the progress that the United States Advanced Battery Consortium has made toward reaching its long-term and mid-term goals; (2) the funding that has been spent thus far and the additional amounts, if any, that will be needed; and (3) DOE’s role in managing the USABC. To address these objectives, we conducted extensive interviews with officials of DOE, USABC, the Electric Power Research Institute, USABC contractors, national laboratories, and several other interested parties outside of USABC. We also obtained and reviewed pertinent documents from these sources, as discussed below. The following list identifies the agencies and organizations contacted. Department of Energy: program management officials in DOE’s Electric and Hybrid Propulsion Division; other DOE headquarters officials responsible for procurement and patent issues; contracting officials and legal counsel in DOE’s Chicago Operations Office; and representatives of DOE’s Argonne Area Office, which oversees the work of DOE’s Argonne National Laboratory. Officials of the USABC Management Committee, including the chairman and treasurer. Legal counsel for USABC. Four USABC program managers who are responsible for managing specific battery development contracts and/or CRADAs and are also members of USABC’s Technical Advisory Committee. An official of the Electric Power Research Institute who represents the electric utilities on the USABC Management Committee. Officials of two battery firms—Ovonic Battery Company and Saft America, Inc.—which are conducting research and development under contracts with USABC. Officials of the Argonne National Laboratory and the National Renewable Energy Laboratory, which are conducting battery research and testing under CRADAs with USABC. A representative of an independent producer of EVs, U.S. Electricar, which converts several conventional vehicles, including those of the big three automobile companies, to electric drive.
- A representative of the Advanced Lead-Acid Battery Consortium, which sponsors research and development of advanced lead-acid batteries.
- An official of Electrosource, Inc., a major producer of advanced lead-acid batteries for EVs.
- Officials of the Advanced Research Projects Agency, which sponsors research on advanced lead-acid batteries for EVs and demonstrations of EVs using those batteries.
- An official of the Office of Naval Research, which sponsors research on battery technologies for military applications.

To determine the progress that USABC had made toward reaching its long-term and mid-term goals, we interviewed DOE, USABC, and national laboratory officials to discuss the status of work in developing the battery technologies and their expected completion dates and reviewed applicable progress reports. We also met with two battery development firms—Ovonic Battery Company and Saft America, Inc.—to discuss their progress to date. These two firms were selected because they were the first to sign contracts with USABC and appeared to have made the greatest progress in developing a mid-term battery. In addition, Saft is developing a long-term battery. To address the objective on funding issues, we interviewed DOE and USABC officials to discuss the funds appropriated and allocated to the development of advanced batteries, the expenditures to date, and the need for additional funds to complete the work. We reviewed pertinent program budgets, appropriation documents, and expenditure reports. To address the objective on management issues, we interviewed many of the previously listed officials to discuss DOE’s roles and responsibilities in relation to the other consortium members and DOE’s procedures, processes, and actions taken to oversee the management of federal funds and the scope of the work being carried out to develop advanced batteries.
We reviewed the cooperative agreement—which specifies the roles and responsibilities, organizational structure, funding and cost-sharing, and rights to technology when developed, of DOE and the participating industry groups—and we reviewed the provisions of battery development contracts dealing with the repayment of federal funds. We also reviewed the management issues identified in the document entitled Lessons Learned Under the USABC. In addition, we reviewed an independent public accounting report that looked into the cost controls of one of the battery developers.
Pursuant to a congressional request, GAO reviewed the development of advanced batteries for electric vehicles, focusing on the: (1) progress that the United States Advanced Battery Consortium (USABC) has made in reaching its long-term and mid-term research goals; (2) funding that has been spent as of fiscal year 1995 and the additional amounts, if any, that will be needed; and (3) Department of Energy's (DOE) role in managing the consortium. GAO found that: (1) the consortium's long-term goal is to develop a battery that allows electric vehicles to compete fully with gasoline-powered vehicles in terms of performance and cost, but the feasibility of such a battery has not been demonstrated; (2) USABC is developing a mid-term battery that allows an electric vehicle to travel at least 100 miles under real world conditions; (3) electric vehicles using the mid-term battery would not likely achieve much commercial success, due to the battery's high costs and low driving range; (4) the consortium's budget for 1991 through 1995 was $262 million, but USABC spent only $123 million through March 1995, because of technical problems and delays in negotiating agreements; (5) the original USABC budget should sustain USABC's initial research efforts through 1997, after which USABC will seek $38 million from DOE to complete the development of batteries meeting its long-term goals; (6) DOE reviews and approves USABC contracts and agreements with battery developers and national laboratories and participates in USABC management and technical committees; and (7) DOE plans an audit of USABC and will require USABC to conduct close-out audits of the individual battery developers.
The insular areas of American Samoa, the CNMI, and Guam are located in the Pacific Ocean, some 4,100 to 6,000 miles from the U.S. mainland (see fig. 1). The USVI is located about 1,000 miles southeast of Miami in the Caribbean Sea. American Samoa, which had a population of 65,628 in 2009, lies about 2,600 miles southwest of Hawaii and consists of seven islands covering a land area of 76 square miles. The main island of Tutuila has very little level land and is mostly rugged. Agricultural production on the island is limited by the scarcity of arable land, and tourism is impaired by the island’s remote location and lack of tourist-rated facilities. Most of American Samoa’s economic activity—primarily tuna canning—and government operations take place on Tutuila in the Pago Pago Bay area. In September 2009, one of American Samoa’s two canneries closed operations. The CNMI—a group of 14 islands with a total land area of 183 square miles—is located in the western Pacific Ocean, just north of Guam and 5,500 miles from the U.S. mainland. Most of the CNMI’s population—51,484 in 2009—resides on the island of Saipan, with additional residents on the islands of Rota and Tinian. Historically, the CNMI’s economy has depended on garment manufacturing and tourism. Beginning in 1998, garment industry shipments began falling, and the last garment factory closed in early 2009. Guam is located about 50 miles south of the southernmost island of the CNMI. It has long been a strategic location for the U.S. military, which currently controls about 62 square miles of the island’s total 212 square miles. By 2020, the Department of Defense plans to increase the U.S. military presence on Guam by more than two-and-a-half times the island’s current military population of 15,000. In July 2009, the total population of the island was estimated at 178,430. The USVI is composed of three main islands—St. Croix, St. John, and St. Thomas—and many other surrounding islands.
Most of the insular area’s population (estimated at 109,825 in July 2009) resides in St. Thomas and St. Croix. The USVI’s economy is more diversified than those of the other insular areas, with tourism as the primary activity, followed by manufacturing, including petroleum refining, rum distilling, and textiles. While the United States exercises sovereignty over these insular areas, each administers its local government functions through a popularly elected governor. American Samoa and the CNMI are self-governed under locally adopted constitutions, while Guam and the USVI have not adopted local constitutions and remain under organic acts approved by Congress. These insular areas receive hundreds of millions of dollars in federal grants from a variety of federal agencies, including the Departments of Agriculture, Education, Health and Human Services, Homeland Security, the Interior, Labor, and Transportation. The Secretary of the Interior has administrative responsibility over the insular areas for all matters that do not fall within the program responsibility of another federal department or agency. OIA, established in 1995, is responsible for carrying out the Secretary’s responsibilities for U.S. insular areas. OIA’s mission is to promote the self-sufficiency of the insular areas by providing financial and technical assistance, encouraging private sector economic development, promoting sound financial management practices in the insular governments, and increasing federal responsiveness to the unique needs of the island communities. Much of the assistance that OIA administers to insular areas is in the form of what it considers mandatory assistance, including compact assistance, permanent payments to U.S. territories, American Samoa operations funding, and capital improvement project grants. OIA also administers discretionary assistance through, for example, technical assistance grants and operations and maintenance improvement program grants.
The administration and management of OIA grants is guided by OIA’s Financial Assistance Manual. OIA grants other than compact assistance are subject to Interior’s Grants Management Common Rule, relevant Office of Management and Budget (OMB) circulars, and specific terms and conditions that OIA outlines in each grant agreement, such as semiannual narrative and financial reporting and grant expiration dates. Within OIA, two divisions are largely responsible for grant administration and management—the Budget and Grants Management Division and the Technical Assistance Division. The Budget and Grants Management Division, which covers capital improvement project and operations and maintenance improvement program grants, has a director and three grant managers. The Technical Assistance Division, which administers several types of technical assistance, has a director and two grant managers. A third OIA division—the Policy and Liaison Division—also provides some staff for grant-related tasks, including staff that focus on OIA’s accountability and audit responsibilities. The majority of OIA’s budget is directed to compact assistance and permanent fiscal payments (see table 1). About 2 percent of OIA’s budget is dedicated to administrative costs, leaving less than 16 percent for noncompact grants and technical assistance. On the basis of our review of grant files from a random probability sample of grant projects, we determined that previously reported internal control weaknesses still exist and estimate that 39 percent of the 1,771 grant projects in OIA’s grant management database demonstrate at least one internal control weakness that may increase the projects’ susceptibility to mismanagement. The eight internal control weaknesses we assessed can be grouped into three categories based on the entity responsible for the action: grant recipient actions, OIA grant management actions, or joint actions between grant recipients and OIA. 
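The scale implied by this sample-based estimate is straightforward to work out. The following is an illustrative calculation only: the 1,771-project universe and the 39 percent estimate are from our file review; the arithmetic translating the rate into an approximate project count is ours.

```python
# Translate the sample-based estimate into an approximate project count.
total_projects = 1771   # grant projects in OIA's grant management database
weakness_rate = 0.39    # estimated share with at least one internal control weakness

projects_with_weakness = round(total_projects * weakness_rate)  # 691
```

In other words, roughly 690 of the projects in OIA's database would be expected to exhibit at least one of the eight weaknesses, subject to the sampling error inherent in an estimate from a random probability sample.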
As shown in figure 2, the internal control weaknesses we identified were most often associated with grant recipient activities, followed by joint activities and OIA grant management activities. We also determined how frequently each of the eight internal control weaknesses was found among OIA grant projects in the database (see table 2). Internal control weaknesses associated with grant recipient activities were the most common internal control weaknesses we found, accounting for 62 percent of the weaknesses exhibited by OIA grant projects. By accepting a grant, recipients agree to a set of terms and conditions that are part of OIA’s internal controls. We assessed grant recipients’ consistency in meeting requirements of the grant terms and conditions by examining project files for adherence to four key requirements that we determined are relevant to a grant management program: semiannual financial and narrative reporting, project close-out reporting, grant expiration dates, and reimbursable funding. For example, grant recipients are required to submit regular financial and status reports to OIA within a set time frame. We found that recipients in 60 percent of grant projects with semiannual reporting requirements did not submit these reports as required. In many cases, reports were submitted after the deadline had passed, but in some cases, reports were never submitted. These financial and narrative status reports are a key monitoring tool for OIA grant managers, and incomplete information can hinder OIA’s ability to identify and address any issues. In addition, we found that recipients of 58 percent of grant projects failed to submit final reports (or project close-out reports) on time. Final financial and narrative reports are required to be submitted within 90 days of grant expiration or project termination; failure to do so can delay the deobligation of any unspent grant funds from the project account.
We also found that recipients of 19 percent of grant projects expected to complete, or actually did complete, the project after the grant expiration date. Grant terms and conditions state clearly that grant funds are only available until the grant expiration date, and the grant recipient should not continue to spend federal funds after that date. It is important to note, however, that our assessment relied on grant expiration dates as reflected in the file or database; project extensions may have been granted but not recorded. Nevertheless, situations where the grant expiration date has been or will be breached may be an indication of poor initial planning or problems with the grant project. Finally, we compared the proportion of awarded funds that had been disbursed to grant recipients with the progress made toward project completion to ensure that OIA grant recipients were requesting funds on a reimbursement basis, as required. We found that all open projects satisfied the reimbursement requirement. Of the joint activities between OIA and grant recipients, project redirection, whereby grant funds may be moved between projects, can contribute to increased susceptibility to mismanagement. This practice, which OIA refers to as “reprogramming,” accounted for 24 percent of the overall internal control weaknesses that we found in OIA grant projects. However, the presence of project redirection is not, in and of itself, an indication of weakness. As we will describe later in this report, project redirection can be used as a tool to improve timely use of federal funds and expedite project completion. However, if not used appropriately, project redirection can also impede project completion and contribute to wasted funds or prolonged holding of grant funds.
Specifically, if project redirection is approved in cases where an insular area starts a project and expends funds, but then wants to redirect the funds to another project without completing the initial project, those expended funds may be wasted. In addition, frequent project redirection can result in projects that are started but do not have sufficient funds to be completed. Based on our review, project redirection occurred in 30 percent of applicable grant projects. Grant recipients generally initiate project redirection through a request to OIA. OIA policy requires that grant recipients must obtain written approval from OIA before any funds may be moved between grant projects and that technical assistance project funds may not be redirected. We did not identify any cases where these requirements were not met. For OIA grant management activities, we found that internal control weaknesses accounted for 14 percent of the overall internal control weaknesses that we identified for OIA grant projects. We assessed OIA’s consistency in following its recordkeeping, monitoring, oversight, and close-out procedures, each of which helps OIA ensure that grant funds are being used as intended, in accordance with relevant laws and regulations, and that the projects will achieve the planned results. While OIA generally follows its close-out procedures, we identified a number of concerns with OIA’s recordkeeping and monitoring and oversight activities: OIA grant managers generally use OIA’s internal grant management database as a monitoring tool to track key information about grants they oversee; they also use the database to create reports by insular area and grant type to respond to inquiries from Congress and others. However, for 41 percent of OIA grants in the database, we found that the database contains at least one piece of information that does not match corresponding information in the grant file.
For example, among applicable grant projects, the grant expiration date was the element most often improperly recorded in the database. We also found cases where individual fund drawdowns were not entered into the database in a timely manner. Such inconsistent and inaccurate data can limit the ability of OIA grant managers to efficiently and effectively monitor whether grant projects are being completed on time and within budget and may increase the susceptibility of these projects to mismanagement. OIA officials explained that procedures for entering some data elements into the database—such as the date the grant was awarded and redirected funds—have changed over time and may have accounted for some of the inconsistent data we found. Some OIA grant managers reported using tools other than the database to track grant progress. For example, capital improvement project grant managers use their own spreadsheets to track information such as grant expiration dates, grant status, and when reports were last received. However, reliance on such informal systems can also introduce internal control risk because they are not subject to policies, procedures, or internal controls to ensure the information maintained in them is accurate. OIA also monitors capital improvement grant projects through site visits and related oversight activities. OIA field representatives stationed in two insular areas—American Samoa and the CNMI—can assist grant managers in headquarters with grant monitoring and oversight by conducting site visits of ongoing projects and advising headquarters staff of any issues that may arise. It is important to note that while the field representative in the CNMI has official grant management responsibilities, the field representative in American Samoa works for OIA’s policy division and has no formal grant management responsibilities.
However, the American Samoa representative estimates that she spends nearly half of her time addressing issues relating to capital improvement grant projects. Although these resources are available for oversight, we found that 10 percent of capital improvement grant projects in these insular areas were visited more often by headquarters grant managers than by field representatives, which raises some concerns about the effectiveness of having field representatives in these areas. However, OIA field representatives told us that they have informal interactions with project managers that are not captured by site visit reports, and our review of grant files indicates that field representative reports may not be submitted to headquarters or included in the grant files. This inconsistent transmission of site visit reports is contrary to OIA policy and has the potential to impact the amount of information that headquarters oversight staff have about the status of various grant projects. OIA follows close-out procedures once grant projects are complete. Inconsistent application of these procedures was one of the internal control weaknesses we assessed. We found that unexpended grant funds were properly deobligated from project accounts for all closed grant projects in the database. Insular areas confront both project planning and management challenges in implementing OIA grant projects as a result of decisions made by the insular area governments and external factors. Project planning challenges include frequently changing local government decisions, natural disaster impacts, and other factors. Project management challenges include issues such as a limited local capacity for implementing OIA projects and poor contractor performance. External factors include issues such as declining economic conditions and various U.S. policies. Officials in all four of the insular areas we visited reported facing some of these challenges.
These challenges were most often noted for the capital improvement projects in American Samoa and the CNMI that we reviewed. While some of these challenges, which influence the insular areas’ abilities to effectively complete OIA grant projects, are beyond their control, others can be overcome. Figure 3 summarizes our analysis of the project planning and project management challenges experienced by insular areas for the 24 selected grant projects. Many of the challenges that can contribute to OIA grant project delays stem from local government project planning decisions: Frequently changing priorities. Some insular area governments regularly shift priorities and frequently redirect grant project funds. While some changes in priorities are to be expected, when priorities change frequently, it can lead to project delays and wasted resources. Of the four insular areas we reviewed, only American Samoa has and adheres to a master plan that lists planned capital improvement projects and categorizes them into one of three priority areas. In contrast, as of December 2009, OIA reported that the CNMI did not have a master plan for its capital improvement projects or established priority areas. Without established local government priorities, frequent priority shifts can more easily occur that affect which projects are pursued, and in turn, grant funds can be more frequently redirected between projects with widely different goals, often leading to project delays or incomplete projects. For example, in 2005 the CNMI government shifted OIA funds from a project, originally funded in 2004, to update a Tinian school building to a project to develop a wastewater system; then, in 2007, funds were again shifted from the incomplete wastewater project to a project to develop a Tinian airport instrument landing system, which has since been suspended.
Both OIA and the CNMI government have acknowledged that establishing and enforcing a master plan would help guide priorities for capital improvement project funding. OIA has taken steps to encourage the development of a CNMI infrastructure master plan, and progress has been made with the submittal of a budget and scope of work by the U.S. Army Corps of Engineers; however, as of January 2010, the CNMI had not yet fully identified funding sources for the plan’s completion. In contrast, OIA and American Samoa use the territory’s master plan to help guide which projects should be funded and which project redirection requests should be approved. As a result of this long-term planning, we found that projects in American Samoa generally do not experience delays due to project redirection and that funds are redirected in a way that aids project completion. According to OIA, project redirection generally occurs within a priority area and between projects listed in the master plan. For example, funds were redirected from the Petesa-Happy Valley Village Road project, where the project was facing delays in getting access to necessary lands, to the Taputimu Village Road project, which was then able to be completed in February 2008. In addition, American Samoa replaced the redirected sum with an equal amount from a later fiscal year’s funding for the Taputimu Village Road project. OIA officials attributed much of American Samoa’s success in using project redirection effectively to the insular area’s leadership. Natural disaster impacts. Natural disasters are unexpected challenges that are beyond insular area governments’ control; however, local government project planning decisions can mitigate some of their effects. Frequently occurring natural disasters such as typhoons, cyclones, and hurricanes can have a significant impact on the condition of an insular area’s economy, health, and physical infrastructure.
Recovering from such disasters can demand that a considerable amount of local and federal resources be directed to immediate disaster recovery efforts rather than to long-term or current economic and infrastructure development. For example, as of January 2010, the U.S. Department of Homeland Security’s Federal Emergency Management Agency had disbursed roughly $22 million for individual and household assistance to American Samoa victims of the September 2009 tsunami. In addition, the agency estimates damage to American Samoa’s public infrastructure at roughly $80 million. Similarly, Interior reported that the combined economic costs to the USVI for damage caused by Hurricanes Hugo in 1989 and Marilyn in 1995 ranged from $3 billion to $4 billion. The shifting of both local and federal efforts and resources to repair these damages can contribute to challenges in project planning and implementation. Insular areas’ remote locations and limited natural resources can further exacerbate the effects of natural disasters by increasing the costs and time required for reconstruction. These factors can cause project delays and contribute to OIA project budget increases because of the difficulty in estimating fluctuating material and fuel costs. Limited land access. Local governments’ project planning decisions regarding how to proceed when land access issues arise can also result in some grant project delays. Limited access to land was cited by some insular area officials, specifically in American Samoa and the CNMI, as a challenge they face in completing OIA capital improvement project grants. For example, in the American Samoa Petesa Happy Valley Road capital improvement project, land access has been a major contributor to delays since the project was initially funded in 2003. Although both of the insular area governments have the power of eminent domain over their land, that authority has not always been asserted.
According to some American Samoa and CNMI officials, communities sometimes resist government land acquisition efforts, which can lead to project delays. When land access issues affect project implementation, local governments can choose to address the issue by enforcing their authority or by funding other projects instead. Lack of local operations and maintenance funding. Another project planning challenge identified by CNMI officials is a lack of local operations and maintenance funding—which in part is a result of the local government’s decision not to prioritize operations and maintenance activities for local funds and not to use a portion of OIA funds for operations and maintenance of OIA grant projects. For example, according to the CNMI Lieutenant Governor, the CNMI government does not provide funding specifically dedicated to operations or maintenance of its infrastructure, including OIA capital improvement projects. However, we believe that it is significantly more cost effective to perform preventative maintenance than to perform repairs. Further, if agencies perform maintenance only on a reactive basis, then the critical services they provide can be disrupted. For example, the CNMI experienced intermittent electricity blackouts from 2006 to 2008, which were in part caused by aging power generators that had not been properly maintained. This crisis management approach can be disruptive to ongoing projects because critical services may not be available as planned and local government resources and contractors may be diverted to address the crisis. According to OIA, because capital improvement project funding awarded to the CNMI was required to have a significant local match, it was more challenging for the CNMI to dedicate adequate additional funds for project operations and maintenance. However, since fiscal year 2005, this match has not been required.
In addition, according to OIA, operation costs in the CNMI are not eligible for capital improvement funds, but in 2008 and 2009, the office provided the CNMI with pilot grants of $350,000 for maintenance. According to the OIA grant manager for capital improvement projects in the CNMI, the goal of the pilot grants was to ensure that the CNMI would spend the funds on maintenance if OIA provided them. OIA reported that the CNMI had spent the 2008 funds on maintenance and that if the 2009 funds were similarly spent, the next step would be to regularly provide the CNMI with a percentage of each grant’s funding specifically for maintenance. In contrast, the American Samoa government sets aside 5 percent of its OIA capital improvement project grant funds for maintenance. The American Samoa government also provides a 100 percent match to all OIA funds directed to maintenance. This maintenance set-aside program requires specific plans from the local government for the use of the money, as well as reporting procedures to account for this fund. Several project management challenges, including the following, also limit the ability of some insular areas to manage and implement OIA grants: Limited local capacity for OIA project implementation. Some insular area officials reported that they face a shortage of skilled workers and limited opportunities for training and education in disciplines such as grant management. Insular governments have access to training funds—for example, through OIA technical assistance grants. However, as we previously reported, because citizens of insular areas are free to migrate to the United States, it is difficult to retain highly educated or skilled workers. Further, in the CNMI, several officials reported a shortage of funding for staff, although they did not provide us with data to quantify this issue.
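American Samoa's set-aside and match arrangement can be illustrated with a hypothetical grant amount. The 5 percent set-aside and the 100 percent local match come from the report; the $10 million grant figure is invented purely for illustration.

```python
# Hypothetical $10 million OIA capital improvement grant to American Samoa
# (illustrative amount; the percentages are the arrangement described above).
grant = 10_000_000
set_aside = grant * 0.05        # 5 percent of OIA funds reserved for maintenance
local_match = 1.00 * set_aside  # territory matches the set-aside dollar for dollar

maintenance_pool = set_aside + local_match  # roughly $1 million for maintenance
```

The effect of the 100 percent match is to double every maintenance dollar that OIA funds provide, without reducing the share of OIA funds available for construction below 95 percent.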
This concentrates key knowledge in a few individuals and can cause high staff burnout rates, increasing the likelihood that projects will encounter delays if these staff leave before projects are completed. In the future, access to low-cost foreign labor in the CNMI and American Samoa could change due to rising minimum wages. In addition, the CNMI’s access could also be affected by the transition to the U.S. immigration system, which began November 28, 2009, under recent legislation. The effect of the legislation’s implementation on the CNMI’s foreign labor pool will largely depend on various U.S. government agency decisions regarding the provision of foreign worker temporary permits, as we reported in March 2008. Contractor issues. Some insular area officials also attributed project delays to poor performing contractors. In several of the American Samoa and CNMI projects we reviewed, insular officials identified poor contractor performance as a significant cause of project delays and cost increases. In some cases, contracts had to be canceled and, in other cases, insular area project managers had to either redesign or expand the project. Poor contractor performance is a particular concern when insular area governors declare a state of emergency. In the CNMI in particular, along with an emergency declaration, a governor may waive local standard procurement regulations. In bypassing the standard procurement regulations, the government increases the likelihood that poor performing contractors are hired. For example, during the implementation of the CNMI’s power plant rehabilitation project, the Governor declared a state of emergency to address the plant engines’ inability to provide necessary power to the island, which was caused by wear and inadequate maintenance.
During the state of emergency, the Commonwealth Utilities Corporation hired a contractor that performed poorly, causing the agency to cancel the contract and delay the project, which was approved in 2007 and was not yet completed at the time of our visit in September 2009 (see app. II for more information). To address the issue of contractor performance, CNMI regulations require contractors to carry payment and performance bonds—whereby payment is ensured for all employees, subcontractors, and suppliers involved in a project, and monetary reparations will be made in the event of contractor nonperformance—for construction projects in excess of $25,000. For example, the CNMI recently imposed a $17,000 damage claim and initiated debarment proceedings against a delinquent contractor in an OIA Commonwealth Health Center project, according to the CNMI Capital Improvement Project Administrator. According to the OIA grant manager responsible for capital improvement project grants in American Samoa, American Samoa regulations require performance bonds for contracts over $100,000; USVI regulations allow but do not mandate the government to require performance bonds; and Guam generally requires 100 percent Surety Performance Bonds, but some exceptions have been made to allow only 50 percent. Varying effectiveness of central grant management offices. American Samoa, the CNMI, and the USVI have central grant management agencies that are similarly structured and act as a liaison between OIA and the local government agencies receiving grants. However, the effectiveness of the central grant management agencies varies, in part based on the capacity of their staff. In the CNMI, OIA capital improvement project grants are largely administered through the Governor’s Capital Improvement Project office. OIA officials noted that the CNMI central management office is more effective, particularly in comparison to American Samoa’s central grant management agencies. 
The OIA grant manager who works with the CNMI office said that the staff are essential to her efforts to monitor ongoing projects; however, a CNMI office representative reported concerns about limited resources. A key position, the Capital Improvement Project Administrator, is appointed by the CNMI Governor, which makes the position subject to change when local government administrations change. Another key position, the Capital Improvement Project Contracting Officer, is an OIA-funded contract employee, which means the position could be eliminated if funding is not continued. To this end, OIA has taken action, including awarding funds specifically for capital improvement project administration, to ensure that the office is adequately staffed to manage projects. In American Samoa, the Territorial Office of Fiscal Reform and the Capital Improvement Project Committee are primarily responsible for OIA grant administration. OIA and American Samoa officials reported that this central grant management arrangement has contributed to project delays. For example, the Director of the Territorial Office of Fiscal Reform, who is also the Chairman of the Capital Improvement Project Committee, is responsible for overseeing efforts to adhere to American Samoa’s fiscal reform plan. In addition, he reviews and approves all of the projects that go through the Capital Improvement Project Committee, such as OIA grant project plans and approvals. During the last 3 years, however, this official has been absent from the island but has retained his responsibilities and has not delegated them to anyone else, according to OIA and American Samoa officials. As a result of his absence and the lack of delegation, implementation of capital improvement projects has been delayed and the committee has become ineffective, according to the officials we spoke with. In the USVI, the local Office of Management and Budget is primarily responsible for OIA grant administration. 
OIA officials told us that the agency can be effective but can also delay the project administration process, including financial and status report submissions, because it does not always expeditiously provide OIA with reports submitted by the agencies receiving grants. A local government official told us that the USVI Office of Management and Budget has become more effective in its administration of OIA grants over the past few years. Limited local auditing agency resources. Insular area governments have not prioritized oversight of OIA grant projects through local auditing agencies, which may contribute to the potential for project fraud, waste, abuse, and mismanagement by both the agencies receiving the grants and contractors. For example, the CNMI’s local audit agency has not reviewed any federal grants for several years and the position of the Territorial Auditor in American Samoa was vacant from 2005 until the fall of 2009. The following external factors, some of which we have previously reported on, can also contribute to project delays and inefficiencies: Declining economic conditions. Although all insular areas we reviewed face serious economic challenges, the CNMI and American Samoa face particularly difficult obstacles as a result of their dependence on a few key industries. If economic conditions further destabilize, the CNMI and American Samoa could face difficulties in funding local government agencies and the administrative costs required for OIA grant project implementation, as can be seen in the following examples: The CNMI relied mainly on two industries for its economic prosperity— garment manufacturing and tourism—until early 2009 when the last of its garment factories closed. Together both industries had accounted for 85 percent of the CNMI’s economic activity. The CNMI now relies largely on its tourism industry to support its economy, which is a volatile industry and is susceptible to both local and global crises. 
American Samoa’s economy depends primarily on the tuna canning industry, which recently endured two major setbacks—the closure of one tuna cannery and a significant reduction in workforce at the other cannery. American Samoa also experienced a tsunami in the fall of 2009, which caused considerable damage. U.S. government policies. Several recent changes in U.S. government policies are also likely to contribute to OIA project implementation challenges. First, as mentioned previously, the federal minimum wages in American Samoa and the CNMI began rising in 2007 and will continue to do so until they equal the U.S. minimum wage. This may increase the cost of OIA-funded projects. Additionally, if the economies falter and local revenues fall, OIA grants still requiring a local government match may face delays or noncompletion as they become increasingly expensive for the local government agencies to fund. Second, in response to U.S. legislation, the CNMI’s immigration system was federalized on November 28, 2009. Accordingly, some of the foreign workers that made up a majority of the workforce, as of 2005, may not be able to reside in the CNMI in the future. For those OIA grant implementing agencies that employ foreign workers, it is possible that the departure of these workers could disrupt project progress and basic service provision. Third, the planned U.S. military buildup on Guam is expected to challenge the island’s infrastructure. According to a recent GAO report, the U.S. Department of Defense is expected to relocate 8,000 Marines and their estimated 9,000 dependents from Okinawa, Japan, to Guam by 2014, and also plans to expand the capabilities and presence of Navy, Air Force, and Army forces on Guam. As a result, the military population, including dependents, on Guam is expected to grow by over 160 percent, from its current population of about 15,000 to over 39,000 by 2020. 
The Guam government has not yet identified a strategy to expand the roads, power, water, wastewater, and solid waste systems to accommodate this population increase. Furthermore, according to the OIA Budget and Grants Management Division Director, the infrastructure development will require significant labor; thus, contractor availability may be affected for many other U.S. insular areas. OIA has taken several important steps to improve grant project implementation and management but faces several obstacles in its efforts to compel insular areas to complete their projects in a timely and effective manner. Over the past 5 years, OIA has taken steps to improve project implementation and management, including implementing a competitive allocation system that establishes incentives for insular areas to make financial management improvements and complete projects; establishing grant expiration dates; and taking steps to improve administrative continuity in insular areas. Specifically, OIA has taken steps in the following areas: Competitive allocation system. In fiscal year 2005, OIA implemented a new competitive allocation system for the $27.7 million in capital improvement project grants that it administers to the insular areas. This system provides incentives for financial management improvements and project completion by tying a portion of each insular area’s annual allocation to the insular governments’ efforts in these areas—such as their efforts to submit financial and status reports on time. Through this system, OIA scores each insular area against a set of performance-based criteria and increases allocations to those insular areas with higher scores, thereby lowering allocations to insular areas with lower scores. To date, the competitive allocation criteria have measured the insular governments’ abilities to exercise prudent financial management practices and to meet certain federal grant requirements. 
As described in OIA’s Budget Justification for fiscal year 2010, there are 10 competitive criteria, which include the extent to which the applicant is in general compliance with deadlines established under the Single Audit Act, has complied with all grant reporting requirements, and has properly functioning internal controls—including the presence of a qualified independent auditor, an adequately funded office, and strong safeguards to ensure the office’s independence. (See table 3 for a list of the 10 criteria and the insular areas’ scores for fiscal year 2010.) The criteria have had a positive impact on insular governments’ financial management practices. For example, although the four insular areas initially had trouble submitting their Single Audits in a timely manner, according to Interior’s fiscal year 2008 Annual Performance and Accountability Report, as of fiscal year 2006, each of the insular areas has been in compliance with the requirement for annual Single Audits. In September 2009, OIA announced it will add another criterion to the competitive allocation criteria for fiscal year 2011 allocations to encourage more efficient project completion and use of unspent funds. The new criterion will measure the rate at which territories expend funds over a 5-year period. According to OIA officials, this measure was largely added to address the roughly $52 million unspent capital improvement project fund balance that the CNMI currently carries, as well as the smaller but proportionally higher balance carried by the USVI (approximately $18 million) in comparison to American Samoa (approximately $20 million) and Guam (approximately $10 million). Grant expiration dates. Beginning in 2005, to encourage expeditious use of funds, OIA established 5-year expiration dates in the terms and conditions of new capital improvement project grants. 
Beginning in 2008, OIA also notified insular area officials of expiration dates for grant projects that had been ongoing for more than 5 years with no or limited progress. OIA officials explained that while the expiration dates have not yet pushed all of the insular areas to complete projects, they have encouraged some areas to do so. The officials also stated that the expiration dates have helped OIA grant managers administer and manage grants—which they believe has improved accountability—and have been useful for insular area grantees whose agencies have high staff turnover and were unaware of the status of older grants. Actions to improve insular area continuity. OIA has also taken steps to help with the continuity of grant administration at the insular level. For example, in March 2008, OIA awarded a $770,000 grant for capital improvement project administration in the CNMI, which provided funding for positions in the local central grant management office in that insular area. According to the grant manager for CNMI capital improvement projects, the grant was given to help ensure that the central grant management office had the staff necessary to help move implementation of projects forward. Among other positions, the grant funded three project manager positions; these managers have worked on three of the projects we discuss in appendix II—the Jose T. Villagomez Center for Public Health and Dialysis (Commonwealth Health Center dialysis facility), the Rota Health Center, and the Tinian Landfill Projects. Although the continuity of the office itself is vulnerable to changes in the CNMI’s administration, the grant manager stated that OIA is hopeful that in the long term, even if the insular area’s central grant management office does dissolve, OIA will have helped develop the capacity—in terms of knowledge and resources—that could go back to the local agencies for continued progress. 
Despite these efforts, some insular areas are still not completing their projects in a timely and effective manner, and OIA faces the following key obstacles in compelling them to do so: Lack of sanctions for delayed or inefficient projects. Current OIA grant procedures provide few sanctions for delayed or inefficient projects. For example, although OIA established grant expiration dates, they have little practical effect. In theory, a grant expiration date encourages timely completion of a project because if a project is not completed on time, the funds are taken away from the recipient. However, when an insular area’s OIA grant funds expire, the funds do not remain immediately available for the project, but the insular area does not lose them, because OIA treats its capital improvement project grants as mandatory funding with “no-year funds,” based on the agency’s interpretation of relevant laws. Thus, after a grant expires, OIA deobligates the funds and they are returned to the insular area’s capital improvement project account to be reobligated for the same or other projects. Along the same lines, OIA’s application of the competitive allocation criteria can reduce an insular area’s capital improvement project allocation if the insular area is not performing well, but reductions must stay within a range of $2 million below or above the baseline funding that has been established for each insular area. As indicated in OIA’s fiscal year 2010 budget justification, the office’s intention for the competitive allocation process is to allow the governments an opportunity to compete each year for a greater portion of the guaranteed funding rather than to signal declining performance. Recently, OIA has taken steps to identify possible solutions and actions that could help provide effective sanctions for insular areas that do not efficiently complete projects and expend funds. 
In doing so, OIA has faced uncertainty regarding the authorities it has to change its current policies and practices, which are guided by many special agreements, laws, and regulations. Accordingly, OIA has sought an Interior Solicitor’s opinion on a few discrete issues regarding its authority to take different actions when projects are not completed, grant funds expire, or insular areas sustain large balances of unexpended funds. In response, an attorney with the Solicitor’s office orally advised OIA that it did not have the authority to reallocate funds away from the insular areas whose funds expire. However, the attorney acknowledged to us that this advice was not based on a comprehensive review of all potentially relevant sources of law and that there are still some unresolved questions. For example, recent appropriations acts have appropriated funds to OIA for capital improvement project funding for American Samoa, the CNMI, Guam, and the USVI on the condition that these funds are provided according to the Agreement of the Special Representatives on Future United States Financial Assistance for the Northern Mariana Islands approved by Public Law 104-134; however, this 1992 agreement has now expired. The new agreement, entered in 2004, has a different title—Section 702 Funding Agreement—and has not been approved in any law. Unless the reference to the now-expired 1992 agreement is read to mean the 2004 agreement, OIA may have more discretion with respect to reallocation than it currently exercises. The Interior Solicitor indicated this legal discrepancy has not been resolved and that some documents, such as appendices to the 2004 agreement and legislative history of recent appropriations acts, were not consulted. In addition, OIA is considering using a provision of the 2004 agreement that allows OIA to deviate from the baseline allocations under certain circumstances, including a substantial backlog of prior years’ unspent funds. 
Any such deviations under this provision, however, require the approval of Congress. The Interior Solicitor has not yet determined or advised OIA on how this approval requirement may be met. OIA resource constraints. OIA officials report that resource constraints impede effective project completion and proactive monitoring and oversight. Although they could not provide us with data, numerous officials in OIA asserted that heavy workloads are a key challenge in managing grants. The effects of insufficient resources vary across grant type but include impacts on the ability to maintain files, adopt a proactive oversight approach that could aid project completion, conduct more detailed financial reviews of projects, and conduct site visits to more projects to better ensure that mismanagement is detected. Importantly, although grant managers for capital improvement projects noted that the most effective action they can take to move projects along is to conduct site visits, they also asserted that their current workloads only afford one visit per year. Furthermore, the grant managers explained that the duration of the visits, and therefore the number of projects visited, is limited to ensure that they are able to meet their requirements and responsibilities at headquarters. In addition, the grant managers reported that their heavy workloads make it harder for them to take a proactive approach, including looking ahead to grants with impending expiration dates, reaching out to determine causes for delays, and taking earlier action to help insular areas move projects forward. Two of the insular areas—American Samoa and the CNMI—have OIA field representatives whose broad job descriptions include regular site visits to projects to monitor progress. These positions are intended to help ease the workload burden of headquarters grant managers. 
According to OIA officials, the field representative in American Samoa is effective and a critical contributor to OIA’s efforts to monitor projects. However, OIA officials noted that the American Samoa field representative formally works for OIA’s policy division and has many other roles and responsibilities to fulfill—including acting as a liaison between the American Samoa government and federal agencies, including, but not limited to, Interior—resulting in more work than they believe should be assigned to one person. In contrast, the CNMI has two field representatives, one of whom is specifically assigned to grants management; however, OIA officials believe that the field representative has not been as effective as the American Samoa representative. Resource constraints also limit OIA’s efforts to assist insular areas in responding to Single Audit report findings, which can help address issues that may lead to mismanagement or ineffective project implementation. Currently, only one OIA auditor works with insular areas to ensure they respond to Single Audit report findings and has numerous other responsibilities, including responding to other external audits and conducting reviews of grant managers’ project files. The auditor explained that when the insular areas were delinquent in complying with Single Audit reporting requirements, the workload was manageable. Now that OIA has taken steps to help improve the timeliness of these reports and each of the insular areas is complying and providing a timely report to meet the Single Audit requirements, OIA officials believe that the workload associated with assisting insular areas in responding to the findings is significantly larger than one auditor can handle. The support that Interior’s Office of Inspector General provides generally does not reduce OIA’s oversight workloads. 
According to the OIA officials we spoke with, currently, Interior’s Office of Inspector General does not typically provide much oversight support on individual grant concerns; rather, the Inspector General’s Office of Audits, Inspections, and Evaluations and its Office of Investigations focus their efforts on higher-priority issues that cover a broader spectrum and pertain to more significant instances of misconduct. Historically, the predecessor to OIA—the Office of Territorial and International Affairs—received oversight support from federal comptrollers located in American Samoa, the CNMI, Guam, and the USVI. For example, in fiscal year 1982, 44 full-time positions in the federal comptroller offices—36 of which were professional audit staff—were responsible for auditing the territorial governments. Then, in 1982, legislation transferred responsibility for audits from the federal comptrollers to Interior’s Office of Inspector General in an effort to improve independence in the audit oversight of the insular governments. Staff in the Office of Inspector General’s regional offices became responsible for performing the functions of the insular area comptrollers by conducting audits of property, receipts, revenues, and expenditures. The Office of Inspector General initially had insular field offices in American Samoa, the CNMI, Guam, and the USVI. However, by 2002, all but the USVI office had closed, despite concerns that the move away from the territories might make it more difficult to provide a satisfactory level of oversight. When the last of the Pacific insular area offices closed, the Office of Inspector General opened its Honolulu field office. 
According to the Office of Inspector General’s Semiannual Report to the Congress in April 2003, the Guam office was moved to Honolulu in an effort to expand the audit and investigation coverage of the department and to address the long-standing challenges facing insular area governments as a whole, while still maintaining an effective presence. Over time, these changes, and the need for the Office of Inspector General to prioritize its resources on broader management issues and more significant cases of misconduct, have reduced some of the oversight support available to OIA on individual grants. According to OIA officials, the responsibility for detailed audits of OIA grants currently falls primarily upon the external auditors conducting Single Audits and the one OIA auditor responsible for following up on the results of those audits. Moreover, because Single Audits are by design risk-based and sample from all federal grants—not just Interior’s OIA grants—to a given insular area, they cannot provide comprehensive coverage for every program or transaction. Despite their concurrence that additional resources are needed, OIA division directors confirmed that they have not formally communicated these needs to decision makers or higher levels within Interior, and have not developed a workforce plan or other formal process that demonstrates a need for additional resources. Moreover, OIA does not track workload measures, such as the number of grants handled by each grant manager, to show changes over time that would help justify the need for additional resources. Interior’s own Workforce Planning Instruction Manual emphasizes that workforce planning is a fundamental tool, critical to quality performance that will contribute to the achievement of program objectives by providing a basis for justifying budget allocation and workload staffing levels. 
As we have previously reported, it is important for agencies to determine the critical skills and competencies that will be needed to achieve current and future programmatic results through workforce planning, and in doing so, it is important to involve agency managers, supervisors, and staff to ensure that the agency understands the need for and benefits of the workforce plan. Inconsistent and insufficiently documented project redirection policies. OIA’s current project redirection approval practices do little to discourage insular areas from redirecting project funds in ways that hinder project completion. As previously discussed, insular areas shift priorities and frequently redirect grant project funds, which in some cases expedites project completion and in other cases impedes it. Currently, OIA’s policies for granting project redirection requests vary across insular areas. Specifically, in American Samoa, project redirection is limited to changes within a priority category because the insular area’s grants are issued by priority areas. In contrast, the other insular areas each receive grants as one capital improvement grant and are able to redirect money between projects with widely different purposes. OIA’s policies for granting project redirection requests are also not well documented. While the 2003 version of OIA’s Financial Assistance Manual contained some specific criteria regarding the level of approval that was needed for various project redirection requests, there are no thresholds or specified levels of approval in OIA’s 2009 update to the manual. According to OIA officials, that information was omitted because OIA’s current practice is for grant managers to approve most project redirection requests. 
The officials further stated that although they believe OIA has the authority to deny redirection requests, the office has not done so in the CNMI, even in instances when officials believed requests should have been denied; those requests were instead ultimately approved. For example, as we previously discussed, in 2007, funds that had previously been redirected (from a project updating a Tinian school building to a project developing a wastewater system) were again shifted from the unfinished wastewater system project to a Tinian airport instrument landing system, which has since been suspended. Correspondence documented in the grant file for the wastewater project shows that some OIA officials did not believe the request should be granted and expressed concern that the project redirection request, if approved, would result in a significant funding shortfall in the already underfunded Tinian wastewater project, leading to capital improvement project funds remaining unspent for a considerable length of time. However, this request was eventually approved. In contrast, in American Samoa, OIA has denied project redirection requests in cases where the insular area wanted to redirect project funds from one priority area to another or when the new project was not on American Samoa’s master plan. Project redirection is a particular concern in instances where a project starts and federal money is expended but the project is never completed, leading to the waste of both federal resources and the local governments’ limited technical capacity to implement projects. With regard to federal resources, OIA does sometimes recover funds by disallowing costs for projects that are not completed or by offsetting previously obligated costs by reducing reimbursements to insular areas for other projects. 
However, according to OIA officials, costs are disallowed or offset only about 50 percent of the time such a project redirection situation arises, and the decision as to whether to pursue the costs depends upon the particular project and circumstances. Importantly, OIA does not currently have established criteria to guide these decisions. In the previously discussed example of project redirection from the Tinian wastewater system project in the CNMI to an airport instrument landing system project, OIA reimbursed the CNMI approximately $53,000 for costs associated with completing an environmental assessment for the Tinian wastewater facility, which is expected to be canceled because the remaining funds expired on December 30, 2009. According to OIA officials, these costs could be disallowed, but OIA opted not to pursue them. Even in cases where the costs are recovered, the waste of limited technical capacity on the island may contribute to the insular area’s difficulty in efficiently completing grant projects. Inefficient grant management system. OIA’s current data system for tracking grants is limited in the data elements it contains, leading to inconsistencies in the data that some grant managers rely on for monitoring and oversight activities. Grant managers vary in the degree to which they rely upon OIA’s database, as well as the priority they place on keeping information in the database up to date. While grant managers for all grant types reported relying on the database for information on the amount of funds drawn down from grants and for responding to requests for data from outside parties (such as Interior’s Office of Inspector General and GAO), some told us that they do not find OIA’s database useful and therefore maintain their own separate spreadsheets to track some information, including expiration dates, grant status, and receipt dates for the most recent financial and narrative reports. 
Because these grant managers do not rely on OIA’s database, they do not always keep information on their grants in OIA’s database up to date, leading to inconsistent or incomplete information in the database. Importantly, when grant managers do rely on the database, they may be relying on inaccurate or unreliable data. As we previously discussed, database elements, including grant expiration dates, were sometimes improperly recorded, and we found cases where individual fund drawdowns were not entered into the database in a timely manner. Such occurrences increase the susceptibility of grant funds to mismanagement. As reported in the Domestic Working Group’s Guide to Opportunities for Improving Grant Accountability, consolidating information systems can enable agencies to better manage grants. Along these lines, Interior is currently phasing in a centralized agencywide system—the Financial and Business Management System—that is scheduled to be implemented in OIA in 2011. By design, Interior’s system will incorporate the majority of the department’s financial management functions into one system and will eliminate over 80 departmentwide and bureau-specific systems, including OIA’s grant management system. Interior has already implemented the system in its Bureau of Land Management, Office of Surface Mining Reclamation and Enforcement, and Minerals Management Service. According to the Interior officials leading this effort, the system has a financial assistance module with a real-time interface to Interior’s accounting system and is to be used by all of Interior’s grant-making organizations and programs. Among other capabilities, the system can receive applications electronically and conduct several postaward tasks. 
Specifically, among other things, grantees will be able to submit financial reports and status reports electronically, grant managers will be able to set up electronic reminders for reports with impending due dates, and drawdown requests and payments will have the capability to be processed electronically. During our site visits, some insular area grantees reported that a centralized electronic database that is accessible to them, such as those used by other federal agencies, would make it easier to meet reporting requirements and request fund drawdowns. However, OIA officials expressed some concern about whether the new system will have the flexibility needed to address OIA’s specific needs for grants management. Specifically, the officials are concerned that because Interior’s goal is to standardize the system used by all of Interior’s grant-making organizations and programs, the system may not provide for the level of detail that OIA needs. For example, OIA is often called on to generate reports for external parties, such as members of Congress and auditors, that are sorted by specific fields, including fiscal year, grant type, and insular area. Because Interior’s existing agencywide financial system does not provide this capability, OIA created its current grant management system database. Because Interior will require that all grant-making organizations and programs stop using other databases or spreadsheets once the Financial and Business Management System is implemented, OIA officials want to be sure that the capabilities of the new system will be responsive to their particular needs. In addition to flexibility concerns, OIA officials expressed general concern about the capabilities of the financial assistance module, noting that Interior recently changed the software for the module in response to issues that arose during implementation in other bureaus. 
Interior officials responsible for the conversion to the new system indicated that they do plan to be responsive to the needs of each office and bureau and have means to configure the software to meet any individualized requirements. OIA has made important strides in implementing grant reforms, particularly in its efforts to establish disincentives for insular areas that do not complete grant projects in a timely and effective manner. However, the unique characteristics and situations facing insular area governments, and the need to mindfully balance respect for insular governments’ self-governance and political processes with the desire to promote efficiency in grant project implementation, limit as a practical matter some of the actions that OIA can take to improve the implementation of grant projects. Nonetheless, OIA has not exhausted its opportunities to better oversee grants and reduce the potential for mismanagement. In light of OIA’s concerns that limited authority to withhold or reallocate unexpended grant funds impedes the imposition of sanctions on projects that are wasteful of government resources, it is important that the office has a clear understanding of its available authorities and any additional authorities that are needed to ensure that insular area project personnel, agency heads, and administrative officials more effectively and expeditiously utilize large balances of unexpended funds. In addition, although OIA officials are concerned that limited resources impede more rigorous and proactive grant project monitoring, OIA has not formally communicated its needs to key decision makers and has not developed a workforce plan or other formal process that demonstrates a need for additional resources. Inconsistency among grant managers in the way they consider project redirection requests also raises concerns about OIA’s grant management and oversight processes. 
OIA lacks a uniform policy to help ensure that insular areas are discouraged from redirecting project funds in ways that hinder project completion. Along these lines, when federal money is expended but projects are not completed after redirection occurs, OIA does not have established criteria to guide decisions regarding whether to disallow costs, leading to inconsistency in those decisions, as well. We recommend that the Secretary of the Interior take the following three actions: To improve OIA’s ability to require insular areas to efficiently complete projects and expend funds, we recommend that the Secretary direct Interior’s Office of the Solicitor to prepare a detailed written evaluation of OIA’s existing authorities that could be used to ensure the more efficient use of funds by insular areas, and work with OIA officials to use such authorities as appropriate and to identify the need, if any, for additional authority. We recommend that if the evaluation identifies the need for additional authorities, the Secretary should submit the evaluation to the Congress. To ensure that OIA’s staffing needs are clearly and accurately communicated to key decision makers, we recommend that the Secretary direct OIA to create a workforce plan and reflect in its plan the staffing levels necessary to adopt a proactive monitoring and oversight approach. To reduce the impact that frequently shifting insular area priorities have on insular areas’ incentives to complete projects and efficiently use federal funds, we recommend that the Secretary direct OIA to develop criteria that establish when project redirection requests should be approved and when they should be denied and update its financial assistance manual with these criteria to clarify OIA policy on redirection. In developing these criteria, OIA should adopt guidelines that minimize ineffective project redirection. 
In addition, we recommend that the Secretary direct OIA to develop criteria that establish when offset or disallowed costs should be pursued. We provided a draft of this report for review and comment to the Department of the Interior as well as the Governors of American Samoa, the CNMI, Guam, and the USVI. Interior’s Assistant Secretary for Insular Affairs concurred with our recommendations and commented that our report is a useful analysis. Interior’s written comments are reprinted in appendix III. We also received written comments from the Lieutenant Governor of the CNMI (see app. IV) and the Acting Governor of the USVI (see app. V). Both concurred with our recommendations. The Lieutenant Governor of the CNMI noted that the CNMI had recently adopted a new structure to manage OIA grant funds that addresses many of the insular area challenges we identified. We agree that this new structure, as well as the Capital Improvement Project Office’s efforts to address issues that have delayed ongoing grant projects, will reduce the potential for mismanagement among OIA grant programs in the CNMI. We did not receive comments from the Governors of American Samoa and Guam. We are sending copies of this report to the appropriate congressional committees; the Secretary of the Interior; the Governors of American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands; and other interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
This appendix details the methods we used to assess the Department of the Interior’s Office of Insular Affairs’ (OIA) management of its grant programs to insular areas. For this review, we determined (1) whether previously reported internal control weaknesses have been addressed and, if not, to what extent they are prevalent among OIA grant projects; (2) the challenges, if any, insular areas face in implementing OIA grants; and (3) the extent to which OIA has taken action to improve grant project implementation and management. For our first objective, our review focused on OIA grants that were provided to all insular areas that receive noncompact types of grants—including American Samoa, the Commonwealth of the Northern Mariana Islands (CNMI), Guam, the U.S. Virgin Islands (USVI), and three Freely Associated States (the Federated States of Micronesia, Palau, and the Republic of the Marshall Islands). We excluded compact funds from the review because we are required to regularly review and report on the effectiveness of U.S. oversight of compact funds. Instead, we focused on grants awarded for capital improvement projects, operations and maintenance projects, technical assistance, and other purposes. Our review covered grant projects awarded during fiscal years 1984 through 2009 that were open or had been closed for less than 3 years as of April 27, 2009. To identify key internal control weaknesses that have been identified in the past, as well as key internal controls relevant to grant management, we first summarized the weaknesses that were identified in our insular area-related reports published from 2000 through 2009, Interior Office of Inspector General reports on insular areas over that period, and the three most recent Single Audit reports for American Samoa, the CNMI, Guam, and the USVI. 
We also reviewed several documents outlining policies and procedures applicable to OIA’s grant management and oversight responsibilities to determine the internal control activities that OIA has in place, including (1) the Standards for Internal Control in the Federal Government, (2) OIA’s Financial Assistance Manual and Interior’s Grants Management Common Rule, and (3) best practices in grant management as identified by a working group of federal and state audit agencies. From our review of these documents, we determined that the following internal control activities are particularly relevant to OIA: accurate and timely recording of transactions and events, appropriate documentation of transactions and internal control, proper execution of transactions and events, and controls over information processing. Some of these internal control weaknesses we identified do not apply to all projects in the sample (i.e., focusing on open or closed grant projects), and data were analyzed accordingly. See table 4 for a summary of the internal control weaknesses we considered and their applicability to projects in the sample. We reviewed a random probability sample of 173 grant project files to determine whether and the extent to which internal control weaknesses are prevalent. The sample of 173 projects, stratified by project status (i.e., open or closed), was drawn from the 1,771 projects in OIA’s internal grant management database (see table 5). This sample allowed us to make estimates about all projects in the database. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. Each sample element was subsequently weighted in the analysis to account statistically for all members of the population, including those that were not selected. 
Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All percentage estimates from the file review have margins of error at the 95 percent confidence level of plus or minus 10 percentage points or less, unless otherwise noted. There are limitations to the database, including the fact that the database does not include the full universe of closed grants. Grant files are only retained until the project has been closed for 3 years, after which the physical files are destroyed. In addition, the database has only been in use for all grant types—including capital improvement project grants, operations and maintenance improvement program grants, and technical assistance grants—since fiscal year 2008, and only grants that were open at that time were entered into the database. Despite these limitations, the database is the most comprehensive source of information about OIA grants that includes both open and closed projects. Prior to drawing the sample of grant projects, we modified the database to meet our needs by removing anything outside the scope of our review, including compact funding and reimbursable agreements. To standardize the data at our unit of analysis—individual grant projects—we identified unique projects within the capital improvement block grants given to insular area governments. It was also necessary to unify multiple entries for each project, representing partial payments to the grantees, in order to establish a single database entry for every project that reflected the full amount paid to grantees at the time of our review. 
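The estimation approach described above—weighting each sampled project by its stratum, applying a finite population correction, and attaching a 95 percent confidence interval—can be sketched as follows. This is an illustrative sketch only: the split of the 1,771 projects into open and closed strata and the per-stratum counts of projects exhibiting a weakness are assumed for illustration (the report does not give them); only the overall population (1,771 projects) and the total sample size (173) come from the text.

```python
import math

def weighted_proportion(strata):
    """Estimate a population proportion from a stratified random sample.

    strata: list of dicts with keys
      N    - stratum population size,
      n    - stratum sample size,
      hits - sampled units in the stratum exhibiting the attribute.
    Returns (estimate, margin_of_error) at the 95 percent confidence level.
    """
    total_N = sum(s["N"] for s in strata)
    estimate = 0.0
    variance = 0.0
    for s in strata:
        p = s["hits"] / s["n"]        # stratum sample proportion
        w = s["N"] / total_N          # stratum weight in the population
        estimate += w * p
        # Stratified variance with finite population correction,
        # since each stratum is sampled without replacement.
        fpc = (s["N"] - s["n"]) / (s["N"] - 1)
        variance += w**2 * fpc * p * (1 - p) / s["n"]
    return estimate, 1.96 * math.sqrt(variance)

# Assumed (hypothetical) open/closed split of the 1,771 projects;
# the two sample sizes sum to the reported 173 reviewed files.
strata = [
    {"N": 1100, "n": 100, "hits": 45},  # open projects (assumed counts)
    {"N": 671,  "n": 73,  "hits": 25},  # closed projects (assumed counts)
]
est, moe = weighted_proportion(strata)
print(f"Estimated prevalence: {est:.1%} +/- {moe:.1%}")
```

With these assumed inputs the sketch yields an estimate of roughly 40 percent with a margin of error well under the plus or minus 10 percentage points the report cites, illustrating why each sampled project must carry its stratum weight rather than being counted equally.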
We worked with grant managers at OIA to explain, clarify, and correct incomplete or possibly erroneous identification and status information in the database file they provided to us. To assess the reliability of the data, we interviewed agency officials and grant managers about the data system and elements, how the system is used, and the method of data input, among other areas. We also corroborated the data using OIA grant files. Specifically, when reviewing grant files for each project, we compared select data elements from the database with information in the grant files, corrected the data with any updates that were not reflected in the database, and recorded any inconsistencies or inaccuracies as internal control weaknesses that were present. This allowed us to identify cases where the agency’s electronic record keeping was not accurate, while ensuring that our analyses of OIA data relied on the corrected information. We did not assess the accuracy of data in the grant files that grant recipients submitted to OIA. We determined that the data we used were sufficiently reliable for our purposes. For each project in the sample, we reviewed the grant files maintained by grant managers at OIA headquarters in Washington, D.C., and assessed every project for the presence of relevant internal control weaknesses. Based on that initial review, we ranked the projects by prevalence of the weaknesses. To account for the fact that grant projects were assessed for different numbers of internal control weaknesses, we ranked projects based on the percentage of applicable weaknesses present. We selected 24 of the grant projects with the highest percentages of internal control weaknesses to review in more detail for objectives two and three; those grant projects were located in American Samoa, the CNMI, Guam, and the USVI. 
During this step, we gathered more information from OIA headquarters grant files on the internal control weaknesses demonstrated by the 24 selected projects and examined the grant files for any other phases or funding years of the same project. Follow-up to the file review addressed objectives two and three. We traveled to American Samoa, the CNMI, Guam, and the USVI and met with representatives for 24 projects. During these visits, we interviewed government officials and project managers for each project to follow up on specific issues identified during file reviews, such as late reporting or project delays. In addition, we physically inspected sites for 10 of these projects. We also asked officials and project managers to describe any challenges faced while implementing OIA grant projects and their experiences interacting with OIA officials and grant managers. In American Samoa, we reviewed 7 projects and met with officials from the Office of the Governor, Department of Public Works, American Samoa Power Authority, Territorial Office of Fiscal Reform, Lyndon B. Johnson Tropical Medical Center, Department of Education, and the OIA field representative stationed in American Samoa. In the CNMI, we reviewed 13 projects and met with officials from the Office of the Governor and its Capital Improvement Program Office, Department of Public Works, Public School System, Department of Health, Commonwealth Ports Authority, Commonwealth Utilities Corporation, Office of the Rota Mayor, Office of the Tinian Mayor, Office of the Public Auditor, and the OIA field representative stationed in the CNMI who is responsible for grants. In Guam, we reviewed 1 project and met with officials from the Guam Waterworks Authority and the Office of the Public Auditor. 
In the USVI, we reviewed 3 projects and met with officials from the Virgin Islands Office of Management and Budget, Virgin Islands’ Waste Management Authority, University of the Virgin Islands, Bureau of Economic Research, and the Office of the Virgin Islands Inspector General. For the second report objective, we identified common challenges that insular area projects confront during project implementation by (1) analyzing records of the interviews we conducted with insular area officials and project managers, (2) reviewing correspondence and other documents we received from these officials and project managers, and (3) reviewing correspondence, project status reports, and other documents from OIA headquarters and field office grant files. For the third report objective, we also reviewed relevant OIA and other documents, including OIA’s Financial Assistance Manual (2003 and 2009 versions); official letters to grantees detailing changes to OIA grant management policies and procedures; OIA Budget Justifications; Interior’s Grants Management Common Rule (as codified in 43 C.F.R. §12); and OMB Circulars A-87, A-102, A-110, and A-133, to gather information on policies and procedures relevant to OIA grant programs. In addition, we interviewed OIA grant managers and division directors to obtain information about how OIA’s policies and procedures are applied across different grant types and insular areas, any changes to the policies and procedures, and their perspectives on any additional changes that would improve OIA’s management of grants and their capacity to do so. We also reviewed documents, including OIA memos detailing possible strategies to address problematic grant situations and an intergovernmental working group’s survey of best practices in grant management across government agencies, to obtain information about alternate approaches to grant management challenges. 
Additionally, we interviewed Interior officials who are responsible for implementing the departmentwide Financial and Business Management System to obtain information about how the new system will affect OIA’s grant management. We conducted this performance audit from March 2009 to March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Some examples of OIA grant projects that clearly illustrate how the previously discussed implementation challenges can contribute to delays are discussed below. According to OIA, a capital improvement project grant was provided in 1997 for the addition of a dialysis clinic to the Commonwealth Health Center, a public health facility on the main island of Saipan, CNMI. The dialysis facility was scheduled to be constructed by August 2004 but was not completed until December 2007, according to OIA officials. As of September 2009, the dialysis facility still lacked Medicare and Medicaid certification and was not yet in use (see fig. 4). The main challenges contributing to the delay of this project’s completion include limited local capacity for design and construction, changing local government priorities, and contractor issues. For example, according to CNMI officials, the project implementers lacked the technical expertise to identify a critical structural design flaw in its early stages. When the flaw was discovered, OIA decided to disallow, or not reimburse the CNMI for, roughly $85,000 in project funds. The CNMI then filed a lawsuit against the contractor responsible for the design. The project was halted while it was redesigned, which contributed to the project’s delay and cost overruns. 
Changing local government priorities contributed to the increase in scope from a dialysis clinic to a full-scale dialysis hospital, which, according to CNMI officials, entailed important legal and engineering distinctions. In addition, according to CNMI officials, the Department of Public Works was without the capacity it needed to manage the project scope expansion—all of which contributed to a delay in implementation and a budget increase. To supplement the increased budget, the CNMI and OIA redirected $2.9 million in 2005 away from a wastewater project that had not been completed at the time of our review. In addition, a contractor damaged a crucial piece of equipment that was, according to a CNMI official, then fraudulently certified by another. In May 2007, OIA approved a capital improvement project grant for the CNMI Commonwealth Utilities Corporation’s rehabilitation of its power plant on the main island of Saipan. As of January 2010, the project was substantially complete. The main challenges contributing to the delay of this project’s completion included contractor issues, lack of maintenance funding, and limited local capacity. In 2006, the CNMI Governor declared a state of emergency in response to a power crisis, which allowed standard procurement regulations to be lifted; the Commonwealth Utilities Corporation then hired a contractor to address the problem. The contractor performed poorly, as did its replacement, which resulted in project delay and cost overruns. According to the Commonwealth Utilities Corporation’s officials, during the summer of 2008, the CNMI experienced over 1,300 hours of unscheduled power outages. In the fall of 2008, the CNMI Governor declared another state of emergency to divert resources to repairing engines damaged by wear and lack of maintenance (see fig. 5). 
In addition, according to CNMI officials, the Commonwealth Utilities Corporation’s operating capacity was diminished by a CNMI immigration policy change in 2007, when 22 nonresident workers were forced to resign their jobs at the agency. Further, the Commonwealth Utilities Corporation reported that because roughly 70 percent of the agency’s budget is spent on fuel, it is vulnerable to rising fuel prices. This can add to the challenge of estimating project budgets. The Commonwealth Utilities Corporation has responded to fuel cost fluctuations in the past by diverting funds away from its periodic maintenance and required engine overhauls, which increases the risk for future engine failures. According to OIA, it provided a capital improvement project grant in 1989 for the construction of the Rota Health Center on the island of Rota in the CNMI; however, construction began in 1999, then stopped and did not resume until 2005. As of January 2010, only one of the facility’s two buildings was complete. The other is substantially complete. At the time of our site visit in August 2009, the dental facility was not yet open because there was no dentist to staff it, and additional construction on the dental clinic had not been approved. Further, the Rota Health Center has not received Medicare and Medicaid certification due to an inadequate number of medical staff. According to CNMI officials, several challenges contributed to delays in the project, including contractor issues, limited technical expertise, inadequate maintenance funding, and natural disaster impacts. First, the contractor the Rota Health Center initially hired quit before the project was complete, and the contractor it hired to replace it went out of business and abandoned the project, which caused further delay. Second, the project suffered due to lack of technical expertise. This contributed to project delay and a budget increase. 
For example, the project was delayed and the budget increased in part to redesign the project and address issues identified by both an Interior Office of Inspector General investigation into contractor problems and a U.S. Army Corps of Engineers review of project costs. The latter review resulted in a U.S. Army Corps of Engineers recommendation that OIA disallow $400,000 in material costs. However, OIA chose not to pursue those costs because agency officials did not believe they could identify the appropriate amount to disallow. Third, at the time of our site visit in August 2009, the Rota Health Center faced maintenance challenges including flooding, X-ray machines that only occasionally worked, an elevator—the only one in the building—that had been broken for roughly 3 to 4 months, and significant mold present in the loading area (see fig. 6). There is inadequate maintenance funding and local capacity to address these problems, according to project stakeholders. The Rota Health Center staff had not communicated these problems to project administrators in the Department of Public Works or to the Governor’s Capital Improvement Project office. Despite ongoing delays, project administrators had not visited the site in several months to check the project status. Fourth, the Rota Health Center project budget increased as a result of damage caused by Typhoons Pongsona, Tingting, and Chaba. As of January 2010, the CNMI Capital Improvement Project Administrator reported that the CNMI took steps to address some of the Rota Health Center project challenges, such as correcting the elevator outage and humidity causing mold in the loading area. In 2003, OIA approved a $1.7 million capital improvement project grant for the CNMI’s Department of Public Works to close a dump on the island of Tinian that, according to CNMI officials, does not comply with environmental regulations, and to build a landfill in another location. 
An environmental assessment was completed in 2008, but construction had not begun as of August 2009. The CNMI encountered multiple challenges during this grant’s implementation, including frequently changing priorities, limited land access, and limited local capacity. For example, project administrators experienced confusion over whether permission was needed to develop the landfill at the proposed site—on land leased to the U.S. military—which contributed to the delay. Nonetheless, the Department of Public Works moved forward and completed the environmental assessment in August 2008; according to an OIA official, the assessment cost roughly $500,000 and was almost complete before OIA realized that the selected site had not actually been secured. In 2007, roughly $190,000 was redirected into the landfill project from the Tinian Wastewater project. As of the time of our review in February 2010, no funds had been withdrawn since the redirection—leaving $1.6 million in the project account. In January 2010, the CNMI Capital Improvement Project Administrator reported that the project’s design is ready for solicitation. In fiscal year 2003, OIA approved capital improvement project funding for the CNMI to construct a wastewater system on the island of Tinian. The environmental assessment was completed in June 2008, but as of January 2010, OIA officials reported that the project had not moved into the design phase and is expected to be canceled. OIA’s grant project file indicated that there has been no account activity since November 2007. Fluctuating Tinian Delegation priorities and corresponding project redirection have contributed to the project’s delay. For example, the CNMI redirected funds from a school modernization project into the wastewater system project, then redirected them again into a landfill, an airport instrument landing system, and other projects. 
In 2007, OIA officials initially denied the CNMI’s request to redirect funds for the airport instrument landing system. However, the request was eventually approved to accommodate the Tinian Delegation’s priorities. Of the roughly $8.3 million that was originally awarded and $34,000 that was redirected into the wastewater system project, roughly $6.6 million has since been redirected away from the wastewater system project. After a total of roughly $5.6 million was redirected from the wastewater system project into the airport instrument landing system project, the Tinian Delegation suspended the airport landing system. Although subsequently about $2.2 million was redirected from the airport instrument landing system to a Tinian airport terminals project, in December 2009, the CNMI Capital Improvement Project Administrator reported that the recently elected Tinian Delegation would like to restart the airport instrument landing project. However, that official reported in January 2010 that the CNMI’s current priority, pursuant to the Governor’s October 2009 Declaration of Emergency, is to redirect these funds to repair the Tinian Harbor and its deteriorating seawall. In addition to the contact named above, Jeffery D. Malcolm, Assistant Director; Elizabeth Beardsley; Keesha Egebrecht; Justin Fisher; Laura Gatz; and Isabella Johnson made key contributions to this report. Also contributing to the report were Mark Braza, Emil Friberg, and Alison O’Neill.
The U.S. insular areas of American Samoa, the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and the U.S. Virgin Islands (USVI) face serious economic and fiscal challenges and rely on federal funding to deliver critical services. The Department of the Interior (Interior), through its Office of Insular Affairs (OIA), provides roughly $70 million in grant funds annually to increase insular area self-sufficiency. GAO and others have raised concerns regarding insular areas' internal control weaknesses, which increase the risk of grant fund mismanagement. GAO was asked to determine (1) whether previously reported internal control weaknesses have been addressed and, if not, to what extent they are prevalent among OIA grant projects; (2) the challenges, if any, insular areas face in implementing OIA grant projects; and (3) the extent to which OIA has taken action to improve grant project implementation and management. GAO reviewed a random sample of 173 OIA grant files, conducted site visits, and interviewed OIA and insular area officials. Internal control weaknesses previously reported by GAO and others continue to exist, and about 40 percent of grant projects funded through OIA have these weaknesses, which may increase their susceptibility to mismanagement. These weaknesses, including insufficient reporting and record-keeping discrepancies, can be categorized into three types of activities that may increase the possibility of mismanagement: grant recipient activities, joint activity between grant recipients and OIA, and OIA's grant management activities. Weaknesses associated with grant recipient activities were the most common issues GAO found, encompassing 62 percent of the weaknesses exhibited by OIA grant projects. The joint activity--redirection of grant funds, a practice by which OIA allows insular areas to move grant funds between projects--accounts for 24 percent of the weaknesses present in OIA grant projects. 
While project redirection can be a helpful tool, it can contribute to project mismanagement if not used appropriately. Weaknesses associated with OIA grant management activities, including discrepancies in grant management data, account for 14 percent of the weaknesses in grant projects. Insular areas confront a number of challenges in implementing OIA grants, which can be categorized into project planning challenges such as frequently changing local priorities; project management challenges such as limited local capacity for project implementation; and external risk factors, including the declining economic conditions of American Samoa and the CNMI. While some of these challenges are beyond the insular areas' control, others result from decisions made by the insular area governments. These challenges can result in implementation delays for grant projects. Over the past 5 years, OIA has taken steps to improve project implementation and management. Most notably, OIA established incentives for financial management improvements and project completion by tying a portion of each insular area's annual allocation to the insular governments' efforts in these areas--such as their efforts to submit financial and status reports on time. In addition, OIA established expiration dates for grants to encourage expeditious use of the funds. Despite these and other efforts, some insular areas are still not completing their projects in a timely and effective manner, and OIA faces key obstacles in compelling them to do so. 
Specifically, (1) current OIA grant procedures provide few sanctions for delayed or inefficient projects, and the office is not clear on its authorities to modify its policies; (2) resource constraints impede effective project completion and proactive monitoring and oversight; (3) inconsistent and insufficiently documented project redirection policies do little to discourage insular areas from redirecting grant funds in ways that hinder project completion; and (4) OIA's current data system for tracking grants is limited and lacks specific features that could allow for more efficient grant management. Interior is currently phasing in an agencywide database that is scheduled to be implemented in OIA in 2011, but to be effective, it will require some flexibility to address OIA's needs for grants management.
unable to purchase services. However, over 30 other programs exist. (See appendix for an overview of some of these programs.) These other programs, which collectively spent more than $1 billion a year as of 1996, use one of three strategies aimed at ensuring that all populations have access to care.

Providing incentives to health professionals practicing in underserved areas. Under the Rural Health Clinic and Medicare Incentive Payment programs, providers are given additional Medicare and/or Medicaid reimbursement to practice in underserved areas. In 1996, these reimbursements amounted to over $400 million. In addition, over $112 million was spent on the National Health Service Corps program, which supports scholarships and repays education loans for health care professionals who agree to practice in designated shortage areas. Under another program, called the J-1 Visa Waiver, U.S.-trained foreign physicians are allowed to remain in the United States if they agree to practice in underserved areas.

Paying clinics and other providers caring for people who cannot afford to pay. More than $758 million funded programs that provide grants to help underwrite the cost of medical care at community health centers and other federally qualified health centers. These centers also receive higher Medicare and Medicaid payments. Similar providers also receive higher Medicare and Medicaid payments as “look-alikes” under the Federally Qualified Health Center program.

Paying institutions to support the education and training of health professionals. Medical schools and other teaching institutions received over $238 million in 1996 to help increase the national supply, distribution, and minority representation of health professionals through various education and training programs under Titles VII and VIII of the Public Health Service Act.

number needed to remove federal designation as a shortage area, while 785 shortage areas requesting providers did not receive any providers at all. 
Of these latter locations, 143 had unsuccessfully requested a National Health Service Corps provider for 3 years or more. Taking other provider placement programs into account shows an even greater problem in effectively distributing scarce provider resources. For example, HHS identified a need for 54 physicians in West Virginia in 1994, but more than twice that number—116 physicians—were placed there using the National Health Service Corps and J-1 Visa Waiver programs. We identified eight states where this occurred in 1995. While almost $2 billion has been spent in the last decade on Title VII and VIII education and training programs, HHS has not gathered the information necessary to evaluate whether these programs had a significant effect on changes that occurred in the national supply, distribution, or minority representation of health professionals or their impact on access to care. Evaluations often did not address these issues, and those that did address them had difficulty establishing a cause-and-effect relationship between federal funding under the programs and any changes that occurred. Such a relationship is difficult to establish because the programs have other objectives besides improving supply, distribution, and minority representation and because no common goals or performance measures for improving access had been established.

HHS uses two systems to identify and measure underservice: the Health Professional Shortage Area (HPSA) system and the Medically Underserved Area (MUA) system. Despite 3 decades of federal efforts, the number of areas HHS has classified as underserved using these systems has not decreased. First used in 1978 to place National Health Service Corps providers, the HPSA system is based primarily on provider-to-population ratios. In general, HPSAs are self-defined locations with fewer than one primary care physician for every 3,500 persons. 
Developed at about the same time, the MUA system more broadly identifies areas and populations considered to have inadequate health services, using the additional factors of poverty and infant mortality rates and percentage of population aged 65 or over. We previously reported on the long-standing weaknesses in the HPSA and MUA systems in identifying the types of access problems in communities and in measuring how well programs focus services on the people who need them, including the following:

The systems have relied on data that are old and inaccurate. About half of the U.S. counties designated as medically underserved areas since the 1970s would no longer qualify as such if updated using 1990 data.

Formulas used by the systems, such as physician-to-population ratios, do not count all primary care providers available in communities, overstating the need for additional physicians in shortage areas by 50 percent or more.

The systems fail to count the availability of those providers historically used by the nation to improve access to care, such as National Health Service Corps physicians and U.S.-trained foreign physicians, as well as nurse practitioners, physician assistants, and nurse midwives.

As a result, the systems do not accurately identify whether access problems are common for everyone living in the area, or whether only specific subpopulations, such as the uninsured poor, have difficulty accessing primary care resources that are already there but underutilized. Without additional criteria to identify the type of access barriers existing in a community, programs may not benefit the specific subpopulation with insufficient access to care. The Rural Health Clinic program, established to improve access in remote rural areas, illustrates this problem. 
Under the program, all providers located in rural HPSAs, MUAs, and HHS-approved state-designated shortage areas can request rural health clinic certification to receive greater Medicare and Medicaid reimbursement. However, if the underserved group is the uninsured poor, such reimbursement does little or nothing to address the access problem. Most of the 76 clinics we surveyed said the uninsured poor made up the majority of underserved people in their community, yet only 16 said they offered health services on a sliding-fee scale based on the individual’s ability to pay for care. Even if rural health clinics do not treat the group that is actually underserved, they receive the higher Medicare and Medicaid reimbursement, without maximum payment limits if operated by a hospital or other qualifying facility. These payment benefits continue indefinitely, even if the clinic is no longer in an area that is rural and underserved. Last February, we testified before this Subcommittee that improved cost controls and additional program criteria were needed for the Rural Health Clinic program. In August of this year, the Balanced Budget Act of 1997 made changes to the program that were consistent with our recommendations. Specifically, the act placed limits, beginning next January, on the amount of Medicare and Medicaid payments made to clinics owned by hospitals with more than 50 beds. The act also made changes to the program’s eligibility criteria in the following three key areas:

In addition to being located in a rural HPSA, MUA, or HHS-approved state-designated shortage area, the clinic must also be in an area in which the HHS Secretary determines there is an insufficient number of health care practitioners.

Clinics are allowed only in shortage areas designated within the past 3 years. 
Existing clinics that are no longer located in rural shortage areas can remain in the program only if they are essential for the delivery of primary care that would otherwise be unavailable in the area, according to criteria that the HHS Secretary must establish in regulations by 1999. Limiting payments will help control program costs. But until, and depending on how, the Secretary defines the types of areas needing rural health clinics, HHS will continue to rely on flawed HPSA and MUA systems that assume providing services to anyone living in a designated shortage area will improve access to care. HHS has been studying changes needed to improve the HPSA and MUA systems for most of this decade, but no formal proposals have been published. In the meantime, new legislation continues to require the use of these systems, thereby increasing the problem. For example, the newly enacted Balanced Budget Act authorizes Medicare to pay for telehealth services—consultative health services through telecommunications with a physician or qualifying provider—for beneficiaries living in rural HPSAs. However, since HPSA qualification standards do not distinguish rural communities that are located near a wide range of specialty providers and facilities from truly remote frontier areas, there is little assurance that the provision will benefit those rural residents most in need of telehealth services. To make the Rural Health Clinic program and other federal programs more accountable for improving access to primary care, HHS will have to devise a better management approach to measure need and evaluate individual program success in meeting this need. If effectively implemented, the management approach called for under the Results Act offers such an opportunity. Under the Results Act, HHS would ask some basic questions about its access programs: What are our goals and how can we achieve them? How can we measure our performance? 
How will we use that information to improve program management and accountability? These questions would be addressed in annual performance plans that define each year’s goals, link these goals to agency programs, and contain indicators for measuring progress in achieving these goals. Using information on how well programs are working to improve access in communities, program managers can decide whether federal intervention has been successful and can be discontinued, or if other strategies for addressing access barriers that still exist in communities would provide a more effective solution. The Results Act provides an opportunity for HHS to make sure its access programs are on track and to identify how efforts under each program will fit within the broader access goals. The Results Act requires that agencies complete multi-year strategic plans by September 30, 1997, that describe the agency’s overall mission, long-term goals, and strategies for achieving these goals. Once these strategic plans are in place, the Results Act requires that for each fiscal year, beginning fiscal year 1999, agencies prepare annual performance plans that expand on the strategic plans by establishing specific performance goals and measures for program activities set forth in the agencies’ budgets. These goals are to be stated in a way that identifies the results—or outcomes—that are expected, and agencies are to measure these outcomes in evaluating program success. Establishing performance goals and measures such as the following could go far to improve accountability in HHS’ primary access programs. The Rural Health Clinic program currently tracks the number of clinics established, while the Medicare Incentive Payment program tracks the number of physicians receiving bonuses and dollars spent. To focus on access outcomes, HHS will need to track how these programs have improved access to care for Medicare and Medicaid populations or other underserved populations. 
Success of the National Health Service Corps and health center programs has been based on the number of providers placed or how many people they served. To focus on access outcomes, HHS will need to gather the information necessary to report the number of people who received care from National Health Service Corps providers or at the health centers who were otherwise unable to access primary care services available in the community. HHS can use an existing national survey to measure progress toward its national access goals by counting the number of people across the nation who do and do not have a usual source of primary care. For those people without a usual source of primary care, the survey categorizes the reasons for this problem that individual programs may need to address, such as people’s inability to pay for services, their perception that they do not need a physician, or the lack of provider availability. Although HHS officials have started to look at how individual programs fit under these national goals, they have not yet established links between the programs and national goals and measures. Such links are important so resources can be clearly focused and directed to achieve the national goals. For example, HHS’ program description, as published in the Federal Register, states that the health center programs directly address the Healthy People 2000 objectives by improving access to preventive and primary care services for underserved populations. While HHS’ fiscal year 1998 budget documents contain some access-related goals for health center programs, they also contain other goals, such as creating 3,500 jobs in medically underserved communities. Although creating jobs may be a desirable by-product of supporting health center operations, it is unclear how this employment goal ties to national objectives to ensure access to care. 
Under the Results Act, HHS has an opportunity to clarify the relationships between its various program goals and define their relative importance at the program and national levels. Viewing program performance in light of program costs—such as establishing a unit cost per output or outcome achieved—can help HHS and the Congress make informed decisions on the comparative advantage of continuing current programs. For example, HHS and the Congress could better determine whether the effects gained through the program were worth their costs—financial and otherwise—and whether the current program was superior to alternative strategies for achieving the same goals. Unfortunately, in the past, information needed to answer these questions has been lacking or incomplete, making it difficult to determine how to get the “biggest bang for the buck.” For example, HHS has not used cost information to allocate resources between its scholarship and loan repayment programs. While both of these programs pay education expenses for health professionals who agree to work in underserved areas, by law, at least 40 percent of amounts appropriated each year must fund the scholarship program and the rest may be allocated at the HHS Secretary’s discretion. However, our analysis found that the loan repayment program costs the federal government at least one-fourth less than the scholarship program for a year of promised service and was more successful in retaining providers in these communities. Changing the law to allow greater use of the loan repayment program would provide greater opportunity to stretch program dollars and improve provider retention. Comparisons between different types of programs may also indicate areas of greater opportunity to improve access to care. However, the per-person cost of improving access to care under each program is unknown. 
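The scholarship versus loan repayment trade-off described above can be illustrated with a small arithmetic sketch. Only two facts come from this statement: the statutory floor that at least 40 percent of appropriations must fund scholarships, and the finding that loan repayment costs at least one-fourth less per year of promised service. The appropriation amount and the per-year costs below are hypothetical.

```python
# Illustrative sketch of how the statutory 40-percent scholarship floor
# limits the service years an appropriation can buy. The floor and the
# "one-fourth less" cost relationship are from the text; the dollar
# figures are hypothetical.

SCHOLARSHIP_FLOOR = 0.40  # by law, at least 40% must fund scholarships


def service_years_purchased(appropriation, scholarship_share,
                            scholarship_cost_per_year,
                            loan_repayment_cost_per_year):
    """Years of promised service bought under a given funding split."""
    if scholarship_share < SCHOLARSHIP_FLOOR:
        raise ValueError("scholarship share must be at least 40 percent")
    scholarship_funds = appropriation * scholarship_share
    loan_funds = appropriation - scholarship_funds
    return (scholarship_funds / scholarship_cost_per_year
            + loan_funds / loan_repayment_cost_per_year)


# Hypothetical costs: loan repayment one-fourth cheaper per service year.
sch_cost, loan_cost = 100_000, 75_000
min_split = service_years_purchased(100e6, 0.40, sch_cost, loan_cost)
all_sch = service_years_purchased(100e6, 1.00, sch_cost, loan_cost)
```

Under these illustrative numbers, the minimum-allowed 40-percent scholarship split buys 1,200 service years, versus 1,000 if the entire appropriation funded scholarships, which is why relaxing the statutory floor could stretch program dollars.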
Collecting and reporting reliable information on the cost-effectiveness of HHS programs is critical for HHS and the Congress to decide how best to spend scarce federal resources. Although the Rural Health Clinic program and other federal programs help to provide health care services to many people, the magnitude of federal investment creates a need to hold these programs accountable for improving access to primary care. The current HPSA and MUA systems are not a valid substitute for developing the program criteria necessary to manage program performance along these lines. The management discipline provided under the Results Act offers direction in improving individual program accountability. Once it finalizes its strategic plan, HHS can develop in its annual performance plans individual program goals for the Rural Health Clinic program and other programs that are consistent with the agency’s overall access goals, as well as outcome measures that can be used to track each program’s progress in addressing access barriers. services, would have greater effect in achieving HHS’ national primary care access goals. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or members of the Subcommittee may have.

Program (federal funding, dollars in millions):
Rural Health Clinic ($295)
Medicare Incentive Payment ($107)
National Health Service Corps ($112)
J-1 Visa Waiver ($0)
Health Center Grants ($758)
Title VII/VIII Health Education and Training Programs ($238)
GAO discussed the Rural Health Clinic Program in the broader context of GAO's past reviews of federal efforts to improve access to primary health care, focusing on: (1) the common problems GAO found and some recent initiatives to address them; and (2) how the type of management changes called for under the Government Performance and Results Act of 1993 can help the Rural Health Clinic and related programs improve accountability. GAO noted that: (1) GAO's work has identified many instances in which the Rural Health Clinic program and other federal programs have provided aid to communities without ensuring that this aid has been used to improve access to primary care; (2) in some cases, programs have provided more than enough assistance to eliminate the defined shortage, while needs in other communities remain unaddressed; (3) GAO's work has identified a pervasive cause for this problem: a reliance on flawed systems for measuring health care shortages; (4) these systems often do not work effectively to identify which programs would work best in a given setting or how well a program is working to meet the needs of the underserved once it is in place; (5) for several years, the Department of Health and Human Services has tried unsuccessfully to revise these systems to address these problems; and (6) the goal-setting and performance measurement discipline available under the Results Act, however, appears to offer a suitable framework for ensuring that programs are held accountable for improving access to primary care.
SSA’s primary mission is to pay benefits under its Old-Age, Survivors, and Disability Insurance and Supplemental Security Income programs, in accordance with Titles II and XVI of the Social Security Act of 1935 (the Act), as amended. In addition, SSA issues the Social Security Number (SSN)—a unique identifier assigned to each person through a process known as “enumeration”—as a way of tracking individuals’ work activities and the benefits paid to retired workers and eligible family members. In order to properly administer payments, SSA also tracks death information of SSN-holders.

SSA issues SSNs and uses them to administer its programs, including tracking U.S. workers’ earnings in order to determine the types and amounts of benefits individuals may be eligible for. As a result, SSA has also historically collected death information about SSN-holders so it does not pay Social Security benefits to deceased individuals and to establish benefits for survivors. The 57 vital records jurisdictions—the 50 states, New York City, the District of Columbia, and five territories—are responsible for registering deaths. Fifty-three of these jurisdictions, along with a number of other parties, provide SSA with decedents’ names, dates of death, dates of birth, and SSNs. SSA receives state death data under contracts with the 50 states, New York City, the District of Columbia, and Puerto Rico. When SSA receives a report of death, it matches that information against corresponding information in its databases, verifies the information for certain cases, and records the individual as deceased. Various entities, including federal agencies, can obtain SSA’s death data. The complete file, which we refer to as “SSA’s full death file,” is available to certain eligible entities. A subset of the full death file, which SSA calls “the Death Master File (DMF),” is available to the public. 
The Social Security Act requires that SSA share its full death file, to the extent feasible, with agencies that provide federally funded benefits (for the purposes of this report, we refer to these as “benefit-paying agencies”), provided the arrangement meets statutory requirements. However, SSA may not include death data received from states in the DMF, which can be accessed publicly. Agencies may use the data to determine if individuals receiving benefits under their respective programs are deceased, and thus no longer entitled to those benefits. GAO has noted the value of using this information for guarding against improper payments. For example, in a June 2013 report, we recommended that the U.S. Department of Agriculture (USDA) match its payment records against SSA’s full death file to prevent improper payments to deceased farmers. Given the focus on guarding against improper payments, there has been new emphasis on how agencies access critical data, such as death records, while maintaining the security of sensitive information such as SSNs. The Improper Payments Elimination and Recovery Improvement Act of 2012 established the Do Not Pay Initiative and requires federal agencies to review a number of databases, as appropriate, including the DMF, to verify eligibility prior to making payments with federal funds. A component within the U.S. Department of Treasury (Treasury) administers the Do Not Pay Business Center, which operates the centralized portal through which agencies can verify individuals’ eligibility to receive payments. Recent legislative proposals have also sought to encourage federal agencies to use SSA’s death information. 
For example, a Senate bill introduced in July 2013 would provide for federal agency access to the full death file for specified purposes, such as ensuring authorized payments and facilitating other agency functions, including public health or safety, law enforcement, tax administration, health administration oversight, and debt collection, as determined appropriate by the SSA Commissioner. At the same time, other proposals have sought to limit public access to SSA’s death data. For example, proposed legislation introduced in the House in July 2013 seeks to deter identity theft and tax fraud by limiting public access to the DMF, according to the bill’s sponsor. The current administration has also advanced a proposal that includes critical elements of both legislative proposals, and SSA officials have publicly supported their central provisions. SSA receives death reports from a variety of sources, including states, family members, funeral directors, post offices, financial institutions, and other federal agencies. According to agency officials, SSA received about 7 million death reports in 2012. However, except for reports submitted by states, it does not collect data identifying how many reports come from each source. Officials also said nearly all death reports are received within 30 days of the date of death. Because of states’ custodial role in collecting and maintaining death records, SSA considers states to be a critical partner in its collection of death information. Thirty-four states, the District of Columbia, and New York City submit their death reports through an Electronic Death Registration System (EDRS), which automates the electronic registering and processing of death reports in order to improve timeliness and accuracy. SSA considers reports submitted by states through EDRS to be the most accurate because these systems are used by most states to verify the name and SSN of the decedent with SSA databases before the report is submitted to SSA. 
To get death reports from the states, SSA has established contracts that set forth a payment structure to compensate states for the reasonable costs of providing these records, in accordance with the Social Security Act. Payments are higher for reports submitted via EDRS and pre-verified with SSA databases, and for those that are submitted relatively soon after decedents’ deaths. For example, for calendar year 2013, SSA paid $2.93 for each EDRS report that was pre-verified and submitted to SSA within 6 days of the date of death, compared to $0.82 for non-EDRS reports submitted within 120 days of the date of death. Because of the time saved through automated transmission of data and the use of pre-verification to ensure accuracy, SSA has encouraged the expanded use of EDRS. The contracts with the states provide that SSA will not share this information, except as authorized by federal law and section 205(r) of the Act. Some states and other sources provide death information voluntarily to SSA through methods other than EDRS. For example, funeral directors routinely submit a form that includes the decedent’s full name and SSN, as provided by the decedent’s family. In addition, family members often contact SSA field offices directly to report a death. SSA considers reports from families and funeral directors generally to be accurate because those sources have first-hand knowledge of the death and the decedent’s identity. SSA views death reports from post offices, financial institutions, and other government agencies generally to be less accurate because SSA does not consider these sources to have first-hand knowledge of the death. Death reports from sources other than states and federal agencies are generally provided to SSA field offices. SSA processes death reports through its Death Alert, Control, and Update System (DACUS), which matches death information with information in other SSA records so payments to deceased beneficiaries can be stopped. As shown in figure 1, death reports from some sources are sent to DACUS directly, while reports from other sources are entered into DACUS by field office staff. 
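The tiered payment structure described above can be captured in a minimal sketch. Only the two calendar year 2013 rates quoted in the text ($2.93 and $0.82) are from the source; the contracts contain additional tiers whose rates are not stated here, so those cases are left undefined.

```python
# Sketch of the state death-report payment tiers described in the text.
# Only the two quoted 2013 rates are from the source; rates for other
# tiers are not stated, so they are returned as None rather than guessed.

def report_payment(edrs_preverified: bool, days_after_death: int):
    """Per-report payment SSA made to a state in 2013 (known tiers only)."""
    if edrs_preverified and days_after_death <= 6:
        return 2.93  # EDRS report, pre-verified, within 6 days of death
    if not edrs_preverified and days_after_death <= 120:
        return 0.82  # non-EDRS report within 120 days of death
    return None  # rate for other tiers not stated in the text
```

The higher rate for fast, pre-verified EDRS submissions reflects the incentive structure the text describes: SSA pays more for reports that arrive sooner and have already been checked against its databases.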
DACUS matches the information in the death reports with SSA’s benefit payment systems to determine if the decedents are currently receiving Social Security program benefits (i.e. whether they are program beneficiaries). It then matches the information to SSA’s database of all SSN-holders, known as the Numerical Index File (Numident). Information in death reports from certain sources (see fig. 2) is also verified by field office staff. If the name, date of birth, gender and SSN all match a record on the Numident, SSA marks that record with a death indicator. The Numident is updated with death information on a daily basis. Numident records with new death indicators are then extracted for inclusion in the death data file also on a daily basis. SSA does not track how long it takes from the time it receives a death report until the death is recorded in the Numident. SSA does not independently verify all death reports it receives. In accordance with policy, the agency only verifies death reports for Social Security beneficiaries, and then verifies only those reports from sources it considers less accurate. For example, SSA only verifies death reports for persons currently receiving Social Security retirement or disability benefits because, according to agency officials, it is essential to SSA’s core mission to stop payments to deceased Social Security beneficiaries. As a result, death reports for non-beneficiaries are not verified. Officials told us it would be difficult to verify death reports for individuals who do not receive Social Security benefits because SSA would not likely have current contact information for these individuals or their family members. As shown in figure 2, SSA verifies death reports of Social Security beneficiaries received from other federal agencies; third parties that learn about the death, such as post offices and financial institutions; and states that do not submit reports via EDRS. 
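The matching step described above, in which a death indicator is set only when the name, date of birth, gender, and SSN in a report all match a Numident record, can be sketched as follows. The record layout and field names are illustrative assumptions, not SSA's actual systems.

```python
# Minimal sketch of the Numident matching rule described in the text:
# a death indicator is set only if all four identifying fields match.
# The dict-based record layout and field names are assumptions.

def apply_death_report(report: dict, numident: dict) -> bool:
    """Mark a Numident record deceased when all identifying fields match."""
    record = numident.get(report["ssn"])
    if record is None:
        return False  # no matching record: report is not included (see text)
    if all(record[f] == report[f] for f in ("name", "dob", "gender", "ssn")):
        record["death_indicator"] = True
        record["date_of_death"] = report["date_of_death"]
        return True
    return False


def daily_extract(numident: dict) -> list:
    """Records with death indicators, extracted daily for the death file."""
    return [r for r in numident.values() if r.get("death_indicator")]
```

The sketch also mirrors the gap the text later returns to: a report whose fields do not all match simply returns without setting an indicator, so that death never reaches the extracted death file.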
SSA considers these sources to be less accurate because they do not have first-hand knowledge of the death and, unlike states using EDRS, do not perform any verification before submitting the death report to SSA. To verify reports, SSA field office staff typically contact families or other parties with first-hand knowledge of the death to confirm the fact and date of death and confirm the decedent’s SSN. SSA does not verify death reports submitted by families, funeral directors, or states using EDRS. According to SSA officials, families and funeral directors have first-hand knowledge of decedents’ identities and deaths, and the information in death reports submitted through EDRS is typically already verified with SSA databases. They also noted, for example, that SSA would very quickly find out about an erroneous report because, if that beneficiary were still alive, he or she would quickly contact SSA once benefit payments stopped. SSA does not track the proportion of death reports it verifies or how long verifications take. Moreover, according to agency officials, SSA has never performed an analysis validating the accuracy of the various sources of death reports, but instead has based its decisions about which ones to verify on general experience over time. For example, officials told us that for death reports submitted by family members, general experience over many years has shown that a large portion of these reports are accurate. Some death reports, including those that SSA cannot match with a Numident record, are not included in SSA’s death data. Agency officials told us that staff conduct some follow-up steps to see if they can match the information in these reports with other agency records, but if these efforts are unsuccessful, the reports are not included. SSA also does not attempt to follow up with the source of these reports. 
According to agency officials, it is unlikely the sources would know any additional information beyond what they already provided. Moreover, a subsequent death report for the same individual may arrive from another source. They also added that federal privacy laws may prevent SSA from providing identifying information on the decedent because the individual’s living status is unclear. SSA does not track these cases, so officials were unable to tell us how often this occurs. There are also some deaths that SSA cannot reasonably be expected to include in its death information. These include deaths not reported to SSA because the identity of the decedent cannot be established or a body has not been recovered. While improper benefit payments to these individuals may occur, it would not be appropriate to attribute them to lack of a record in SSA’s death data. After receiving death reports and updating the Numident with death information, SSA makes the information available, in the appropriate form, to federal agencies and other parties (see fig. 3). SSA provides the full death file, containing over 98 million records, directly to federal benefit-paying agencies that have an agreement with SSA for use in preventing improper payments to deceased beneficiaries or program participants. SSA also extracts Numident death records that are reported by non-state sources to create the DMF, which contains over 87 million records. SSA makes the DMF available publicly via the Department of Commerce’s National Technical Information Service (NTIS), from which any interested party or member of the public—including other federal agencies—can make a one-time purchase, subscribe for periodic updates, or subscribe to an on-line query service. For example, financial institutions or firms conducting background investigations can purchase the DMF from NTIS and subscribe to receive monthly or weekly updates. 
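The two products described above differ only in which records they include: the full death file holds every death record, while the public DMF omits records reported by states. A minimal sketch, assuming a hypothetical "source" field on each record; the rule that state-reported deaths are excluded from the public file is from the text, everything else is illustrative.

```python
# Sketch of the two death-data products described in the text. The
# 'source' field and record shape are assumptions for illustration.

def full_death_file(death_records: list) -> list:
    """All death records, shared with eligible benefit-paying agencies."""
    return list(death_records)


def public_dmf(death_records: list) -> list:
    """Public Death Master File: state-reported deaths are excluded."""
    return [r for r in death_records if r["source"] != "state"]
```

This filtering explains the size difference the text reports: the full file contains over 98 million records, while the DMF, with state-reported records stripped out, contains over 87 million.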
SSA does not guarantee the completeness or accuracy of its death data, stating that SSA does not have a death record for all deceased individuals. SSA also informs users of its death data that all deaths should be verified before any action, such as stopping benefits, is taken. SSA’s methods for processing death reports may result in inaccurate, incomplete, or untimely information for users of its death data. Consequently, this could lead to improper payments if benefit-paying agencies rely on this data. The specific procedures include (1) verifying a limited portion of death reports, (2) not including death reports that do not match with the Numident file, and (3) not performing additional reviews of reports of deaths that occurred years or decades in the past. Because SSA does not verify death reports from sources it considers most accurate, the agency risks having erroneous information in its death data, such as including living individuals or not including deceased individuals. Analysis we performed on records in the full death file that were erroneously included showed that most of these errors would not have occurred if SSA had verified the death reports when it received them. We identified nearly 8,200 deaths SSA deleted from its death data between February 2012 and January 2013. These data reflect cases where a death report matched a record in the Numident and SSA marked, then later removed, a death indicator for that record. SSA officials told us this could occur because the decedent turned out to be alive or was misidentified as another individual, or as a result of data entry errors. We drew a random but non-generalizable sample of 46 cases from this group and asked SSA to search its records to see if it could determine the reasons for deleting them from its death data. In 28 of these cases, SSA was able to identify a reason for deletion. 
Of these, 12 were false death reports filed while the reported decedent was still alive, and 4 involved decedents for whom identifying information—such as SSNs—of other people was mistakenly included in the death reports. Separately, SSA was also able to determine that 13 of the 46 cases were reported by either family members or funeral directors—sources SSA considers more accurate and from which it does not verify death reports. Nine of the 46 cases involved non-beneficiaries, whose death reports SSA also does not verify. Another SSA practice—not contacting the source of a death report that does not match a Numident record—poses a risk to the data’s completeness. As described earlier, if SSA staff cannot match a death report to a corresponding Numident record, they do not contact the source that submitted the report or undertake any other outside investigation to resolve the discrepancy. SSA’s OIG has found that these omissions are substantial. In one case, data for about 182,000 deceased Supplemental Security Income recipients were not included in SSA’s death data. In another, it found that as many as 1.2 million deceased Old Age and Survivors Insurance beneficiaries were not included in the death data. The OIG determined that these gaps occurred because SSA could not match the identifying information for these individuals included in the death reports or other SSA records with Numident records. Therefore, no death indicator was added to the Numident records. Samples of the cases drawn for the OIG’s reviews showed that these individuals had been deceased for an average of nearly 17 years. As a result of this practice, other federal benefit-paying agencies relying on these data could make improper benefit payments. 
Social Security Administration, Office of the Inspector General, Title XVI Deceased Recipients Who Do Not Have Death Information on the Numident, A-09-12-22132 (Baltimore, MD: May 3, 2013); and Title II Deceased Beneficiaries Who Do Not Have Death Information on the Numident, A-09-11-21171 (Baltimore, MD: July 9, 2012). The OIG found that Social Security benefit payments to a sample of these deceased individuals had been terminated. Finally, we also identified cases in which death reports submitted to SSA in early 2013 listed dates of death that were more than a year old. Specifically, we found about 500 records in which the date of death recorded had occurred in 2011 or earlier; in about 200 of these, the date of death was recorded to be 10 or more years before SSA received the death report. For example, in 11 of the cases, the date of death was in 1976; in another 11, the date was 2004. This is of concern because, if these dates of death are accurate, SSA and other agencies may have been at risk of paying benefits to these individuals for long periods after they died. SSA officials were not able to explain with certainty why this was occurring, but suggested some cases might be the result of data entry errors, and in others, deaths of non-beneficiaries may not be reported to SSA until spouses or families become eligible for survivor’s benefits. They informed us that SSA payment systems would identify benefit payments the agency made after these deaths occurred. When these reports are sent to field offices for verification or other development, they added, it could take an extended period of time to complete because the contact information for someone who died years ago may not be available. We found other instances of potentially erroneous information in the death data that raise questions about its accuracy and usefulness. 
These included:

- 130 records where the recorded date of death was before the date of birth;
- 1,941 records where the recorded age at death was 115 or higher; and
- 1,826 records where the recorded death preceded 1936, the year SSNs were first issued, although these decedents had SSNs assigned to them.

Agency officials told us SSA has never investigated how these errors occurred or whether they may affect payments to Social Security and other federal program beneficiaries. They did not think these types of errors would have resulted in improper benefit payments because they involved persons recorded as deceased. SSA officials said some of these anomalies were likely associated with records added prior to the mid-1970s that were manually processed. For example, SSA staff could have incorrectly keyed in a date of birth that occurred after the reported date of death. Officials added that SSA has undertaken or will soon undertake several initiatives aimed at correcting these types of errors and preventing them in the future. Among those they said have already been implemented are the following:

- SSA is using an edit check to identify records showing a date of birth that occurs after a date of death and taking corrective action before the report is processed further. SSA, however, has not decided whether to make corrections to these dates in records processed before the check was implemented.
- In December 2012, SSA identified cases and terminated benefits for individuals over 115 years old whose Numident record showed they were deceased and who had their benefits suspended or were entitled only to Medicare benefits. SSA plans to repeat this match for fiscal year 2013. SSA officials told us that, as a result of this initiative, the agency corrected about 17,000 cases to reflect terminated benefits due to death.
- In June 2013, SSA began making monthly comparisons of Numident records containing death information with its Title II and Title XVI payment records. 
It is sending alerts to field offices to resolve cases showing individuals who are receiving or scheduled to receive benefits in the near future even though they are listed as deceased. SSA officials told us that, as a result of the initiative, the agency corrected about 14,500 payment records to reflect suspended or terminated payment status due to death. In September 2013, SSA began a data exchange with CMS to identify beneficiaries ages 90 to 99 who are still receiving Title II benefits but have not used Medicare for 3 years or more and have no other insurance or nursing home information in their records. Agency officials stated that they identified and referred to SSA regional offices for action about 18,600 cases. SSA also plans to introduce a computer code that can be used to terminate benefits for certain Title II beneficiaries who are 115 or older, whose benefits have been continuously suspended for 7 or more years, and for whom SSA does not have a death record. The Social Security Act requires SSA to share its full death file, to the extent feasible, with federal benefit-paying agencies for the purpose of preventing improper payments, if the agency reimburses SSA for its reasonable costs and the arrangement does not conflict with SSA’s duties with respect to state data. However, SSA has no guidance on its process for determining an agency’s eligibility. As of September 2013, seven federal benefit-paying agencies obtained SSA’s full set of death information, including the information reported by states, directly from SSA:

- Centers for Medicare & Medicaid Services (CMS)
- Department of Defense (Defense Manpower Data Center—DMDC)
- Department of Veterans Affairs (VA)
- Internal Revenue Service (IRS)
- Office of Personnel Management (OPM)
- Pension Benefit Guaranty Corporation (PBGC)
- Railroad Retirement Board (RRB)

According to SSA officials, CMS also shares the full death file it obtains from SSA with the U.S. 
Department of Health and Human Services’ Health Resources and Services Administration, under an information exchange agreement between CMS and SSA, as authorized by 42 U.S.C. § 405(r)(9). To determine whether agencies are eligible to receive access under the Act, officials told us that SSA asks the requesting agencies to explain how their proposed use of the information is in accordance with the allowable use outlined in the Act. Specifically, SSA bases its initial eligibility determinations on whether: (1) the agencies pay federally-funded benefits, and (2) the agencies propose to use the full death file to ensure proper payment of those benefits. According to officials, once SSA has determined an agency is eligible, it ensures the remaining statutory requirements regarding cost reimbursement and adherence to SSA’s duties with respect to state data are met, and establishes an information exchange agreement with the agency. However, in making this determination, SSA officials told us they do not use any criteria more specific than the language of the Act to guide decision-making, nor have they developed guidance for the procedures they follow. SSA’s determinations as to whether agencies meet these requirements have varied. In one example of an eligibility determination, officials told us they provide the full death file to IRS for purposes that include allowing IRS to confirm or deny taxpayers’ requests for exemptions and standard deductions. In addition, officials told us SSA would generally have the authority to share the full death file with the OIG at benefit-paying agencies for the purpose of ensuring proper payment of federally-funded benefits. In fact, SSA officials approved a request for access to the full death file for the OIG at the Department of Health and Human Services. 
However, officials also told us that Treasury, which operates the Do Not Pay Business Center for a purpose similar to the OIGs’—preventing improper payments—is not eligible to receive the full death file. Officials provided no documentation outlining their rationale for this determination, but explained that one concern with providing the full file to Treasury was that SSA was not authorized to provide state-reported death data for Treasury to distribute to other agencies. Because SSA has some discretion in making such determinations and agencies’ circumstances may differ, this variation in determinations may not represent inconsistency with the Act. However, since SSA does not make available written guidance describing the criteria it uses to determine whether agencies meet the statutory requirements, there is no assurance that SSA’s eligibility determinations under the Act are consistent across agencies. Without written guidance explaining SSA’s criteria for approving or denying agencies’ requests for the full death file, such as the factors SSA considers in deciding whether an agency provides federal benefits, potential recipient agencies may not know whether they are eligible. For example, officials at PBGC told us that they undertook a comprehensive review of all the agency’s applicable legal authorities because they were unsure whether the benefits the agency paid met the requirements of the Act. According to federal internal control standards, agencies should have written documentation, such as this type of guidance, and it should be readily available for examination. The absence of written guidance may also pose a risk to the consistency of SSA’s future determinations in the event of staff turnover, changes in administration, or any other disruption that could lead to a loss of institutional knowledge. 
In a September 2008 report, we found that there was a risk to the management and operational continuity of ongoing projects at SSA due to a lack of written policies and procedures. We found that during organizational change, project objectives, designs, and evaluation may be affected absent comprehensive written policies and procedures. Also, until recently, SSA did not have an officially designated organizational component for monitoring use of death data or making decisions on access to its full death data, which could have introduced additional uncertainty to those decisions. However, officials told us the agency created the Office of Data Exchange in January 2013 to clarify, simplify, and strengthen existing data exchange programs. As part of this effort, it is looking at how SSA makes decisions regarding access to its death data, as well as how it shares the data with other agencies, and is monitoring the data exchange agreements. Any agency that does not access SSA’s full death file can instead access the publicly-available DMF. Agencies can purchase a DMF subscription through the Department of Commerce’s National Technical Information Service (NTIS), which reimburses SSA for the cost of providing the file. In accordance with the Act, SSA excludes state-reported death records from the DMF. Federal entities that purchase the DMF from NTIS include, among others:

- Department of Justice
- Department of Homeland Security
- Drug Enforcement Administration
- National Institute on Occupational Safety and Health
- Veterans Affairs medical facilities

The death information included in the DMF is less complete and likely less accurate than that contained in SSA’s full death file, which may result in federal agencies that use the DMF receiving less useful information than agencies that use the full death file. 
According to SSA officials, agencies that purchase the DMF have access to 10 percent fewer records overall than agencies with access to the full death file due to the removal of state-reported deaths. Moreover, SSA officials said they expect the percentage of state-reported deaths as a proportion of all of SSA’s death records to increase over time, which could lead to a greater portion of death data being removed each year to create the DMF. For example, for deaths reported in 2012 alone, the DMF included about 40 percent fewer death records than what was included in SSA’s full death file. As a result, agencies that purchase the DMF will continue to access fewer records over time than those that obtain SSA’s full death file. In addition, because the deaths reported through EDRS by states are generally more accurate, it is likely that federal agencies using the DMF would encounter more errors than agencies using SSA’s full death file. In fact, the SSA OIG found that approximately 98 percent of deaths that SSA erroneously included in its death file were reported by non-state sources. It is not SSA’s practice to proactively notify agencies that may be eligible for access to the complete set of death information. Officials explained that distributing death information to other federal agencies is not a part of SSA’s mission, nor is it an activity for which SSA receives an appropriation. As a result, some agencies may not know to request the full death file directly from SSA and may be relying on the less complete, less accurate DMF to assist them in administering their programs. In at least one case, an agency administering federal benefit-paying programs was using the less comprehensive DMF to match against its payment systems until it received access to the full file in January 2013. In a June 2013 report, we found that two agencies within the U.S. Department of Agriculture (USDA) were not matching beneficiary lists against SSA’s full death file. 
We spoke with program officials at another benefit-paying agency that uses the DMF—the Department of Labor’s Division of Energy Employees Occupational Illness Compensation, within the Office of Workers’ Compensation Programs—who told us they did not know until very recently that obtaining the full set of death data was an option. Under the Act, one condition of receiving SSA’s full death data is that the recipient agency reimburses SSA for the reasonable cost of sharing death data. However, factors including legal requirements and a quid pro quo arrangement have resulted in varying projected reimbursement amounts for different agencies (see table 1). Some agencies do not reimburse SSA at all. For example, VA is not required to provide reimbursement by statute, while OPM provides federal retirement data to SSA that is critical to its mission, and the agencies have agreed that the expenses involved in the exchanges are reciprocal. However, SSA officials were unable to point to any reciprocity study supporting this decision. For other agencies, some of these differences in projected reimbursement amounts cannot fully be explained by the frequency with which the agencies expect to receive the data. For example, as noted in Table 1, CMS expected to receive updates to the full death file weekly from SSA and CMS officials told us the agency expected to reimburse SSA $9,900 in fiscal year 2013. RRB similarly expected to pay $9,000, despite expecting to receive the file less frequently—monthly, rather than weekly. At the same time, IRS expected to receive weekly updates to the full file plus the full file annually in fiscal year 2013, and PBGC expected to receive the file with the same frequency for fiscal year 2014. However, IRS expected to pay more than $87,000, while PBGC’s expected reimbursement amount was $70,000. 
While the reimbursement amounts for agencies are sometimes included in the inter-agency agreements governing how SSA provides its full death file, the agreements lack information on how these amounts were determined. According to SSA officials, these agreements specify the permissible purpose for using the death data and limitations on sharing the data within the agency. However, according to officials we spoke with at several agencies that receive the full death file, SSA staff did not provide an explanation for reimbursement amounts. SSA officials told us they calculate a detailed breakdown of expenses in an internal document, but provide only a summary of these expenses in the estimates and billing statements they send to agencies. As a result, recipient agencies do not know the factors that lead to the reimbursement amounts they are charged, which could prevent them from making informed decisions based on the amount they are spending. According to federal internal control standards, financial information is something agencies should communicate for external uses, because it is necessary to determine whether agencies are meeting goals for accountability for effective and efficient use of resources. In addition, SSA officials told us that because the Act provides for SSA to be reimbursed for its costs, the agency will not negotiate the reimbursement amounts if a prospective recipient agency indicates unwillingness to pay the quoted amount. In one case, SSA officials told us that while it approved a request for access to the file from the OIG for the U.S. Department of Health and Human Services, the two entities never finalized an agreement because the OIG determined it wanted to look for a lower-cost option for obtaining death information. 
Federal benefit programs’ need for accurate administrative data, such as death information, is increasingly evident in an environment of continuing budget shortages, where improper payments due to inaccurate information cost taxpayers billions of dollars in fiscal year 2012. Because of its mission, SSA is uniquely positioned to collect and manage death data at the federal level. SSA already has a responsibility to ensure that this information is as accurate and complete as possible for its own beneficiaries. Further, proposed legislation, if enacted, would require SSA to disseminate full death data to a number of additional eligible federal agencies. Only with more accurate and complete data can these agencies reduce the risk of paying deceased beneficiaries. However, because SSA has never analyzed the risk posed by errors or processes that could result in errors, it is not fully aware of steps that would be needed to address them. As a result, SSA and other federal agencies that use the full death data and the DMF are potentially vulnerable to making improper payments. Similarly, the absence of written guidelines for determining which agencies can access the full death data may impede federal agencies’ ability to obtain that information in a timely and efficient manner. Finally, SSA’s approach to calculating and charging agencies for death data lacks transparency about the fact that federal agencies pay varying amounts for the same information. In such a setting, there is a risk that federal agencies that could otherwise benefit from death information will decline to participate, whether due to confusion over SSA’s access protocols or uncertainty concerning its financial reimbursement policies. In order to enhance the accuracy of and ensure appropriate agency access to SSA’s death data, we recommend that the Social Security Administration’s Acting Commissioner direct the Deputy Commissioner of Operations to take the following three actions: 1. 
To be more informed about ways to improve the accuracy and completeness of its death information, conduct a risk assessment of SSA’s death information processing systems and policies as a component of redesigning SSA’s death processing system. Such an assessment should identify the scope and extent of errors, and help SSA identify ways to address them. In addition, assess the feasibility and cost effectiveness of addressing various types of errors, given the risk they pose. 2. To clarify how SSA applies the eligibility requirements of the Social Security Act and enhance agencies’ awareness of how to obtain access, develop and publicize guidance it will use to determine whether agencies are eligible to receive SSA’s full death file. 3. To increase transparency among recipient agencies, share a more detailed explanation of how it determines reimbursement amounts for providing agencies with death information. We provided a draft of this report to the Social Security Administration (SSA), the National Technical Information Service, the Department of Treasury (Treasury), the Department of Defense, the Department of Labor, the Centers for Medicare & Medicaid Services (CMS), the Internal Revenue Service (IRS), the Office of Personnel Management, the Pension Benefit Guaranty Corporation, and the Office of Management and Budget (OMB) for review and comment. SSA officials provided written comments, which are reproduced in appendix II and described below. IRS officials provided technical comments that further supported the impact of late-reported deaths on federal users of SSA’s death data, and we incorporated an example they provided into that discussion. The Department of Labor, the Office of Personnel Management, the Pension Benefit Guaranty Corporation, and OMB also provided technical comments, which we incorporated in the report as appropriate. CMS, the Department of Defense, Treasury, and the National Technical Information Service had no comments. 
In its comments, SSA partially agreed with our first and third recommendations and disagreed with our second. In response to our first recommendation, SSA agreed to perform a risk assessment as part of its death information processing system redesign project, but raised concerns about performing risk assessments for other users of the death data. In making this recommendation, we did not intend for SSA to perform risk assessments for other agencies’ programs. However, we believe that by assessing the risks of inaccuracies in its death data, SSA’s efforts could shed light on risks posed to other agencies’ programs in addition to its own. To clarify this, we deleted the specific reference to other agencies. SSA also partially agreed with our third recommendation, stating that it has implemented improvements in its estimating procedures for future reimbursable agreements to ensure consistent estimates for all customers. However, the agency stated that it is not a typical government business practice to share these detailed costs for reimbursable agreements. We are encouraged that SSA has made efforts to standardize the estimates it shares with its federal partners, though we have not had the chance to evaluate their effectiveness, since these efforts were made recently. Also, we recognize that there may be limitations on the type of cost details SSA can provide to recipient agencies. However, we continue to believe that more transparency in conveying the factors that lead to the estimated and final reimbursement amounts recipient agencies are charged could help them make more informed decisions. SSA disagreed with our second recommendation, stating that each request to obtain the full death file is unique, and that officials must review them on a case-by-case basis to ensure compliance with various legal requirements. 
It also expressed concern that developing this guidance as we recommended would require agency expenditures unrelated to its mission in an already fiscally constrained environment. We appreciate that agencies may base their request for the full death file on different intended uses, and support SSA’s efforts to ensure compliance with all applicable legal requirements. However, we continue to believe that developing this guidance could help to ensure consistency in SSA’s future decision making by the new Office of Data Exchange, as well as enhance agencies’ ability to obtain the data in a timely and efficient manner. We do not expect such guidance, which could include information such as the factors SSA considers in deciding whether an agency provides federal benefits, would restrict SSA’s flexibility. SSA also outlined two general concerns with the body of the report. First, it expressed concern that we inaccurately described SSA officials’ reasons for determining that the agency could not provide Treasury with full death data for the Do Not Pay Initiative. We made revisions to the report to more accurately describe SSA’s reasoning. Second, SSA was concerned about the use of estimated reimbursement costs in Table 1 because these costs fluctuate throughout the year. SSA suggested instead using figures it provided reflecting the actual costs from fiscal year 2012. The agency also noted that Table 1 contains incorrect information related to the frequency at which the agencies receive the file. While we agree that actual costs are inherently more accurate than estimated costs, we chose to use estimated costs because this is the information federal agencies receive when they are deciding whether and how to obtain death data. In response to SSA’s assertion that our table contains incorrect information, we have added information to the table regarding whether agencies expected to receive the annual file. 
We followed up with an official, who clarified that our previous table was incomplete because, for some of the agencies, it lacked information about receipt of the annual full file. SSA also provided technical comments, which we incorporated in the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the heads of the agencies listed above. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202)512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report examines (1) how the Social Security Administration (SSA) obtains death reports for inclusion in the death data it maintains and steps it takes to ensure these reports are accurate; and (2) the factors affecting federal agency access to SSA’s death data. To address these objectives, we reviewed applicable federal laws and SSA procedures, as well as relevant reports and evaluations, such as reports from multiple Offices of Inspectors General. We interviewed SSA officials regarding how SSA obtains and processes death reports and maintains and shares its death information. We also interviewed representatives of some entities that provide death reports to SSA. We obtained and reviewed available corroborating documentation such as the data sharing agreements SSA made with other federal agencies. 
To evaluate SSA’s processes for obtaining, processing, and sharing death information, we reviewed standard criteria, such as the Standards for Internal Control in the Federal Government. We performed independent testing of SSA’s death data to identify specific types of errors, such as dates of birth that followed dates of death. We also drew a randomly selected but non-generalizable sample of cases that SSA removed from the death information for further review in order to identify potential explanations for that action. Finally, we interviewed officials at other federal agencies that use SSA’s death information about how they access and use it. We tested for specific types of errors within SSA’s death data in several different ways using the full death information file; however, we did not attempt to identify all possible errors in the death data. Our reliability tests included identifying cases in which the date of birth was listed as occurring after the date of death. We also looked for cases involving deaths at very old ages—115 and higher—and cases of death that occurred before 1936 (when account numbers were first issued for administering Social Security programs) after observing such cases in a publicly-available version of the Death Master File (DMF). We then systematically tested for incidences of these occurrences in the full death information file. As part of our tests to identify deaths that occurred at age 115 or older and those that occurred before 1936, we examined the monthly update file of cases SSA added in March 2013, which represent new death reports received by SSA. In conducting our analysis, we found that a total of 539 deaths added to the death data during this month reportedly occurred in years prior to 2012 (and another 9,462 occurred in 2012). At SSA’s request, we provided a sample of these cases to SSA staff to investigate. 
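The reliability tests just described are simple field-level edit checks. The sketch below shows one way such checks could be expressed; the function and field names (`date_of_birth`, `date_of_death`) are hypothetical and do not reflect SSA's actual record layout or our testing code:

```python
from datetime import date

def flag_anomalies(record):
    """Return the reliability-test flags a death record trips, if any.
    Illustrative only; thresholds mirror those described in the text."""
    dob, dod = record["date_of_birth"], record["date_of_death"]
    flags = []
    if dod < dob:
        flags.append("death_before_birth")
    # Age at death, adjusted down if the death fell before the birthday
    age_at_death = dod.year - dob.year - ((dod.month, dod.day) < (dob.month, dob.day))
    if age_at_death >= 115:
        flags.append("age_115_or_older")
    if dod.year < 1936:  # SSNs were first issued in 1936
        flags.append("death_precedes_ssn_program")
    return flags

# Example: a record whose recorded death precedes its recorded birth
rec = {"date_of_birth": date(1950, 6, 1), "date_of_death": date(1949, 1, 1)}
print(flag_anomalies(rec))  # ['death_before_birth']
```

Running checks like these over each record in the full file would surface the three categories of potentially erroneous records described earlier.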
We selected and provided SSA lists of those cases in which the deaths were reported to have occurred in 1976 and 2004—a total of 22 cases. We chose these two years because each included a sufficiently large number of cases to allow us to potentially identify patterns with respect to whether SSA had been paying benefits to deceased beneficiaries. We also chose two years that were nearly three decades apart to determine if there were any differences due to time period variation. SSA was unable to research these cases individually because of time constraints, but provided explanations of possible reasons why the agency receives reports this many years after a death. SSA also produces weekly and monthly update files listing deaths to be deleted from and death reports to be added to its death information files. To identify possible reasons for deleting deaths, we drew a random sample of 97 cases from the monthly update files produced from February 2012 through January 2013. If all cases in this sample were reviewed, we would have been able to make generalized observations about all of the deleted cases in this time period. We provided this list of cases in the order selected and requested that SSA research them—beginning at the top of the list—to determine if it had any record of the reason for deleting the case from its death data. To identify possible relationships with other characteristics, we also asked officials to provide, if available, the source of the death report, and whether or not the listed decedent was a Social Security beneficiary. We were satisfied we had a sufficient number of cases after SSA had completed work on 46 of the 97 cases. While still randomly selected, this smaller sample was non-generalizable. 
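The sampling approach described above amounts to a fixed-size simple random draw, worked in selection order until enough cases have been researched. A minimal sketch under those assumptions (the case identifiers and count are placeholders, not our actual data):

```python
import random

def draw_review_sample(population, sample_size, seed=None):
    """Simple random sample without replacement, returned in selection
    order so reviewers can work from the top of the list until a
    sufficient number of cases has been completed."""
    rng = random.Random(seed)
    return rng.sample(population, min(sample_size, len(population)))

# Example: 97 cases drawn from roughly a year of deletion records
deleted_cases = [f"case-{i}" for i in range(8200)]
sample = draw_review_sample(deleted_cases, 97, seed=1)
print(len(sample))       # 97
print(len(set(sample)))  # 97 -- no duplicates: sampling is without replacement
```

Because review stopped after 46 of the 97 cases, the resulting subset remained random but could not support generalized statements about the full population, as the text notes.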
In recognition of the time and resources SSA was committing to this work, we determined that the 46 cases would be sufficient to describe characteristics of these cases, even though we would not be able to make generalized statements about all deleted cases in our population. Of these 46 cases, SSA officials were able to determine the reasons for deletion in 28 cases, the source of the death report in 13, and the beneficiary/non-beneficiary status in all 46. They were able to identify all three of these characteristics in 11 of the cases. For all of the tests and sampling just described, we used the full death data file. We determined that the data we used was sufficiently reliable for our reporting purposes. The computer programming we used was checked by a second programmer for accuracy, giving us further assurance that the results we present in the report are reliable. To assess the factors affecting agencies’ access to SSA’s death data, we interviewed officials at seven federal agencies that used the death data—either the full death file or the Death Master File (DMF). We obtained the list of federal agencies that obtain the full death file directly from SSA, as well as the list of federal entities that purchase the public DMF from the National Technical Information Service (NTIS). We obtained the former through prior discussions with SSA officials and congressional testimonies. For those federal entities that purchase the public DMF, we requested a list of federal customers from officials at NTIS. According to NTIS officials, they had to compile the list manually because, prior to our request, they had no business reason to separate federal customers from other customers. They described their manual compilation process as looking through the NTIS list of approximately 800 DMF subscription customers one by one and determining, for each one, whether it represented a federal customer by looking at the name and email address.
Officials then sent us a list of 27 federal customers. Our primary criterion for selecting six of the seven agencies to interview was whether they administered programs that pay benefits. We selected the following four benefit-paying agencies that obtain SSA’s full death file because we wanted to gain an understanding of agencies’ experience with accessing and using the full death file:

Centers for Medicare & Medicaid Services (CMS)

Department of Defense/Defense Manpower Data Center (DMDC)

Internal Revenue Service (IRS)

Office of Personnel Management (OPM)

We based this selection of four agencies on their reported improper payment amounts from 2012, focusing on those with higher amounts, such as CMS and IRS. We also selected one program within a benefit-paying agency—Department of Labor’s Division of Energy Employees Occupational Illness Compensation within the Office of Workers’ Compensation Programs—that was purchasing the DMF rather than obtaining the full file directly from SSA. We sought to understand this agency’s general experience using the DMF, as well as whether it had ever tried to obtain the full file from SSA. Additionally, we selected the Pension Benefit Guaranty Corporation (PBGC) for interviews because it transitioned from purchasing the DMF to obtaining SSA’s full death file during the course of our review, so officials had the unique perspective of receiving both files. For the seventh agency, we interviewed officials from the U.S. Department of the Treasury’s (Treasury) Do Not Pay Business Center, even though it does not pay benefits, because of its program goal to prevent improper payments. Also, we had learned that Do Not Pay officials had previously requested—and were denied—access to SSA’s full death file, and we wanted to better understand the circumstances of that interaction.
One limitation of the approach we used to identify all federal users of SSA’s death data is the subjectivity with which NTIS officials judged its customers to be associated with federal agencies. As a result, the list we obtained of federal DMF customers may have been incomplete. However, based on our review of reports on improper federal payments and interviews with SSA officials, we are confident that the information we collected from officials at the agencies we selected accurately represents federal customers’ experience obtaining and using SSA death data. We conducted this performance audit from November 2012 to November 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Lori Rectanus, Acting Director; Jeremy Cox, Assistant Director; Keira Dembowski; Joel Marus; and Sara Pelton made significant contributions to this report. Also contributing to this report were Sarah Cornetto, Holly Dye, Justin Fisher, Alex Galuten, Mitch Karpman, Mimi Nguyen, Almeta Spencer, Walter Vance, Michelle Loutoo Wilson, and Amber Yancey-Carroll. GAO, Farm Programs: USDA Needs to Do More to Prevent Improper Payments to Deceased Individuals, GAO-13-503 (Washington, D.C.: June 28, 2013). GAO, Management Report: Improvements Are Needed to Enhance the Internal Revenue Service’s Internal Controls, GAO-13-420R (Washington, D.C.: May 13, 2013). GAO, Social Security Administration: Preliminary Observations on the Death Master File, GAO-13-574T (Washington, D.C.: May 8, 2013).
As the steward of taxpayer dollars, the federal government must guard against improper payments. Federal agencies may avoid paying deceased beneficiaries by matching their payment data with death data SSA maintains and shares. In addition, recent legislation has established additional requirements for federal agencies to use death data to prevent improper payments. However, the SSA Office of Inspector General has identified inaccuracies in SSA's death data, which could diminish its usefulness to federal agencies. GAO was asked to examine SSA's death data. This report explores (1) how SSA obtains death reports and steps it takes to ensure death reports are accurate; and (2) factors affecting federal agency access to SSA's death data. In addressing these objectives, GAO interviewed SSA officials and representatives of entities reporting or using the death data. GAO reviewed applicable federal laws, SSA procedures, and reports. GAO also performed independent testing of SSA's death data for certain errors. The Social Security Administration (SSA) receives death reports from multiple sources, including state vital records agencies (states), family members, and other federal agencies to create its set of death records. In accordance with the Social Security Act (Act), SSA shares its full set of death data with certain agencies that pay federally-funded benefits, for the purpose of ensuring the accuracy of those payments. For other users of SSA's death data, SSA extracts a subset of records into a file called the Death Master File (DMF), which, to comply with the Act, excludes state-reported death data. SSA makes the DMF available via the Department of Commerce's National Technical Information Service, from which any member of the public can purchase DMF data. Certain procedures that SSA uses for collecting, verifying, and maintaining death reports could result in erroneous or untimely death information. 
For example, SSA does not independently verify all reports before including them in its death records. In accordance with its policy, the agency verifies death reports only for Social Security beneficiaries in order to stop benefit payments, and then verifies only those reports from sources it considers less accurate, such as other federal agencies. GAO identified instances where this approach led to inaccurate data. For example, GAO’s analysis of a sample of death records SSA erroneously included in its death data found that these errors may not have occurred if SSA had verified them. In other cases, when data provided do not match SSA’s records, SSA typically does not record these deaths. According to federal internal control standards, agencies should conduct risk assessments of factors impeding their ability to achieve program objectives, such as data errors that could result in improper benefit payments. Agency officials told us SSA has not performed such risk assessments, but has initiated work on a full redesign of its death processing system. SSA lacks written guidelines other than the language in the Act for determining whether agencies are eligible under the Act to access the full death file, and it does not share with agencies how it determines the reasonable cost of sharing the data, which recipients of the full file are required to reimburse SSA. Because SSA has not developed or shared guidance on how it determines agency eligibility, potential recipients may be confused about whether they qualify. Ensuring appropriate access is important because the DMF contains about 10 percent fewer records than the full death file, and officials expect that difference to increase over time. Additionally, there is a lack of transparency about the reimbursement amounts recipients are expected to pay. As a result of not knowing the factors that lead to the reimbursement amounts, agencies may not have sufficient information to make informed decisions.
We found that SSA provided differing estimates for agencies' reimbursement amounts. Some variation is due to legal requirements and a quid pro quo arrangement. For example, one agency does not reimburse SSA for the cost of providing the death data because it provides SSA with its own data, and the agencies have agreed that the expenses involved in the exchanges are reciprocal. However, for some agencies this variation could not fully be explained by the frequency with which they expected to receive the data. For example, two agencies expected to pay similar reimbursement amounts in 2013 despite expecting to receive the file at different frequencies. GAO recommends that SSA assess risks associated with inaccuracies; develop and publicize guidance it will use to determine agency access under the Act; and share detailed reimbursement estimates. SSA partially agreed with the recommendations to assess risks and share detailed reimbursement estimates, but did not agree to develop and publicize guidance, stating that each request is unique. GAO believes that the recommendation remains valid as discussed in the report.
The authority to monetize food aid was established by the Food Security Act of 1985. The act allowed implementing partners that received nonemergency food aid under USAID’s Food for Peace program and USDA’s Food for Progress program to monetize some of the food in recipient countries and use the proceeds to cover associated shipping costs. In 1988, the authorized use of monetization funds was expanded to incorporate funding of food-security-related development projects, and in 1995, a minimum monetization level for nonemergency food assistance was set at 10 percent, which was then increased to 15 percent in 1996. The 2002 Farm Bill authorized the McGovern-Dole International Food for Education and Child Nutrition Program and allowed it to raise cash through monetization. (For a description of these program authorities, see appendix III.) The practice of selling commodities for cash to fund development programs originated in part from U.S. government farm subsidies that contributed to a surplus of agricultural commodities owned by the U.S. government. However, the U.S. government no longer has surplus agricultural commodities. Current monetization requires the U.S. government to purchase the commodities from the commercial market and ship them abroad for implementing partners to sell in another market to generate cash. Since the 2002 Farm Bill amended the Food for Peace Act, neither USAID nor USDA has been required to achieve a set level of cost recovery. Rather, the agencies are required, by statute, to achieve “reasonable market price” for sales of food aid in recipient countries. Prior to 2002, USAID sought to achieve an average cost recovery that was calculated based on the following formula: either (1) 80 percent of commodity value plus freight value, including associated transport and marketing costs, or (2) 100 percent of free alongside ship price on its monetization transactions.
USDA’s requirement was to adhere to reasonable market price as its benchmark. According to the conference report for the 2002 Farm Bill, the change from the cost recovery formula to reasonable market price for USAID was made to address two primary concerns. The first concern was that the cost recovery formula requirement was too inflexible, and could either unfairly punish participants where market forces were beyond their control, or not reward situations where the market price was above the formula value. The second concern was that, since both USAID and USDA monetize food aid, sometimes in potentially overlapping markets, having a cost recovery requirement for USAID but not for USDA could cause inconsistencies in monetization and potentially penalize one or the other agency. The change to a single requirement of reasonable market price for both agencies was intended to establish similar results in determining sales prices. Under a new strategy to address global hunger called the Feed the Future initiative, the administration sought the establishment of a Community Development Fund (CDF) to decrease U.S. government reliance on the monetization of food aid to fund development activities. The CDF was designed to expand efforts to narrow the gap between humanitarian and development assistance in areas that support food security. However, the mechanisms to use the fund to replace monetization were not included in the law, and the CDF therefore cannot be used to decrease current levels of monetization. According to USAID officials, the agency will continue to work with Congress on future budgets so that appropriate mechanisms to decrease reliance on monetization are incorporated in the law. A $75 million request to fund the CDF was included in the fiscal year 2011 Foreign Operations Budget. While Congress has appropriated funds for the CDF, the final appropriation has not been made public. In fiscal year 2010, the United States spent about $2.3 billion to provide a total of 2.5 million metric tons of food aid commodities to food-insecure countries.
Of that amount, almost $800 million was spent to provide 890,000 metric tons of nonemergency food aid through USAID and USDA (see fig. 1). This assistance is provided through both monetization and direct distribution, where commodities are provided directly to beneficiaries through implementing partners. While U.S. food aid legislation mandates that a minimum of 15 percent of USAID’s Food for Peace nonemergency assistance be monetized, actual levels of monetization far exceed the minimum. In fiscal year 2010, more than 313,000 metric tons of food aid were monetized under USAID’s Food for Peace program, accounting for 63 percent of food aid tonnage under that program. In fiscal year 2010, USDA monetized more than 229,000 metric tons of food aid under the Food for Progress program, accounting for 95 percent of food aid tonnage under that program. Monetization has been less prevalent under the McGovern-Dole International Food for Education and Child Nutrition Program since the end of its pilot program in 2003, due to an increase in the amount of cash provided along with food aid for direct distribution. In fiscal year 2010, the McGovern-Dole International Food for Education and Child Nutrition Program did not monetize any food aid shipments. According to data from USDA’s Kansas City Commodity Office (KCCO), between fiscal years 2008 and 2010, more than 1.3 million metric tons of food aid were programmed for monetization in 34 countries (see fig. 2). The countries in which the largest volumes of commodities were programmed to be monetized during that time period are Bangladesh (220,590 metric tons), Mozambique (202,200 metric tons), Haiti (100,000 metric tons), and Uganda (88,400 metric tons). Together, these four countries accounted for 45 percent of all food aid programmed to be monetized. During that same time period, wheat was the commodity most often programmed for monetization, accounting for about 77 percent of all monetization.
Other commodities programmed to be monetized during the same period include soy bean meal, milled rice, vegetable oil, and crude soybean oil. (For a complete list of commodities and volumes programmed to be monetized by country, see appendix IV.) Monetization is conducted by implementing partners, usually NGOs that receive grants from USAID or USDA to monetize agreed-upon commodities in certain countries. Monetization grants generally provide development resources over a 3- to 5-year period. The process begins with a call for applications from either USAID or USDA, to which implementing partners respond by submitting grant proposals for development programs that are to be funded in part with monetization proceeds. USAID and USDA independently issue calls for applications and approve applications at different times, based on different guidelines and priorities. Grant proposals include, among other things, information on the commodity to be monetized, commodity volumes requested, estimated sales price, estimated cost recovery, considerations of market impact assessments, and projects that will be funded based on the estimated sales proceeds. Implementing partners that receive grants have the responsibility to manage and oversee the monetization process. As part of their responsibilities, implementing partners must secure a buyer in the recipient country before a call forward (or purchase order) can be approved by the relevant agency. After either USAID or USDA receives a call forward request from the implementing partner in their Web Based Supply Chain Management (WBSCM) system, the agency approves or disapproves the request, which is then routed to KCCO. KCCO purchases the requested commodities from U.S. producers in the United States and ships them to the implementing partner in the recipient country. To adhere to cargo trade preference requirements, DOT assists in identifying qualified ocean carriers to ship the commodities to the recipient country. 
The commodities are delivered to the implementing partner in the recipient country, where the implementing partner executes the sales contract with the buyer and collects payment. The implementing partner uses the proceeds to implement the development projects. Figure 3 depicts the general steps in the monetization process, from submitting a grant proposal to obtaining proceeds to completing development projects. (See appendix V for more information). According to an implementing partner we interviewed, the monetization process consists of nearly 50 substeps, including steps to complete the application, conduct market assessments, coordinate requests and shipment, identify buyers and obtain bids, deliver commodities, and collect payments. While implementing partners are not required to follow a particular process to conduct food aid sales for monetization, the two most common approaches reported by implementing partners are the following: Several implementing partners might form a consortium in which one of the partners serves as the selling agent. Consortiums are often formed when several implementing partners obtain grants for the same country to monetize the same commodity. Generally, one of the implementing partners in the consortium takes the lead in conducting monetization sales. The lead implementing partner is responsible for identifying the buyers; preparing a single call forward drawing from each partner’s food allocation; arranging for commodities to be shipped in a single shipment; finalizing the sale in-country; and distributing proceeds among participating consortium members, as appropriate. Typically, the lead implementing partner charges a fee of 3 to 5 percent of total sales to handle monetization, while in some cases, the lead is rotated among consortium members. Fifteen of the 29 implementing partners we interviewed reported being part of a monetization consortium. 
A single implementing partner might independently sell the commodities granted to it. When a single implementing partner monetizes only its own commodities, it must hire or train staff to conduct the sales or contract a selling agent to sell the food. Selling agents that we interviewed generally charge a fee of 3 to 5 percent of monetization sales. Fourteen of the 29 implementing partners we interviewed said that they monetize only the food granted to their organization. Implementing partners generally sell commodities to private buyers in the recipient countries in the open market. Sales are generally conducted through a public tender process organized by the implementing partner or its selling agent, in which open bidding takes place. This is the preferred method for both USAID and USDA, on the assumption that a public tender process will most likely produce a competitive sales price. In some cases, however, implementing partners sell commodities through direct negotiation, where the implementing partners or their agents enter into a one-on-one dialogue with individual buyers. According to USDA officials, monetization sales through direct negotiation are only permitted when the public tender process is not feasible or does not initially result in a sale. In some cases, implementing partners work with the recipient country’s national government to conduct monetization. For example, an implementing partner may enter into direct negotiation with the government, as in the case of Bangladesh, where the government is the buyer and purchases all USAID Food for Peace nonemergency food aid that is monetized in the country. In Haiti, the government requires that monetization transactions to private buyers be facilitated through a government entity called the Monetization Bureau. According to an implementing partner, the bureau must approve each transaction and charges a monetization fee of 2 to 5 percent.
Development projects funded through monetization are expected to address food insecurity in recipient countries. According to USAID guidance, goals of nonemergency food aid programming are to reduce risks and vulnerabilities to food insecurity and increase food availability, access, utilization, and consumption. Within this framework, monetization is built around two main objectives—to enhance food security and generate foreign currency to support development activities. Therefore, the range of activities that USAID funds through monetization includes projects to improve and promote sustainable agricultural production and marketing; natural resource management; nonagricultural income generation; health, nutrition, water and sanitation; education; emergency preparedness and mitigation; vulnerable group feeding; and social safety nets. According to USDA guidance, commodities for monetization are made available for use in developing countries and emerging democracies that have made commitments to introduce or expand free enterprise elements in their agricultural economies. Within these constraints, USDA gives priority consideration to proposals for countries whose economic and social indicators demonstrate the need for assistance. These indicators include income level, prevalence of child stunting, political freedom, USDA overseas coverage, and other considerations such as market conditions. Therefore, according to USDA, development projects funded through monetization by the agency’s Food for Progress program should focus on private sector development of agricultural sectors, such as improved agricultural techniques, marketing systems, and farmer education. Figure 4 provides examples of USAID and USDA projects funded through monetization in countries that we visited. Proceeds generated through monetization to fund development projects are less than what the U.S. government expends to procure and ship the commodities that are monetized.
USAID and USDA are not required to achieve a specific level of cost recovery for their monetization transactions. Instead, they are only required to achieve reasonable market price, which has not been clearly defined. More than one-third of the monetization transactions we examined fell short of import parity price (IPP), a quantifiable measure of reasonable market price. Various factors can adversely affect cost recovery. For instance, ocean transportation constitutes a substantial cost to the U.S. government, and cargo preference requirements raise this cost even further. Furthermore, USAID and USDA conduct only limited monitoring of the sales prices, though monitoring is necessary to ensure that the implementing partners generate as much funding as possible for their development projects. The agencies’ monitoring efforts are further hindered by deficiencies in their reporting and information management systems. Finally, implementing partners face increased risk and uncertainty in their project budgets due to long lag times throughout the approval and sales process. Proceeds generated to fund development projects through monetization were less than what the U.S. government expended to procure and ship the monetized commodities. Cost recovery, the ratio between the proceeds the implementing partners generate through monetization and the cost the U.S. government incurs to procure and ship the commodities to recipient countries for monetization, is an important measure for assessing the efficiency of the monetization process in generating development funding. Table 1 shows USAID’s and USDA’s average cost recovery, from fiscal years 2008 through 2010 and fiscal years 2007 through 2009, respectively, as well as their lowest and highest cost recovery transactions. The table also shows the difference, in dollars, between the proceeds from monetization sales to fund development projects and the cost to the U.S. government to procure and ship the commodities.
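The cost recovery measure just defined is simple arithmetic. The sketch below uses illustrative per-dollar figures consistent with the averages reported in this section (about 76 and 58 cents recovered per dollar spent); the function names and numbers are ours, not the agencies' methodology or actual transaction data.

```python
def cost_recovery(proceeds, cost):
    """Ratio of monetization sale proceeds to the U.S. government's cost
    to procure and ship the commodities (1.0 means full cost recovery)."""
    return proceeds / cost

def shortfall(proceeds, cost):
    """Appropriated funds that never become development-project proceeds."""
    return cost - proceeds

# Illustrative per-dollar figures, not real transactions.
usaid_ratio = cost_recovery(0.76, 1.00)  # about 76 cents recovered per dollar
usda_ratio = cost_recovery(0.58, 1.00)   # about 58 cents recovered per dollar
```

At these rates, every appropriated dollar routed through monetization yields roughly 24 or 42 cents less for projects than a direct cash transfer of the same amount would, which is the efficiency question the report's cost recovery analysis addresses.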
(For a detailed discussion of our methodology for calculating cost recovery, see appendix I). We found that between fiscal years 2008 and 2010, USAID achieved an average cost recovery of 76 percent, or about $91 million less in proceeds than what the U.S. government spent on procuring and shipping commodities, over these 3 years. USDA achieved an average cost recovery of 58 percent, or about $128 million less than what was expended between fiscal years 2007 and 2009. Therefore, a combined total of $219 million of appropriated funds was ultimately not available for development projects. Figure 5 shows funds being used for procuring and shipping commodities, with the commodities then being sold for cash, and the difference between the final proceeds and the original expended amounts, for both USAID and USDA. USAID’s cost recovery rates ranged from 34 percent to 165 percent, while USDA’s ranged from 25 percent to 88 percent. While USAID’s monetization transactions most often achieved cost recovery between 60 and 100 percent, USDA’s transactions most often achieved between 40 and 80 percent cost recovery. Fifteen of USAID’s monetization transactions achieved cost recovery greater than 100 percent, meaning that the amount of proceeds generated exceeded the costs the government incurred, while none of USDA’s monetization transactions did so. Figure 6 shows the distribution of cost recovery over the 3 years we examined for these selected monetization transactions by USAID and USDA, respectively. USDA’s level of cost recovery is lower for government-to-government monetization transactions, which accounted for about 18 percent of USDA’s monetization from fiscal years 2007 through 2009. While most grants involving monetization are provided to NGOs and educational institutions, USDA also allows monetization by sovereign governments, known as government-to-government monetization. 
These transactions are completed by host country governments, largely through the same process that is used by other implementing partners. Implementing partners of government-to-government monetization between fiscal years 2007 and 2010 have included Afghanistan, the Dominican Republic, El Salvador, Nicaragua, Niger, and Pakistan. Government-to-government transactions achieved an average cost recovery level of 45 percent from fiscal year 2007 through 2009. Our cost recovery calculations included costs for commodity procurement and ocean shipping but did not include other costs that are not solely associated with monetization. USAID and USDA incur these additional costs when they provide funding for direct distribution and monetization of food aid, as follows: USAID provides its implementing partners with cash from two sources to cover administrative costs other than commodity procurement and ocean shipping costs that are associated with monetization. The first source is internal transportation, shipping, and handling (ITSH), which is cash for shipping and handling the commodities, if necessary, once they arrive at the destination port, to the final point of sale. The second source is funding through Section 202(e) of the Food for Peace Act, which is provided to implementing partners to assist in meeting administrative, personnel, distribution, and other costs associated with Food for Peace programs. Since most Food for Peace grants include both monetized and direct distribution food aid, USAID does not track ITSH and 202(e) specifically for monetization purposes. However, in 2010, ITSH and 202(e) costs for all of USAID nonemergency assistance, including both monetization and direct distribution, were $123.3 million, or about 30 percent of USAID’s total nonemergency costs for the year. 
According to agency officials, USDA provides its implementing partners with cash through Commodity Credit Corporation (CCC) funds, to cover various administrative costs that are associated with food aid. While this money primarily covers administrative costs associated with the implementation of development projects, some of it pays for costs associated with the monetization process. The CCC is an agency within USDA that authorizes the sale of agricultural commodities to other government agencies and foreign governments and authorizes the donation of food to domestic, foreign, or international relief agencies. The CCC also assists in the development of new domestic and foreign markets and marketing facilities for agricultural commodities. According to a USDA official, such costs could include hiring a monetization agent to facilitate a monetization transaction, or the salaries and benefits of the staff that carry out the monetization transaction. USDA provided implementing partners $23 million in CCC funding between fiscal years 2008 and 2010. USDA stated that it provided $3.57 million in a combination of CCC funding and monetization proceeds to cover the administrative costs associated with monetization from fiscal years 2007 through 2009. Neither USAID nor USDA currently has a required minimum cost recovery benchmark for monetization transactions, and there is no specific target that monetization transactions must reach or exceed. Instead, the Food for Peace Act requires that monetization transactions through both USAID and USDA achieve “reasonable market price” in the recipient countries where U.S. commodities are monetized. The statute does not define reasonable market price, and does not refer to a specific cost recovery benchmark. USAID recommends two sales methods in its 1998 Monetization Field Manual—which has not been updated since its issuance—to achieve reasonable market price, but neither of these methods provides a specific metric. 
More than three-quarters of the implementing partners we surveyed said they used the field manual as a source of guidance on monetization. Both USAID and USDA have stated a preference for the first method—conducting sales by public tender—to determine a reasonable market price. According to the field manual, public tender, generally an open auction where traders are allowed to bid on the commodities, allows competitive price information to determine the market price for monetized food aid. When public tender sales are not feasible, the manual recommends direct negotiation between buyers and sellers as a second, alternative method. Both agencies recommend taking into account prices for the same or comparable commodities from other suppliers in the marketplace in order to achieve reasonable market price. Specifically, USAID states that reasonable market price is one which “compares favorably with the lowest landed price or parity price for the same or comparable commodity from competing suppliers.” We found that more than one-third of the monetization transactions we reviewed, carried out in various years and countries, were conducted at prices below the level indicated by a quantitative, objective metric of reasonable market price that can be applied across time, markets, and individual transactions. In the absence of a quantitative benchmark of reasonable market price, we used the prices of comparable commercial imports for a given country, commodity, and year—the IPP referred to in USAID’s guidance. Others in the economics field, including researchers of food aid and monetization, use IPP as a measure of market price in a given country and time frame. Additionally, the World Food Program, the single largest multilateral provider of food aid in the world, uses IPP to determine whether or not to procure its food in a given market, in order to gain an accurate picture of the potential impact the purchase may have. 
Comparing monetization sales prices to the IPP tells us the extent to which the monetization transaction occurred at a fair and competitive market price for commercially imported commodities. We found that more than one-third of the 42 transactions we examined, for which we had IPP and sales price data, had prices lower than 90 percent of the commercial import prices, an indication that they might have been able to achieve higher prices. For example, in 2008, USAID allowed an implementing partner in Burkina Faso to monetize rice at a price that was 67 percent of the IPP, while at the same time in Guatemala, USAID permitted monetization of vegetable oil at 70 percent of the IPP. Ocean freight cost is a significant component of the monetization cost, and due in part to cargo preference requirements, U.S.-flag carriers have higher shipping rates on average than foreign-flag carriers, further lowering cost recovery. The cargo preference mandate requires that 75 percent of U.S. food aid be shipped on U.S.-flag vessels. Another mandate, known as the Great Lakes Set-Aside, requires that up to 25 percent of Title II bagged food aid tonnage be allocated to Great Lakes ports each month. These legal requirements limit competition and potentially reduce food aid shipping capacity, leading to higher freight rates. Figure 7 shows the share of freight costs in food aid procurement and the costs associated with cargo preference for monetized food aid. (For a detailed discussion of our methodology in assessing the costs associated with cargo preference, see appendix II.) Between fiscal years 2008 and 2010, ocean shipping accounted for about one-third, or $235 million, of the cost to procure and ship monetized food aid (see fig. 7). For low-value commodities, such as bulk wheat, ocean shipping costs take up a higher percentage of the total cost. 
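The screen we applied, flagging sales below 90 percent of the import parity price (IPP), can be sketched as follows. The absolute price levels are hypothetical; only the 67 and 70 percent ratios for the Burkina Faso and Guatemala examples come from the text.

```python
# Sketch of the 90-percent-of-IPP screen described above. Absolute price
# levels (USD per metric ton) are hypothetical; the first two ratios
# (67% and 70%) mirror the Burkina Faso and Guatemala examples in the text.

def flag_below_benchmark(transactions, threshold=0.90):
    """Return transactions whose sale price fell below 90% of the IPP."""
    return [t for t in transactions if t["sale_price"] / t["ipp"] < threshold]

transactions = [
    {"country": "Burkina Faso", "commodity": "rice",
     "sale_price": 670.0, "ipp": 1_000.0},          # 67% of IPP
    {"country": "Guatemala", "commodity": "vegetable oil",
     "sale_price": 700.0, "ipp": 1_000.0},          # 70% of IPP
    {"country": "Country X", "commodity": "wheat",
     "sale_price": 950.0, "ipp": 1_000.0},          # 95%: passes the screen
]

for t in flag_below_benchmark(transactions):
    print(f"{t['country']}: {t['commodity']} sold at "
          f"{t['sale_price'] / t['ipp']:.0%} of IPP")
```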
In 15 percent of the monetization transactions between fiscal years 2008 and 2010, shipping costs accounted for more than 40 percent of the total cost of procurement and shipping, amounting to more than $91 million in ocean shipping. For several of these transactions, shipping cost was higher than commodity procurement cost. For example, while it cost $3.9 million to purchase the shipment of 10,000 metric tons of wheat to be sent to Malawi in 2008 for monetization, it cost $4.5 million in ocean shipping. The freight rate for USAID and USDA food aid shipments on foreign-flag carriers was on average $25 per ton lower than the rate on U.S.-flag carriers, controlling for shipping routes, the shipping time and term, and the type of commodities shipped. The difference in freight rate between U.S.- and foreign-flag carriers also depends on the type of commodities shipped. Figure 8 shows the difference in the average freight rate per metric ton between U.S.- and foreign-flag carriers. The freight rate for bulk commodities averaged $8 per ton lower and the rate for non-bulk commodities averaged $30 per ton lower for foreign-flag carriers than U.S.-flag carriers for shipments with the same shipping routes and the same shipping times and terms. We estimate that between fiscal years 2008 and 2010, cargo preference potentially cost the food aid programs approximately $30 million because of the higher rates U.S.-flag carriers charged. When surveyed, 19 of the 29 implementing partners stated that allowing more shipping on foreign-flag carriers would “greatly improve” or “very greatly improve” cost recovery rates. Food aid shipping competition may be further limited by the requirement in the Cargo Preference Act that foreign-built vessels that reflag into the U.S. registry wait 3 years before participating in the transportation of food aid cargo. According to a DOT official, the 3-year requirement was established in 1961 to provide employment opportunities to U.S. 
shipyards by discouraging vessels from reflagging into and out of the U.S. registry. The requirement, which does not apply to the Maritime Security Fleet or to vessels transporting cargos financed by the U.S. Export-Import Bank, seeks to ensure that vessels transporting 75 percent of food aid are not only U.S.-flagged, but also constructed in U.S. shipyards. However, since 2005, U.S. shipyards have built only two new U.S.-flag vessels appropriate for transporting food aid, and these vessels have not been awarded a food aid contract. Further, DOT has no record of an ocean transportation contract awarded to a U.S.-flag vessel that reflagged into the U.S. registry and waited the 3 years prior to applying for food aid contracts. Limited competition contributes to fewer ships winning the majority of the food aid shipping contracts. Based on KCCO data, from fiscal years 2002 to 2010, the number of U.S.-flag vessels awarded food aid contracts declined by 50 percent, from 134 to 67 vessels. In a 2009 report to Congress, USAID and USDA stated that, due to the declining size of the U.S.-flag commercial fleet, USAID and USDA are forced to compete with the Department of Defense and other exporters for space aboard the few remaining U.S.-flag vessels, thereby limiting competition in transportation contracting and leading to higher freight rates. When surveyed about what could be done to improve the monetization process, 13 implementing partners with USAID and 16 with USDA stated that exploring options for lowering transportation costs would lead to “great” or “very great” improvement. 
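A back-of-the-envelope version of the cargo preference cost estimate discussed above: only the average rate differentials ($8 per ton for bulk, $30 per ton for non-bulk commodities) come from the text, while the U.S.-flag tonnages are hypothetical, chosen so the total lands at the roughly $30 million scale the report estimates for fiscal years 2008 through 2010.

```python
# Back-of-the-envelope sketch of the cargo preference cost estimate. Only
# the average rate differentials come from the text ($8/ton bulk, $30/ton
# non-bulk); the U.S.-flag tonnages below are hypothetical.

RATE_DIFFERENTIAL = {"bulk": 8.0, "non_bulk": 30.0}  # USD per metric ton

def cargo_preference_cost(shipments):
    """Extra cost of U.S.-flag over foreign-flag carriage, holding routes,
    shipping times and terms, and commodity type constant."""
    return sum(RATE_DIFFERENTIAL[s["type"]] * s["tons_us_flag"] for s in shipments)

shipments = [
    {"type": "bulk", "tons_us_flag": 750_000},      # hypothetical tonnage
    {"type": "non_bulk", "tons_us_flag": 800_000},  # hypothetical tonnage
]
print(f"${cargo_preference_cost(shipments):,.0f}")  # $30,000,000
```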
USAID requires annual reports on multiyear assistance programs (MYAP), referred to as pipeline and resource estimate proposals, to be submitted for the coming fiscal year. Additionally, implementing partners report their monetization proceeds in Annual Results Reports. These reports include fiscal year levels of metric tonnage to be called forward, anticipated monetization proceeds, and any 202(e) or ITSH funds used for the program, which encompass both direct distribution and monetization activities. In addition, they record the results of any monetization transactions from the previous year. USDA requires semiannual reports, known as Logistics and Monetization Reports. These reports record the metric tonnage of commodity monetized in that time period. In addition, they record the date and price at which the commodity was sold, as well as the date received and a breakdown of the specific use of the funds. As part of the review and evaluation criteria of proposals, USAID’s 1998 Monetization Field Manual requires verification that the amount of money generated in the monetization transaction(s) meets or exceeds the cost recovery benchmark. However, USAID officials said that they no longer hold implementing partners accountable for meeting the cost recovery benchmark of 80 percent referenced in the field manual, because (1) the 2002 Farm Bill changed the requirement to achieving “reasonable market price,” and (2) USAID has not officially reissued the field manual since 1998. Although the USAID mission in-country can recommend against monetization transactions if it disagrees with the sales price analysis conducted by the implementing partner in its attempts to sell the commodity, USAID officials told us that the missions have never made such recommendations. Furthermore, USAID stated that the agency has not established criteria to monitor sale prices. 
According to USAID officials, because food aid monetization transactions are governed by grants, not contracts, the agency cannot be overly directive towards its implementing partners. USAID works to have ongoing conversations with the implementing partners in order to identify potential problems, troubleshoot, and help with potential alternatives. USDA does not have a process to monitor sale prices either. USDA officials in charge of the Food for Progress program told us that they did not know what the level of cost recovery of monetization transactions was and did not have enough information to develop an estimate. USDA officials said that they rely on their agricultural attachés to act as a “reality check” in determining reasonable market price, and determine acceptable cost recovery on a case-by-case basis, looking at the U.S. prices and the circumstances surrounding the sale. The agencies’ monitoring of monetization cost recovery is further hindered by deficiencies in their reporting and information management systems. USAID and USDA acknowledged that their current information systems are not capable of systematically capturing cost recovery information, which would help them monitor the sales prices implementing partners achieve. Both agencies had to manually generate the cost recovery information to fulfill our data request by going through the individual reports submitted by implementing partners to collect the information needed to calculate cost recovery for monetization transactions and inputting it into a spreadsheet for us to conduct our analysis. Furthermore, these spreadsheets contained numerous errors and inconsistencies. Examples include transactions that were recorded in the incorrect year, double-counted, or not counted at all. In other instances, the calculation of cost recovery was incorrect because of erroneous values entered into the cells. In addition, multiple transactions were missing the actual sales prices. 
In our 2007 report, we recommended that both USAID and USDA develop an information collection system to track monetization transactions. Despite this recommendation, both USAID and USDA acknowledged that their current information systems are still not capable of systematically capturing cost recovery information. Both USAID and USDA are in the process of implementing information systems that aim to better capture the information generated by the implementing partners regarding monetization transactions. USAID plans to have its new information system fully operational by summer 2012, including monitoring and evaluation components. USDA’s Food Aid Information System will tie into its procurement, payment, and accounting system—Web Based Supply Chain Management—tracking budgeting and planning, solicitations, proposals and negotiations, payment, and compliance for the Foreign Agricultural Service. The final components of the system are due to come online in fall 2011. Long lag times between a monetization proposal’s approval by USAID or USDA and the time of the commodities’ final sale increase the transaction’s exposure to market volatility. This makes it difficult to accurately project the funding level monetization can generate, and to design and implement the development projects accordingly. Two-thirds of the implementing partners we surveyed said that if they received less funding than expected, they would curtail the scope of their projects as well as the number of beneficiaries served. One implementing partner commented that while monetization transactions must do their best to achieve “reasonable market price,” timing is a constraint. The process of getting a proposal approved, finding a foreign buyer, and conducting an actual sale can be time-consuming, and market conditions can change significantly from when the implementing partners first submitted the proposals. 
For example, when an implementing partner monetized through USDA’s Food for Progress program in Bangladesh, it submitted its initial proposal in August 2008, including the volume and estimated sales prices for the proposed commodity, but the sale of the commodities was not made until 17 months later, in December 2009. Market conditions changed significantly during the process from the time of the initial proposal to the final sale, and the commodity price fell by close to 40 percent, leading to a diminished return on the transaction. The implementing partner’s actual sales price of $800 per metric ton was more than a third less than the estimated price in the original proposal of $1,300 per metric ton. In another example, an implementing partner stated that it wanted to monetize its commodities at a certain point when prices were high, but missed the opportunity to do so due to delays in the approval process. As a result, its cost recovery was lower than estimated. This situation forced the implementing partner to reduce its number of beneficiaries by roughly a third, and eliminate one of its targeted geographical regions within the country. All but 2 of the 29 implementing partners we surveyed reported that they experienced delays during monetization transactions at least “sometimes.” In addition, 19 of the 29 implementing partners we surveyed reported that delivery delays were a factor that hindered their ability to conduct monetization. By law, USAID and USDA must ensure that monetization transactions do not entail substantial disincentive to, or interference with, domestic production or marketing of the same or similar commodities. In addition, the agencies are to ensure that the transactions do not cause disruption in normal patterns of commercial trade. However, we found that the volume programmed for monetization was more than 25 percent of the commercial import volume, in more than a quarter of the cases, increasing the risk of displacing commercial trade. 
As part of an effort to meet its legal requirements, in 2008, USAID hired a private contractor as an independent third party to conduct market analyses and recommend commodities and volumes to monetize without causing adverse market impact. Separately, USDA conducts assessments called the Usual Marketing Requirement (UMR) that determine the maximum volume of a given commodity to be programmed for monetization without disrupting commercial trade, and relies on its implementing partners to conduct broader market analyses to address the Bellmon requirements. However, we found that USAID’s assessments were conducted for a limited number of countries and have not yet been updated to reflect changing market conditions. We also found that USDA’s UMRs contained weaknesses, such as a lack of methodology and errors in formulas. Further, we found that USAID’s and USDA’s recommended limits for monetization differed significantly from each other, and that the volume of commodity programmed for monetization by the agencies has at times exceeded the recommended limits. Finally, because neither agency conducts post-monetization market impact evaluations, they cannot determine the effectiveness of steps taken to ensure that monetization transactions do not cause adverse market impacts and what, if any, adverse impacts may have resulted. These adverse impacts may include discouraging food production by local farmers, which in turn could undermine the food security goals of the development projects funded by monetization. By law, USAID and USDA are required to ensure that monetization transactions do not lead to adverse market impacts, such as causing disincentives to, or interference with, domestic production or marketing of the same or similar commodities. Additionally, the agencies are to ensure that the transactions avoid causing disruption in normal patterns of commercial trade, which is also an adverse market impact. 
Monetization has the potential to discourage food production by local farmers, and as a result may undermine the broader agricultural development and food security goals of the Food for Peace and Food for Progress programs. For example, when large volumes of food are monetized at once, the prices of the same or competitive commodities in the recipient country may be depressed, creating disincentives to local producers and possibly resulting in a decline in local production. The risk of depressing prices increases when the commodities arrive while supply is at a peak, such as during a harvest period for the same or competitive commodities. Monetization also has the potential to displace commercial trade, especially if the monetized food is sold on more favorable terms than what is available commercially. Buyers in the recipient country, such as domestic importers and millers, would then have an incentive to purchase monetized commodities over commercial ones. When sold in significant volumes, monetized food has the potential to substantially reduce demand for exports from the United States, other developed countries, and regional partners, thus hurting competitive commercial trade. Furthermore, if local production decreases or if commercial trade is displaced, repeated monetization of food aid commodities over time can increase the risk of market dependency on this source of food. Food aid programmed for monetization constituted more than 25 percent of commercial import volume in more than a quarter of cases for certain commodities between fiscal years 2008 and 2010. As mentioned earlier, monetized food aid has the potential to displace commercial trade from developed countries or regional partners, a cost that impacts U.S. agribusiness and other exporters of the same commodity. Monetizing large volumes of food aid relative to commercial import volume increases the risk that commercial trade is displaced. 
Fintrac recommends that the total volume monetized of a given commodity should not exceed 10 percent of the commodity’s commercial import volume in a given country in a given year. We examined the total volume programmed for monetization by both agencies for each commodity in each country and each year between fiscal years 2008 and 2010, for which we could obtain the commercial import volume—a total of 87 cases. For each country and year, we compared the total volume programmed for monetization of a given commodity to the commodity’s reported commercial import volume. We found that the total volume programmed for monetization as a percentage of the reported commercial import volume ranged from less than 1 percent to 1,190 percent during this period. In about half of the 87 cases, the total volume programmed for monetization exceeded 10 percent of the commercial import volume. Further, in 24 of the 87 cases, the total volume programmed for monetization exceeded 25 percent of the reported commercial import volume for that commodity. Moreover, in 10 of the 87 cases, the total volume programmed for monetization was more than 100 percent of the reported commercial import volume. For example, about 30,000 metric tons of wheat were programmed for monetization in Uganda in 2008, which was more than 1.5 times the reported commercial import volume of wheat for that year. Figure 9 shows the distribution of the total volume programmed for monetization by both agencies as a percentage of the reported commercial imports. USAID uses a private contractor to conduct market assessments to help ensure that monetization transactions do not entail substantial disincentive to domestic production, as required by the Bellmon Amendment, for its 20 priority countries. In August 2008, USAID hired Fintrac to improve the market analysis required before food aid programs are approved in recipient countries, known as the Bellmon Estimation for Title II (BEST). 
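Fintrac's 10-percent-of-commercial-imports guideline, and the ratios computed in this analysis, can be sketched as follows. The Uganda wheat volume programmed for monetization (30,000 metric tons) comes from the text; the import volume shown is hypothetical, chosen to be consistent with the text's "more than 1.5 times" figure.

```python
# Sketch of the 10-percent guideline and the ratios computed in this
# section. The Uganda wheat volume programmed (30,000 MT) is from the
# text; the commercial import volume is hypothetical (chosen so the ratio
# exceeds 1.5, as the text reports).

def monetization_share(programmed_mt, commercial_imports_mt):
    """Volume programmed for monetization as a share of commercial imports."""
    return programmed_mt / commercial_imports_mt

cases = [
    ("Uganda", "wheat", 30_000, 19_000),   # text example: >1.5x imports
    ("Country X", "rice", 2_000, 50_000),  # hypothetical: within guideline
]

for country, commodity, programmed, imports in cases:
    share = monetization_share(programmed, imports)
    status = "exceeds" if share > 0.10 else "within"
    print(f"{country} {commodity}: {share:.0%} of imports ({status} 10% guideline)")
```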
Prior to 2008, USAID made determinations about market impact based solely on the Bellmon analyses conducted by its implementing partners, who as the recipients of monetization grants are not independent. As of May 2011, 13 of the 20 BEST analyses had been completed. According to USAID, the BEST analysis is to complement, not substitute for, the implementing partners’ market analyses and surveillance. Therefore, in many cases implementing partners continue to conduct their own market analysis, which estimates the price they are likely to receive for the commodity to be monetized. Overall, 17 of the 29 implementing partners we surveyed stated that the BEST analysis was sufficient for determining which commodities to monetize and 13 of the 29 implementing partners stated that it was sufficient for determining how much to monetize. The methodology for the BEST analysis includes identifying potential commodities to be monetized, ensuring that recipient country policies and regulations are favorable (i.e., there are no barriers or restrictions on the commodity to be monetized), reviewing local market structure as well as previous and planned food aid initiatives, and examining the likelihood of achieving fair and competitive market price. In conducting its analysis, Fintrac also considers the latest 5-year trends in import volumes and domestic production data to ensure that the commodity to be monetized has been imported in significant volumes and that local production is insufficient to meet demand. As noted above, another important step in the analysis is to assess the likelihood that the monetized commodity will achieve fair and competitive market price. Fintrac uses the IPP as the most precise estimate of fair and competitive price for commercially imported commodities. As discussed earlier, the IPP is the price a commercial importer in the recipient country pays to import the same or similar commodities from the most common exporting country. 
Based on all of these components, Fintrac makes a recommendation on monetization. When Fintrac recommends monetization, it does so in volumes that generally do not exceed 10 percent of the commodity’s commercial import volume in order to avoid substantial displacement of trade. Fintrac’s analysis also relies on field visits to obtain additional data, and interviews with stakeholders in the recipient country such as implementing partners; commercial importers; and potential buyers, including millers and processors. According to Fintrac, its methodology allows it to replicate these market assessments from country to country and ensures that all implementing partners are provided with the same information for their monetization applications. USAID’s ability to ensure that monetization does not cause adverse market impact is limited, because the BEST analyses have only been conducted for a limited number of countries and have not yet been updated to reflect changes in market conditions. While Fintrac has conducted 13 BEST analyses, these analyses were available for only 11 of 63 cases in which USAID monetized since 2008. For the remaining cases, USAID relied on Bellmon analyses conducted by its implementing partners, which are not independent. In addition, while the BEST offers an independent and consistent methodology and considers the latest 5-year trends in import volumes and domestic production data, it is not updated to capture changes in market conditions that may occur by the time sales transactions take place. We found that in certain cases, in the interval between the completion of the BEST and the sale of the monetized commodity, there was a relatively long lag, during which market conditions may have changed. Further, MYAPs generally last for 3 to 5 years, and monetization sales can take place in each of those years. The BEST would likely not be useful for monetization transactions that take place beyond the initial MYAP approval. 
According to USAID, while there is market information that changes rapidly and requires continued assessments, the BEST includes historical and cyclical information that can be used for many years. However, we found that the BEST does not include projection analysis that could take into account potential price spikes and the volatile nature of the market. As a result, the findings in the BEST have the potential to be irrelevant by the time the commodities reach the recipient country and implementing partners may not have adequately considered the impact of monetization on local markets and trade when applying for grants. USDA relies on its implementing partners to conduct market analyses that are used to address the Bellmon requirements. As the recipients of both the monetized commodities and the proceeds of their sales, implementing partners lack independence in conducting market analyses. Further, USDA does not provide guidance to the implementing partners on what methodology should be used for the market assessments that are intended to address Bellmon requirements. USDA does not conduct its own Bellmon requirement assessments to verify whether or not the conclusions reached by the implementing partners are reliable or reasonable. Without doing so, USDA cannot accurately determine whether monetization will result in substantial disincentive to, or interference with, domestic production or marketing of the same or similar commodities. USDA officials explained that they are currently unable to conduct independent analyses, such as those conducted by Fintrac, due to lack of resources. However, these officials also stated that they encouraged implementing partners to use the BEST analysis when available. USDA conducts its own market assessment—the UMR—to meet its requirement to determine that monetization does not cause disruption in normal patterns of commercial trade. 
The UMR for a given commodity is an Excel spreadsheet that contains data on the recipient country’s consumption needs or apparent consumption, imports, and production. USDA determines the maximum allowable volume for U.S. programming, including monetization, for a given commodity in a specific country and year by subtracting the volume of imports and production from that of the consumption needs or apparent consumption, as follows:

Maximum Allowable Volume for U.S. Programming = Consumption + Exports + Stocks – (Domestic Production + Imports)

USDA officials told us that the UMRs are conducted after they receive monetization grant applications and that they issue about 30 to 40 UMRs per year. According to USDA officials, UMRs are not shared with the public because the information is intended solely for the use of U.S. government agencies, and the UMRs are considered market-sensitive because they include forecasts about consumption needs. Further, USDA officials told us that the UMRs are not held to the same rigorous review and verification processes that official USDA documents intended for external distribution must undergo. We found weaknesses in the UMRs, such as no explanation or source for values used to calculate the consumption needs or apparent consumption in each UMR. In addition, some of the UMRs we reviewed included errors in formulas and mistakes in calculations. The standard methodology for estimating consumption would show it as the sum of production and imports minus exports adjusted for the changes in stock. However, the consumption needs and apparent consumption figures in the spreadsheets are not based on any formula and appear as a data entry. For example, the consumption need noted in one UMR was more than double the consumption need that was calculated for that commodity using the standard methodology mentioned above. In another UMR, the consumption need was 80 percent greater than the calculated amount using the standard methodology. 
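The UMR arithmetic above, together with the standard apparent-consumption methodology we compared it against, can be sketched as follows; all tonnage figures are hypothetical.

```python
# Sketch of the UMR formula quoted above and the standard methodology for
# estimating consumption. All tonnage figures (metric tons) are hypothetical.

def max_allowable_volume(consumption, exports, stocks, production, imports):
    """USDA's UMR formula for the maximum volume of U.S. programming,
    including monetization, for a commodity in a given country and year."""
    return consumption + exports + stocks - (production + imports)

def apparent_consumption(production, imports, exports, stock_change):
    """Standard methodology: production plus imports minus exports,
    adjusted for the change in stocks."""
    return production + imports - exports - stock_change

production, imports, exports, stocks, stock_change = 40_000, 25_000, 5_000, 3_000, 1_000

consumption = apparent_consumption(production, imports, exports, stock_change)
print(consumption)                                                              # 59000
print(max_allowable_volume(consumption, exports, stocks, production, imports))  # 2000

# The weakness the report describes: a hand-entered consumption figure that
# is double the calculated value greatly overstates the allowable volume.
print(max_allowable_volume(2 * consumption, exports, stocks, production, imports))  # 61000
```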
Further, our review of the UMRs showed that while data sources are listed at the bottom of these spreadsheets, it is impossible to identify which information came from which of the listed sources. In addition, we found errors in formulas and mistakes in calculations in 8 of the 12 UMRs that we reviewed. For example, columns in the Excel spreadsheets were not added correctly and resulted in totals that were smaller than the components summed to create them. We also found inconsistencies in how numbers and formulas were created. In addition, averages that were supposed to cover 5 years of data were based on only 3 years’ worth but treated as 5-year averages. In other cases, calculations included figures that should have been excluded, such as concessionary sales. Further, we found that some formulas included circular references, meaning that the total of the summation was also included as part of the summation itself. When we shared these findings with USDA, the agency corrected the eight UMRs. Such weaknesses impact the calculation for the maximum volume of recommended food aid programming and ultimately the decisions on how much food aid can be monetized. USDA officials told us that they conduct ad-hoc spot checks of the UMRs but do not have a formal quality control process in place. The assessments that USAID and USDA use to help set recommended limits for monetization volumes vary widely in their conclusions. In all of the 12 cases in which we could compare USAID’s and USDA’s limits, these limits were significantly different from one another. In some cases, the UMR analyses recommended monetization of a commodity, while the BEST did not. 
For example, the 2010 UMR for wheat in Burundi concluded that up to 6,000 metric tons of wheat could be monetized, but the 2010 BEST analysis for Burundi concluded that the market was not suitable for monetization of any commodity, including wheat, and recommended that regional monetization be considered. Further, in all 12 cases, the maximum allowable volume for U.S. programming found in the UMR was higher than the recommended maximum volume found in the BEST (see table 2). According to Fintrac, these volumes vary greatly because USAID conducts its assessments based on multiple factors, including the purchasing power of the buyers, which impacts the ability of the market to absorb additional commodities. The volume of commodity programmed for monetization has at times exceeded the recommended limits set by the agencies. The purpose of setting these limits is to help ensure that these transactions do not cause adverse market impacts. However, the limits have been exceeded by the very agencies that set them. We examined the total volume programmed for monetization by each agency and the aggregate of both agencies for each commodity in each country and each year between fiscal years 2008 and 2010 (for a complete list, see table 5 in appendix IV). We then compared these totals to the recommended limits found in the BEST and UMRs. We found that USAID exceeded the limits recommended by the BEST analyses in 6 of 11 possible cases. For example, the 2010 BEST analysis for Liberia recommended that a maximum of 3,427 metric tons of rice could be monetized; however, USAID programmed 10,100 metric tons of rice to be monetized in Liberia in 2010. In addition, USDA exceeded the recommended limit found in the BEST in 2 of 3 possible cases. For example, in 2009 USDA programmed 15,000 metric tons of wheat in Uganda to be monetized, despite a recommendation in the BEST analysis that USDA should not monetize more than 7,500 metric tons of wheat. 
We also found that USDA exceeded the limit set by its UMR in 5 of 34 possible cases. For example, in 2008 USDA programmed 6,000 metric tons of soybean meal to be monetized in Armenia when the maximum allowable volume was set at 200 metric tons in the corresponding UMR. USAID exceeded the UMR's limit for U.S. programming in 10 of 59 possible cases. For example, in 2009, USAID monetized 2,390 metric tons of rice in Senegal even though the corresponding UMR did not recommend programming any rice for monetization. See table 3 for all the cases in which USAID and/or USDA exceeded the limit recommended by the BEST analysis and/or set by the UMR between fiscal years 2008 and 2010. Further, for 3 of 6 possible cases in which both agencies programmed the same commodity for monetization in the same country and the same year, the combined volume programmed for monetization by both agencies exceeded the recommendation in the BEST and/or the UMR. For example, for wheat in Malawi in 2009, the BEST recommended 8,000 metric tons and the UMR set the limit at 29,200 metric tons; however, both agencies programmed wheat for monetization and their combined total of 51,140 metric tons was more than six times the BEST's recommendation and 75 percent above the UMR's limit. According to USAID officials, the recommended limits are at times exceeded because these market assessments are part of a larger decision-making process, which includes informal discussions between headquarters, the field mission, and the implementing partners. Officials stated that through these discussions a decision on the commodity choice and volume to be monetized is made. However, USAID acknowledged that these discussions and the rationale for the decisions are not systematically documented. According to USDA officials, the agency considers other market information after agreements are signed and the UMRs are not the only information used to make programming decisions. 
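The Malawi wheat case above can be checked arithmetically; the figures below are taken directly from the report's text:

```python
# Malawi wheat, fiscal year 2009: combined volume programmed for
# monetization versus the BEST recommendation and the UMR limit
# (figures in metric tons, from the text above).
programmed = 51_140
best_recommendation = 8_000
umr_limit = 29_200

ratio_to_best = programmed / best_recommendation             # "more than six times"
pct_above_umr = (programmed - umr_limit) / umr_limit * 100   # "75 percent above"

print(round(ratio_to_best, 2))  # 6.39
print(round(pct_above_umr))     # 75
```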
Further, USDA officials stated that in some cases, they have documented the justification to exceed the programming limits set by the UMR. However, USDA did not provide this documentation for the cases that are discussed in this report. The actual impacts of programming monetization and monetizing above the limits recommended by the BEST and UMR have not been determined, since neither USAID nor USDA conducts evaluations after monetization transactions have taken place. Both agencies require implementing partners to report the sales price achieved for their monetization transactions, and USAID's Monetization Field Manual recommends that implementing partners establish a process for regularly monitoring local market prices. However, USAID and USDA have neither used the data on sale prices reported by the implementing partners to assess the impact monetization had on local production and trade nor established ways to systematically monitor the local markets in countries where they monetize. USAID and USDA have depended on the BEST, UMR, and market assessments conducted by their implementing partners to help meet their requirement to ensure that monetization does not result in adverse market impacts in the recipient countries. However, without conducting evaluations after monetization has occurred, they cannot determine the impact the sale of donated food had on local production and trade. Furthermore, they cannot assess the effectiveness of the BEST and UMR in preventing adverse market impact. According to a 2009 study by the Partnership to Cut Hunger and Poverty in Africa, for the United States to demonstrate commitment to minimizing market risks in recipient countries, more systematic evaluation of the monetization process is needed. Providing developing countries with assistance to improve food security is a vital humanitarian and foreign policy objective. However, monetization of U.S. food aid—the U.S. 
government’s primary approach to meeting this objective—is an inherently inefficient way to fund development projects and can cause adverse market impacts in recipient countries. The monetization process results in the expenditure of a significant amount of appropriated funds in unrelated areas such as transportation and logistics, rather than development projects. Moreover, the potential for adverse market impacts, such as artificially suppressing the price of a commodity due to excessive monetization, could work against the agricultural development goals for which the funding was originally provided. The inefficiencies of monetization stem directly from the multiple transactions required by the process and, except in rare cases, prevent full cost recovery on monetization transactions. Therefore, as a source of funding for development assistance, monetization cannot be as efficient as a standard development program which provides cash grants directly to implementing partners. While monetization continues, however, it is important that the agencies strive to maximize the resources available for implementing development projects funded through monetization. The absence of a clearly defined benchmark or indicator for reasonable market price hinders their efforts to forestall transactions that provide a very low rate of return. In addition, since the agencies conduct only limited monitoring of the sales prices that implementing partners achieve through monetization, they cannot ensure that the transactions obtain the highest price and thereby generate as much funding as possible for development projects. The agencies are required by law to ensure that monetization does not cause adverse market impacts, but their market assessments contain weaknesses that diminish their usefulness for informing decisions on what, where, whether, and how much to monetize. 
Moreover, whatever limits these assessments attempt to establish are often exceeded and could contribute to disincentives to local food production and displacement of commercial trade. Furthermore, without conducting post-monetization transaction impact evaluations, the agencies cannot determine the actual impacts of monetization, even when the volume of the commodity monetized is more than 25 percent of the commodity’s commercial import volume. Finally, transportation costs constitute about a third of the overall costs of monetization over the 3-year period we examined, and the 3-year reflagging rule—which only applies to food aid and not to the defense agencies and the U.S. Export-Import Bank—can limit competition among ships eligible to transport U.S. food aid, further increasing cost. Consistent with rules that apply to the Maritime Security Fleet and vessels transporting other U.S. government cargo, Congress should consider amending the Cargo Preference Act of 1954 to eliminate the 3-year waiting period imposed on foreign vessels that acquire U.S.-flag registry before they are eligible for carriage of preference food aid cargos. This could potentially increase the number of U.S.-flag vessels eligible for carriage of preference food aid cargo, thereby increasing competition and possibly reducing costs. To improve the extent to which monetization proceeds cover commodity and other associated costs and the agencies’ ability to meet requirements to ensure that monetization does not cause adverse market impacts, we recommend that the Administrator of USAID and the Secretary of Agriculture take the following four actions: 1. jointly develop an agreed-upon benchmark or indicator to determine “reasonable market price” for sales of U.S. food aid for monetization; 2. monitor food aid sales transactions to ensure that the benchmark set to achieve “reasonable market price” in the country where the commodities are being sold is achieved, as required by law; 3. 
improve market assessments and coordinate to develop them in countries where both USAID and USDA may monetize; and 4. conduct market impact evaluations after monetization transactions have taken place to determine whether they caused adverse market impacts. USAID and USDA, the two principal agencies that manage U.S. food aid monetization programs, and DOT, the principal agency responsible for the implementation of cargo preference rules, provided written comments on a draft of this report. We have reprinted their comments in appendixes VII, VIII, and IX, respectively. These agencies also provided technical comments and updated information, which we have incorporated throughout this report, as appropriate. The Department of State and the Office of Management and Budget did not provide written comments. DOT disagreed with our Matter for Congressional Consideration on the basis of its concern regarding the potentially detrimental impact the statutory change may have on the U.S. maritime industry. However, we maintain that Congress should consider amending the Cargo Preference Act of 1954 to eliminate the 3-year waiting period imposed on foreign vessels that acquire U.S.-flag registry before they are eligible for carriage of preference food aid cargos. We are suggesting this proposed amendment on the basis of the following four factors: First, the number of vessels participating in U.S. food aid programs has declined. In a 2009 report to Congress, USAID and USDA jointly stated that, due to the declining size of the U.S.-flag commercial fleet, USAID and USDA are forced to compete with the Department of Defense and other exporters for space aboard the few remaining U.S.-flag vessels, thereby limiting competition in transportation contracting and leading to higher freight rates. Second, our analysis of ocean transportation costs showed that food aid shipments on foreign-flag carriers cost the U.S. government, on average, $25 per ton less than U.S.-flag carriers. 
Third, although the 3-year requirement was established to provide employment opportunities to U.S. shipyards, since 2005, U.S. shipyards have built only two new U.S.-flag vessels appropriate for transporting food and the vessels have not been awarded a food aid contract. Fourth, the 3-year rule applies only to food aid and not to defense agencies and the U.S. Export-Import Bank. The elimination of the 3-year waiting period can ease entry of new vessels into the U.S. food aid program, with the potential to increase competition among eligible U.S.-flag ships and reduce the cost of transportation. DOT also said that we overstated the overall cost of transportation. Our calculation of transportation cost was based on an analysis of all actual monetization transactions over a 3-year period and is thus a precise calculation of the actual cost to the U.S. government. In addition, DOT said that the number of vessels participating in the program has declined by less than what we found. However, our analysis was based on the number of actual vessels booked for all food aid contracts awarded from fiscal years 2002 to 2010. USAID generally concurred with our recommendations, noting ongoing and planned actions to address them. Specifically, USAID stated that it will work with USDA to explore options of setting a benchmark or indicator for the sale of U.S. food aid through monetization. USAID noted that it has regional and country-based food aid monitoring and evaluation specialists who review U.S. food aid programs, including monetization sales, and that the agency’s BEST project is well-accepted by its implementing partners. Additionally, USAID is updating its Monetization Field Manual, which includes market assessment guidance. Finally, USAID stated that it will explore possible cost-effective ways to conduct post-sale market impact evaluations with its partners. 
USDA also generally concurred with our recommendations, stating that they will be useful in ongoing efforts to generate cash development resources and improve overall program management. USDA noted that an advantage of monetization is that it can encourage commercial markets for agricultural products and contribute to other market-building activities. However, we found that the agencies cannot ensure that monetization does not cause adverse market impacts, including discouraging food production by local farmers. USDA also noted actions it is exploring to reduce the cost of food aid shipments, such as the recipient host government paying for the cost of ocean transportation and combining shipments to obtain volume discounts. Further, USDA stated that it will work with USAID to develop improved benchmarks for reasonable local market prices. Finally, USDA stated that it will coordinate with USAID to improve market assessments and it will consider revising its regulations to require market impact evaluations. We are sending copies of this report to interested members of Congress, the Administrator of USAID, the Secretary of Agriculture, and relevant agency heads. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9601 or melitot@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. Our objectives were to (1) assess the extent to which monetization proceeds cover commodity and other associated costs and (2) examine the extent to which U.S. agencies meet requirements to ensure that monetization does not cause adverse market impacts. To address these objectives, we analyzed emergency and nonemergency food aid program data provided by the U.S. 
Agency for International Development (USAID), the U.S. Department of Agriculture (USDA), and USDA's Kansas City Commodity Office (KCCO). Our analysis focused on nonemergency food aid that was monetized. The agencies relied on various reports their implementing partners submitted to manually generate the cost recovery data for fiscal years 2008 through 2010 for our review. We worked with the agencies to correct errors in the data and determined that the data used in our analysis were sufficiently reliable for our purposes. We surveyed all the nongovernmental organizations (NGO) that served as implementing partners under USAID and USDA and conducted monetization between fiscal years 2008 and 2010. To determine the universe of NGOs that served as implementing partners during this time period, we obtained a list of all implementing partners with call forwards for monetized food aid from KCCO between fiscal years 2008 and 2010, which consisted of a total of 33 implementing partners. Three of these implementing partners were foreign governments and we excluded these from our sample. A fourth implementing partner was excluded because it had not conducted monetization before the end of fiscal year 2010. As a result, we determined that the universe of implementing partners that had monetized between fiscal years 2008 and 2010 was 29. We developed a structured instrument for our survey in October of 2010, and pre-tested it on two implementing partners. The instrument contained both closed and open-ended questions in four general areas: (1) the monetization process, (2) the U.S. government role, (3) market analysis, and (4) cost recovery. We sent the instrument to all 29 implementing partners via e-mail in November 2010 and received completed instruments from all 29 of them. As part of our process for this survey, we conducted phone interviews with each implementing partner after we received its completed instrument to ensure the accuracy of its responses. 
In Washington, D.C., we interviewed officials from USAID, USDA, the Departments of State and Transportation, and the Office of Management and Budget. We also met with a number of subject matter experts, as well as officials representing NGOs that serve as implementing partners to USAID and USDA in carrying out U.S. food aid monetization programs overseas. In addition, we conducted field work in three of the four countries that programmed some of the highest volumes of nonemergency monetized U.S. food aid between fiscal years 2008 and 2010—Bangladesh, Mozambique, and Uganda—and met with officials from U.S. missions, representatives from NGOs and other implementing partners that directly handle sales and implement development activities, and in Uganda and Mozambique, officials from relevant host government agencies. To determine the level of cost recovery, we obtained data from USAID and USDA on commodity costs, which include the procurement and ocean freight cost, and sales price for each monetization transaction. For the purposes of this report, we defined cost recovery as the ratio between sales proceeds from monetization and the cost to the U.S. government to procure and ship the commodities. We did not include transactions for which the agencies did not have actual sales prices. We analyzed cost recovery by agency, year, commodity, and recipient country to study the variations in the level of cost recovery. In order to analyze cost recovery, we took the following steps:

Cleaned the data. We found many errors and discrepancies in the data we obtained from USAID and USDA, and sent questions asking them to explain the discrepancies we found and make corrections.

Calculated cost recovery. Using the cleaned data, we calculated the cost recovery for each transaction and for the USAID and USDA programs in total. The program average we reported is a weighted average: the ratio between the sum of sales proceeds and the sum of commodity and freight costs. 
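The weighted-average cost recovery calculation described above can be sketched as follows; the transactions are hypothetical, not actual USAID or USDA data:

```python
# Illustrative sketch (hypothetical figures, not actual GAO data): cost
# recovery is the ratio of sales proceeds to the U.S. government's
# commodity and freight costs.

# (sales proceeds, commodity cost, freight cost) in dollars
transactions = [
    (1_200_000, 1_000_000, 500_000),
    (800_000,   700_000,   300_000),
    (2_500_000, 2_000_000, 900_000),
]

# Per-transaction cost recovery: proceeds / (commodity + freight)
per_transaction = [p / (c + f) for p, c, f in transactions]

# Program-level weighted average: sum of proceeds over sum of total costs,
# so large transactions count proportionally more than small ones.
total_proceeds = sum(p for p, _, _ in transactions)
total_costs = sum(c + f for _, c, f in transactions)
weighted_average = total_proceeds / total_costs

print([round(r, 3) for r in per_transaction])
print(round(weighted_average, 3))
```

Note that the weighted average differs from a simple mean of per-transaction ratios: larger transactions carry proportionally more weight in the program total.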
Estimated the difference between the proceeds generated through monetization and the cost the U.S. government incurred to procure and ship the commodities. To do so, we subtracted the total cost the U.S. government incurred to procure and ship monetized food aid commodities from the total proceeds generated. We also estimated the extent to which freight costs account for the cost to the U.S. government for U.S. food aid procurement and shipping.

In addition, we looked at how cargo preference affects cost recovery by examining the freight rate differentials between U.S.- and foreign-flag carriers in shipping U.S. food aid. (For a detailed description of our methodology for this analysis, see appendix II.) To examine the extent to which USAID and USDA meet requirements to ensure that monetization does not cause adverse market impacts, we conducted a literature search to identify relevant studies and papers on the effect of monetization on recipient countries and trade. In addition, we conducted interviews with officials from USAID and USDA; representatives from NGOs engaged in monetization; and experts from academia with extensive research, published work, and experience in the field. We reviewed the federal requirements and agency documents such as policies and guidelines, the Bellmon Estimate for Title II (BEST) analyses, and the Usual Marketing Requirement (UMR). We also analyzed data from KCCO, USAID, and USDA on commodities that were programmed for monetization between fiscal years 2008 and 2010, including volumes programmed for monetization, import data, and consumption data in recipient countries. Specifically, we:

Examined the total volume programmed for monetization by both agencies for each commodity in each country and each year between fiscal years 2008 and 2010, for which we could obtain the commercial import volume using the UMR. We compared the total volume monetized of a given commodity to the commodity's commercial import volume. 
To assess the data, we interviewed cognizant agency officials at USDA and reviewed documentation; however, we did not independently verify the underlying source data. We determined that the data we used were sufficiently reliable for our purposes.

Reviewed the 7 BESTs and 87 UMRs that were available for all of the monetization cases that occurred between fiscal years 2008 and 2010. For the purposes of this report, we define the term "case" as the total volume of a given commodity programmed for monetization by either USAID and/or USDA in a given country in a given year.

Examined the limits set by the BEST and the UMR and compared them to each other.

Examined the monetization cases that occurred between fiscal years 2008 and 2010 and compared them to limits set by the BEST and/or the UMR.

As we created a data set from the agencies' documents and calculations to assess the extent to which USDA and USAID had exceeded the limits they set for monetization, we determined that it was beyond the scope of this engagement to assess the agencies' underlying data. We did, however, check the internal logic of the agencies' documents and their calculations. We consulted with the agencies if we found discrepancies and either had the agencies make the necessary corrections or did not use the data in our analysis. We also assessed both agencies' efforts to monitor and evaluate the impact of monetization transactions. We conducted this performance audit from July 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
To determine whether, and the extent to which, ocean freight rates differ between U.S.- and foreign-flag carriers in shipping U.S. food aid, we obtained data from the Kansas City Commodity Office (KCCO) and developed two regression models to estimate the differences in freight rates between U.S.- and foreign-flag carriers while controlling for various factors that affect the freight rate. We obtained data from KCCO, a division of the Department of Agriculture (USDA) responsible for procuring U.S. food aid commodities. The data contain more than 5,000 food aid purchase transactions between 2007 and 2010. For each transaction, we had the following information:

1. Name of program: Food for Peace, Food for Progress, or McGovern-Dole International Food for Education and Child Nutrition Program
3. Name of the recipient country
4. Name of the implementing partner
5. Name of the commodity
6. Type of food aid: monetization or direct delivery
7. Fiscal year the program is approved
8. Name of the port where commodity is loaded in the U.S.
9. Date when commodity arrives at load port
10. Name of the port where commodity is discharged
11. Metric tons of commodity
12. Total commodity cost
13. Total freight cost

Table 4 presents the summary statistics of the data. We generated a new variable called pertonfreight, measured in dollars per metric ton, by dividing total freight cost by metric tons. Table 5 compares the difference in freight rate between foreign- and U.S.-flag carriers by commodity type (bulk vs. non-bulk) and by year without controlling for shipping routes or shipping terms. The results show that, in general, U.S.-flag carriers charge higher freight rates than foreign-flag carriers. However, part of the difference could be explained by shipping routes or shipping terms, which we incorporated in the regression analysis. 
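The derivation of pertonfreight and the uncontrolled comparison underlying table 5 can be sketched as follows, using a handful of hypothetical shipment records rather than the KCCO data:

```python
# Sketch with hypothetical records (not KCCO data): derive the per-ton
# freight rate as total freight cost / metric tons, then compare mean
# rates by flag without controlling for routes or shipping terms.
shipments = [
    # (total_freight_cost, metric_tons, flag)
    (150_000, 1_000, "US"),
    (260_000, 2_000, "US"),
    (100_000, 1_000, "foreign"),
    (210_000, 2_000, "foreign"),
]

rates = [(cost / tons, flag) for cost, tons, flag in shipments]

def mean(xs):
    return sum(xs) / len(xs)

us_mean = mean([r for r, f in rates if f == "US"])
foreign_mean = mean([r for r, f in rates if f == "foreign"])

print(us_mean, foreign_mean)
```

As the report notes, such raw comparisons do not control for shipping routes or terms, which is why the regression analysis incorporates those factors as explanatory variables.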
In order to analyze the difference in ocean freight rate between U.S.- and foreign-flag carriers while controlling for various factors which affect freight rates, we performed a multivariate regression analysis. We attempted to explain the differences in freight rates using the shipping routes, shipping time, shipping terms, commodities shipped, and the ownership of the carriers.

freight rate = a0 + (a1 * load port dummy) + (a2 * discharge port dummy) + (a3 * year dummy) + (a4 * bulk dummy) + (a5 * shipping term dummy) + (a6 * flag dummy) (Equation 1)

load port dummy is a set of variables indicating where commodities were loaded.
discharge port dummy is a set of variables indicating where commodities were unloaded.
year dummy is a set of variables indicating the year the commodities were shipped.
bulk dummy is a variable indicating if the commodities shipped were bulk (bulk dummy = 1) or non-bulk (bulk dummy = 0).
shipping term dummy is a set of variables indicating which of the four different shipping terms we used.
flag dummy is a variable indicating if the ocean carriers were foreign-flag carriers (flag dummy = 1) or U.S.-flag carriers (flag dummy = 0).

A negative and significant coefficient a6 would indicate that foreign-flag carriers charge a lower freight rate than U.S.-flag carriers after controlling for shipping routes, shipping time, commodity type, and shipping terms. Table 6 presents the main regression results for model 1. In order to capture the difference in freight rate between U.S.- and foreign-flag carriers on bulk and non-bulk commodities, we ran a regression with an interaction term flag * bulk. 
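Model 1 can be estimated with ordinary least squares on a design matrix of dummy variables. The sketch below uses synthetic data in place of the KCCO records (discharge-port and shipping-term dummies are omitted for brevity), with the true foreign-flag effect set to -$25 per ton by construction, so the fitted flag coefficient a6 should come back close to -25:

```python
# Illustrative OLS estimation of a dummy-variable freight-rate model on
# synthetic data; this is not the report's estimation code or its data.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the report's categorical regressors
load_port = rng.integers(0, 3, n)   # 3 hypothetical load ports
year = rng.integers(0, 4, n)        # 4 shipping years
bulk = rng.integers(0, 2, n)        # 1 = bulk, 0 = non-bulk
flag = rng.integers(0, 2, n)        # 1 = foreign flag, 0 = U.S. flag

# Data-generating process: foreign-flag rates are $25/ton lower by construction
rate = (100 + 10 * load_port + 5 * year - 30 * bulk
        - 25 * flag + rng.normal(0, 2, n))

def dummies(v, levels):
    """Dummy columns for each non-baseline category of v."""
    return np.column_stack([(v == k).astype(float) for k in range(1, levels)])

# Design matrix: intercept, load-port dummies, year dummies, bulk, flag
X = np.column_stack([np.ones(n), dummies(load_port, 3), dummies(year, 4),
                     bulk.astype(float), flag.astype(float)])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

a6 = coef[-1]  # the flag coefficient; negative => foreign flag cheaper
print(round(a6, 1))
```

With the actual KCCO data, a negative and statistically significant a6 is what would indicate lower foreign-flag rates after the controls, as described above.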
freight rate = a0 + (a1 * load port dummy) + (a2 * discharge port dummy) + (a3 * year dummy) + (a4 * bulk dummy) + (a5 * shipping term dummy) + (a6 * flag dummy) + (a7 * (flag * bulk)) (Equation 2)

For bulk commodities (bulk = 1) and foreign-flag carriers (flag = 1), Equation 2 becomes:

freight rate = a0 + (a1 * load port dummy) + (a2 * discharge port dummy) + (a3 * year dummy) + (a4 * 1) + (a5 * shipping term dummy) + (a6 * 1) + (a7 * 1) (Equation 3)

For bulk commodities (bulk = 1) and U.S.-flag carriers (flag = 0), Equation 2 becomes:

freight rate = a0 + (a1 * load port dummy) + (a2 * discharge port dummy) + (a3 * year dummy) + (a4 * 1) + (a5 * shipping term dummy) + (a6 * 0) + (a7 * 0) (Equation 4)

The difference between Equation 3 and Equation 4 yields a6 + a7, which is the difference in freight rate between U.S.- and foreign-flag carriers for bulk commodities. Similarly, for non-bulk commodities (bulk = 0) and foreign-flag carriers (flag = 1), Equation 2 becomes:

freight rate = a0 + (a1 * load port dummy) + (a2 * discharge port dummy) + (a3 * year dummy) + (a4 * 0) + (a5 * shipping term dummy) + (a6 * 1) + (a7 * 0) (Equation 5)

For non-bulk commodities (bulk = 0) and U.S.-flag carriers (flag = 0), Equation 2 becomes:

freight rate = a0 + (a1 * load port dummy) + (a2 * discharge port dummy) + (a3 * year dummy) + (a4 * 0) + (a5 * shipping term dummy) + (a6 * 0) + (a7 * 0) (Equation 6)

The difference between Equation 5 and Equation 6 yields a6, which is the difference in freight rate between U.S.- and foreign-flag carriers for non-bulk commodities. Table 7 presents the main regression results for model 2.

The United States has principally employed six programs to deliver food aid: Public Law 480 Titles I, II, and III; Food for Progress; the McGovern-Dole International Food for Education and Child Nutrition; and the Local and Regional Procurement Project. Three of these programs allow for monetization: Title II (renamed Food for Peace), Food for Progress, and McGovern-Dole International Food for Education and Child Nutrition. 
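The interaction term in model 2 can be added to the same kind of synthetic setup; here the true effects are set to a6 = -15 and a7 = -20 by construction, so the fitted non-bulk differential should be near -15 and the bulk differential (a6 + a7) near -35. This is an illustrative sketch only, not the report's estimation code:

```python
# Illustrative OLS estimation of the flag * bulk interaction model on
# synthetic data (not KCCO records); route and term dummies omitted for brevity.
import numpy as np

rng = np.random.default_rng(1)
n = 600
year = rng.integers(0, 3, n)
bulk = rng.integers(0, 2, n)
flag = rng.integers(0, 2, n)

# True effects by construction: a6 = -15 (non-bulk gap), a7 = -20,
# so the bulk-commodity gap is a6 + a7 = -35 dollars per ton.
rate = (120 + 8 * year - 40 * bulk - 15 * flag
        - 20 * flag * bulk + rng.normal(0, 2, n))

# Design matrix: intercept, year dummies, bulk, flag, and flag * bulk
X = np.column_stack([np.ones(n),
                     (year == 1).astype(float), (year == 2).astype(float),
                     bulk.astype(float), flag.astype(float),
                     (flag * bulk).astype(float)])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
a6, a7 = coef[4], coef[5]

print(round(a6, 1))       # estimated non-bulk U.S./foreign differential
print(round(a6 + a7, 1))  # estimated bulk differential
```

Reading the two differentials off the fitted coefficients mirrors the Equation 3 through Equation 6 algebra: the flag coefficient alone gives the non-bulk gap, and the flag coefficient plus the interaction coefficient gives the bulk gap.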
Table 8 provides a summary of these food aid programs by program authority. As previously mentioned, between fiscal years 2008 and 2010, more than 1.3 million metric tons of food aid were programmed for monetization in 34 countries. Figure 10 shows the total volume of commodities programmed for monetization in each country by the U.S. Agency for International Development (USAID) and U.S. Department of Agriculture (USDA) between fiscal years 2008 and 2010. Table 9 shows the volume of each commodity programmed for monetization by country, program, and year. Figure 11 provides a percentage breakdown of the commodities programmed for monetization by USAID and USDA between fiscal years 2008 and 2010. The following table further outlines the steps in the monetization process from grant application through development project completion depicted in figure 3. We conducted a survey of 29 implementing partners that monetized through either the U.S. Agency for International Development (USAID) or the U.S. Department of Agriculture (USDA) between fiscal years 2008 and 2010, and we received a 100 percent response rate. Of the 29 implementing partners, 6 monetized through USAID only, 13 monetized through USDA only, and 10 monetized through both agencies. The tables that follow summarize selected results of the implementing partners' responses to our survey. The following are GAO's comments on the U.S. Department of Agriculture letter dated June 10, 2011. The U.S. Department of Agriculture noted some advantages in addition to the economic benefits of development projects funded by monetization, such as encouraging commercial markets for agricultural products and other market-building benefits. However, the potential for adverse market impacts, such as artificially suppressing the price of a commodity due to excessive monetization, could work against the agricultural development goals for which the funding was originally provided. The following are GAO's comments on the U.S. 
Department of Transportation letter dated June 14, 2011. 1. We are making this proposed amendment on the basis of the following four factors: First, the number of vessels participating in the food aid program has declined. In a 2009 report to Congress, U.S. Agency for International Development (USAID) and U.S. Department of Agriculture (USDA) jointly stated that, due to the declining size of the U.S.-flag commercial fleet, USAID and USDA are forced to compete with the Department of Defense and other exporters for space aboard the few remaining U.S.-flag vessels, thereby limiting competition in transportation contracting and leading to higher freight rates. Second, our analysis of ocean transportation costs showed that food aid shipments on foreign-flag carriers cost the U.S. government, on average, $25 per ton less than U.S.-flag carriers. Third, although the 3-year requirement was established to provide employment opportunities to U.S. shipyards, since 2005, U.S. shipyards have built only two new U.S.-flag vessels appropriate for transporting food and the vessels have not been awarded a food aid contract. Fourth, the 3-year requirement applies only to food aid and not to defense agencies and the U.S. Export-Import Bank. The elimination of the 3-year waiting period can ease entry of new vessels into U.S. food aid programs, with the potential to increase competition among eligible U.S.-flag ships and reduce the cost of transportation. 2. We added clarifying language to describe the U.S. Department of Transportation's reimbursement to USAID and USDA for the ocean freight differential (OFD). However, the OFD represents a cost to the U.S. government. In addition, according to USAID and USDA, the OFD reimbursements for monetization are transferred to the general food aid accounts of both agencies, can be used to fund either emergency or nonemergency programs, and are likely not fully available to fund development assistance. 3. 
Our analysis of transportation costs was based on Kansas City Commodity Office (KCCO) data covering all monetization transactions for both agencies for fiscal years 2008 through 2010 and is thus a precise calculation of the actual cost to the U.S. government.

4. In a 2009 report to Congress, USAID and USDA jointly stated that, due to the declining size of the U.S.-flag commercial fleet, the agencies are forced to compete with the Department of Defense and other exporters for space aboard the few remaining U.S.-flag vessels, thereby limiting competition in transportation contracting and leading to higher freight rates.

5. While we did not analyze cargo freight bids, our analysis of KCCO data included more than 5,000 food aid purchase transactions for fiscal years 2007 to 2010. These data included the number of vessels awarded all food aid contracts, not just Title II contracts, by fiscal year, and they showed that both the number of vessels and the tonnage shipped per year had declined. We also determined the actual difference in cost to the U.S. government between U.S.- and foreign-flag vessels.

6. According to KCCO data, the number of U.S.-flag vessels awarded food aid contracts in fiscal year 2002 was 134.

In addition to the individual named above, Joy Labez (Assistant Director), Carol Bray, Ming Chen, Debbie Chung, Kathryn Crosby, Martin De Alteriis, Mark Dowling, Francisco Enriquez, Etana Finkler, Sarah M. McGrath, Julia Ann Roberts, Jerry Sandau, Jena Sinkfield, Sushmita Srikanth, Phillip J. Thomas, Seyda Wentworth, and Judith Williams made key contributions to this report.

International School Feeding: USDA’s Oversight of the McGovern-Dole Food for Education Program Needs Improvement. GAO-11-544. Washington, D.C.: May 19, 2011.

International Food Assistance: Better Nutrition and Quality Control Can Further Improve U.S. Food Aid. GAO-11-491. Washington, D.C.: May 12, 2011.

Global Food Security: U.S. Agencies Progressing on Governmentwide Strategy, but Approach Faces Several Vulnerabilities. GAO-10-352. Washington, D.C.: March 11, 2010.

International Food Assistance: A U.S. Governmentwide Strategy Could Accelerate Progress toward Global Food Security. GAO-10-212T. Washington, D.C.: October 29, 2009.

International Food Assistance: Key Issues for Congressional Oversight. GAO-09-977SP. Washington, D.C.: September 30, 2009.

International Food Assistance: USAID Is Taking Actions to Improve Monitoring and Evaluation of Nonemergency Food Aid, but Weaknesses in Planning Could Impede Efforts. GAO-09-980. Washington, D.C.: September 28, 2009.

International Food Assistance: Local and Regional Procurement Provides Opportunities to Enhance U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-757T. Washington, D.C.: June 4, 2009.

International Food Assistance: Local and Regional Procurement Can Enhance the Efficiency of U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-570. Washington, D.C.: May 29, 2009.

Foreign Aid Reform: Comprehensive Strategy, Interagency Coordination, and Operational Improvements Would Bolster Current Efforts. GAO-09-192. Washington, D.C.: April 17, 2009.

Foreign Assistance: State Department Foreign Aid Information Systems Have Improved Change Management Practices but Do Not Follow Risk Management Best Practices. GAO-09-52R. Washington, D.C.: November 21, 2008.

International Food Security: Insufficient Efforts by Host Governments and Donors Threaten Progress to Halve Hunger in Sub-Saharan Africa by 2015. GAO-08-680. Washington, D.C.: May 29, 2008.

Foreign Assistance: Various Challenges Limit the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-905T. Washington, D.C.: May 24, 2007.

Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. Washington, D.C.: April 13, 2007.

Foreign Assistance: U.S. Agencies Face Challenges to Improving the Efficiency and Effectiveness of Food Aid. GAO-07-616T. Washington, D.C.: March 21, 2007.

Maritime Security Fleet: Many Factors Determine Impact of Potential Limits on Food Aid Shipments. GAO-04-1065. Washington, D.C.: September 13, 2004.
Since the Food Security Act of 1985, Congress has authorized monetization--the sale of U.S. food aid commodities in developing countries to fund development. In fiscal year 2010, more than $300 million was used to procure and ship 540,000 metric tons of commodities to be monetized by the U.S. Agency for International Development and the U.S. Department of Agriculture. Through analysis of agency data, interviews with agency officials, and fieldwork in three countries, this report (1) assesses the extent to which monetization proceeds cover commodity and other associated costs and (2) examines the extent to which U.S. agencies meet requirements to ensure that monetization does not cause adverse market impacts.

GAO found that the inefficiency of the monetization process reduced funding available to the U.S. government for development projects by $219 million over a 3-year period: the process of using cash to procure, ship, and sell commodities resulted in $503 million available for development projects out of the $722 million expended. The U.S. Agency for International Development (USAID) and the U.S. Department of Agriculture (USDA) are not required to achieve a specific level of cost recovery for monetization transactions. Instead, they are required only to achieve a reasonable market price, a standard that has not been clearly defined. USAID's average cost recovery was 76 percent, while USDA's was 58 percent. Further, the agencies conduct limited monitoring of sale prices, which may hinder their efforts to maximize cost recovery. Ocean transportation represents about a third of the cost to procure and ship commodities for monetization, and legal requirements to ship 75 percent of the commodities on U.S.-flag vessels further increase costs. Moreover, the number of participating U.S.-flag vessels has declined by 50 percent since 2002, and according to USAID and USDA, this decline has greatly decreased competition.
Participation may be limited by rules, unique to food aid programs, that require formerly foreign-flag vessels to wait 3 years before they are treated as U.S.-flag vessels.

USAID and USDA cannot ensure that monetization does not cause adverse market impacts because they monetize at high volumes, conduct weak market assessments, and do not conduct post-monetization evaluations. Adverse market impacts may include discouraging food production by local farmers, which could undermine development goals. To help avoid adverse market impacts, the agencies conduct market assessments that recommend limits on the programmable volume of commodities to be monetized. However, USAID's assessments were conducted for just a subset of countries and have not yet been updated to reflect changing market conditions, and USDA's assessments contained weaknesses such as errors in formulas. Both agencies have at times programmed for monetization at volumes in excess of the limits recommended by their market assessments. Further, the agencies monetized more than 25 percent of the recipient countries' commercial import volume in more than a quarter of cases, increasing the risk of displacing commercial trade. Finally, the agencies do not conduct post-monetization impact evaluations, so they cannot determine whether monetization caused any adverse market impacts.

GAO recommends that Congress consider eliminating the 3-year waiting period for foreign vessels that acquire U.S.-flag registry to be eligible to transport U.S. food aid. Further, the USAID Administrator and the Secretary of Agriculture should develop a benchmark for "reasonable market price" for food aid sales; monitor these sales; improve market assessments and coordinate efforts; and conduct post-monetization market impact evaluations. USAID and USDA generally agreed with our recommendations. DOT disagreed with our Matter for Congressional Consideration due to its concern that the proposed statutory change might be detrimental to the U.S. maritime industry.
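As a quick check, the monetization totals cited above are internally consistent. The short script below (illustrative only; the variable names are ours, and the dollar amounts are the report's rounded 3-year totals) computes the shortfall and the pooled cost-recovery rate implied by those totals:

```python
# Illustrative check of the monetization figures cited in the report.
# Dollar amounts are the report's rounded totals for the 3-year period reviewed.

expended_millions = 722.0   # cash spent to procure, ship, and sell commodities
proceeds_millions = 503.0   # sale proceeds available for development projects

# Funding lost to the inefficiency of the monetization process
shortfall_millions = expended_millions - proceeds_millions

# Pooled cost-recovery rate implied by the totals
recovery_rate = proceeds_millions / expended_millions

print(f"Shortfall: ${shortfall_millions:.0f} million")   # $219 million
print(f"Cost recovery: {recovery_rate:.0%}")             # about 70 percent
```

Note that the agency-level figures in the report (76 percent for USAID, 58 percent for USDA) are averages over each agency's own transactions, so they need not equal this pooled rate.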
States historically have had jurisdiction over the way business entities within their boundaries are formed and over reporting requirements for these entities. Statutes and requirements vary from state to state. In general, however, forming a company involves certain steps. Initially, a company principal or someone acting on the company’s behalf submits formation documents to the appropriate state office—usually a division of the secretary of state’s office—but in some cases to a different state agency. All formation documents filed with the state are matters of public record and are available to anyone. Documents may be submitted in person, by mail or, increasingly, online. A minimal amount of basic information generally is required to form a company, although these requirements also vary from state to state. Generally, the documents must give the company’s name, an address where official notices can be sent to the company, share information for corporations, and the names and signatures of the persons incorporating (see fig. 1). State officials generally check to see that the documents supply the information required by statute. Fees vary by state from $25 to $1,000, and the process can take anywhere from 5 minutes to 60 days. See appendix II for more information on how formation documents are submitted and on the company formation fees in each state. Expedited services, available in some states, decrease processing times but may require an additional fee. Most states also require companies to file annual or biennial reports in order to stay in good standing, for a fee ranging from $5 to $500. Businesses may be incorporated or unincorporated. A corporation is a legal entity that exists independently of its shareholders—that is, its owners or investors—and that limits their liability for business debts and obligations and protects their personal assets. 
For example, the owners of a small store may desire limited liability protection in case a customer is accidentally injured inside the store and decides to sue. In this hypothetical case, the owners’ personal assets, such as their home and retirement savings, generally would not be subject to any award if the customer won the lawsuit. Limited liability means that owners or shareholders in a business entity are personally responsible only for the amount they have invested in the business, while the corporation itself is responsible for the debts and other obligations it incurs. The exception occurs when a court “pierces the corporate veil,” or disregards the legal entity that is the corporation, and holds the owners, shareholders, and sometimes the officers and directors responsible for the corporation’s acts and obligations. In contrast, the owners of unincorporated businesses, such as partnerships and sole proprietorships, are generally liable for all debts and liabilities incurred by their businesses. However, these types of businesses also offer tax advantages that corporations do not. The limited liability company (LLC) is a fairly new business form that is a hybrid of the corporation and the partnership. Wyoming passed the first law permitting formation of LLCs in 1977, and Florida followed suit in 1982. By the mid-1990s, all states had enacted LLC statutes. Like a corporation, an LLC protects its owners, which are referred to as members, from some debts and obligations; like partnerships and sole proprietorships, however, it may confer certain tax advantages. In addition, LLCs can choose a more flexible management structure than corporations. Table 1 shows the key characteristics of the different types of U.S. businesses. Historically, the corporation has been the dominant business form, but recently the LLC has become increasingly popular. According to our survey, 8,908,519 corporations and 3,781,875 LLCs were on file nationwide in 2004. 
That same year, a total of 869,693 corporations and 1,068,989 LLCs were formed. Figure 2 shows the number of corporations and LLCs formed in each state in 2004. Five states—California, Delaware, Florida, New York, and Texas—were responsible for 415,011 (47.7 percent) of the corporations and 310,904 (29.1 percent) of the LLCs. As shown in figure 3, Florida was the top formation state for both corporations (170,207 formed) and LLCs (100,070) in 2004. New York had the largest number of corporations on file in 2004 (862,647) and Delaware the largest number of LLCs (273,252). Data from the International Association of Commercial Administrators (IACA) shows that from 2001 to 2004, the number of LLCs formed increased rapidly—by 92.3 percent—although the number of corporations formed increased only 3.6 percent. Most states do not require ownership information at the time a company is formed, and while most states require corporations and LLCs to file annual or biennial reports, few states require ownership information on these reports. Similarly, only a handful of states mandate that companies list the names of company managers on formation documents, although many require managers’ information on periodic reports. States may require other types of information on company formation documents, but typically they do not ask for more than the name of the company and the name and address of the agent for service of process (where legal notices for the company should be sent). Most states conduct a cursory review of the information submitted on these filings, but none of the states verify the identities of company officials or screen names against federal criminal records or watch lists. The owners of a company are, in the case of a corporation, the shareholders of that corporation and in the case of an LLC, the members of that LLC. 
According to our survey results, none of the states collect ownership information in the formation documents—articles of incorporation—for corporations (see fig. 4). State statutes generally do, however, require corporations to prepare and maintain lists of shareholders that, unlike formation documents, are not filed with the state or part of the public record. With respect to LLCs, states generally require a manager-managed LLC to name the designated manager instead of a member on the formation document—articles of organization. However, the manager is not necessarily an owner of the LLC. LLCs usually prepare and maintain operating agreements that name the owners (members) and their financial interests in the company, but these operating agreements are not filed with the state or part of the public record. According to our survey results, four states—Alabama, Arizona, Connecticut, and New Hampshire—request some ownership information when an LLC is formed. For example, in Alabama, the formation documents must list the names and mailing addresses of the initial members of an LLC. A Connecticut official said that either a member’s or a manager’s name was required on the articles of organization. In New Hampshire, a member or manager is required to sign the articles of organization. Arizona statutes mandate that manager-managed LLCs must list on formation documents the name and address of each member owning more than a 20 percent interest and that member-managed LLCs must list all members’ names and addresses. Depending on the management structure of an LLC, ownership information may be included on the formation documents in more states. If an LLC is managed by its members, some states require the LLC to provide the name and address of at least one member on the formation document. Most states require corporations and LLCs to file periodic—annual or biennial—reports, but not many states require ownership information on these reports (see fig.
4). With respect to corporations, three states (Alaska, Arizona, and Maine) indicated on our survey that the name of at least one owner was required on corporations’ periodic reports. In Alaska, any person owning more than a 5 percent interest in a corporation must be listed on the periodic report, according to a state official. An official from Arizona said the state requires that corporate periodic reports list the names and addresses of shareholders owning more than 20 percent of company stock. In Maine, statutes require that periodic reports include the names and addresses of shareholders of a corporation only if there are no directors. With respect to LLCs, our survey showed that five states require LLCs to list at least one member on their periodic reports. As with corporations, Alaska requires the name and address of any person owning more than a 5 percent interest in an LLC to be listed on the company’s periodic report. A state official told us that LLCs in Kansas are required to list on their periodic reports the names and post office addresses of members owning at least 5 percent of the capital in the company. Connecticut and New Hampshire require either a manager’s or at least one member’s name on their periodic reports. Maine requires the name and business or residential street address of each manager or, if there are no managers, of each member on the periodic report. Finally, in states that require a manager’s or managing member’s name on periodic reports, the reports for member-managed LLCs might include a member’s name. Fewer than half of the states require the names and addresses of company management or directors on company formation documents. Management may include officers—chief executive officers, secretaries, and treasurers—who help direct a corporation’s day-to-day operations, as well as managers or managing members of LLCs.
Directors serve on the governing board of a corporation and are responsible for making important business decisions, especially those that legally bind the corporation. Two states require officers’ names and addresses on company formation documents, 10 states require the names of directors, and 9 states require the addresses of directors (see fig. 5). Some states have additional information requirements for company formations. For instance, our review of state statutes found that Louisiana does not require information on directors on the incorporation documents but does require directors’ names and addresses on an initial report that must be filed with the incorporation documents. We also found that Oklahoma requires the names and addresses of the directors only if the persons incorporating the company are not responsible for its operations after the incorporation documents are filed. More states require management information on LLCs. Nineteen states require the names of managers or managing members on formation documents, and 18 states require their addresses. Most states require the names and addresses of corporate officers and directors and of managers of LLCs on periodic reports (see fig. 5). For corporations, 47 states require the names of officers on periodic reports, and 46 states require officers’ addresses. Thirty-eight states require directors’ names, and 37 require directors’ addresses. For LLCs, 28 states require the manager’s or managing member’s name, and 27 states require their addresses. However, even if states require disclosure of directors’ names, those listed may not be the individuals who are truly directing the company because in some cases, the individuals could be nominee directors who act only as instructed by the beneficial owner of the company. Also, managers may or may not be owners of the LLC.
States may also ask for other general information about a company, including its name; the name and address of the agent for service of process (where legal notices for the company should be sent); and for corporations, information about the number and types of shares the company will issue. Appendix III shows the type of information that each state collects on formation documents. Many states specify that the agent’s address must be a physical street address and not a post office box. In addition, a majority of the states include on their formation documents space for an individual to sign as the incorporator (in the case of a corporation) or organizer (in the case of an LLC) of the company. The incorporator or organizer may be the agent who is forming the company on behalf of the owners or it may be an individual affiliated with the company being formed. Most states permit an individual or entity to serve as incorporator without regard to state residency or later participation in the company, but at least two states require that the incorporator be associated with the company in some way. For example, the articles of incorporation for Arkansas and California state that if a newly incorporated company has chosen initial officers or directors, one or more of them must sign as the incorporator. Otherwise, an unaffiliated individual can sign as the incorporator. Many states require a brief statement of purpose or a principal office address in order to form a corporation or LLC. In reviewing state statutes and state forms, we found that 20 states require a statement on the purpose of a corporation and 16 require a statement of purpose for LLCs on formation documents. In some states that ask for a statement of purpose, a general statement such as “the purpose of the corporation is to engage in any lawful act or activity…” is sufficient. 
Alaska requires an additional form that discloses the North American Industry Classification System (NAICS) number that most closely describes the activities of a corporation. Fourteen states require a principal office address to form a corporation, and 23 states require a principal office address to form an LLC. The principal office generally means either the address of the company’s place of business or its mailing address. Therefore, even in states where a principal office address is required, this address may not indicate the company’s actual place of business. For example, Arizona’s form asks for a known place of business in Arizona, but the instructions for the form state that this address may be in care of the address of the company’s agent. Some states have unique requirements for information on newly forming companies. For example, the articles of incorporation forms for Louisiana, Rhode Island, and South Dakota must be notarized. Similarly, an attorney licensed to practice in South Carolina must sign company formation documents in that state. Private sector officials told us that more states used to require a notary’s signature on company formation documents, but that most had repealed this provision. A Louisiana state official said that requiring a notary’s signature was a “historical” decision and, despite an effort to change the law, was likely to remain a requirement. A few states (Louisiana, Massachusetts, Mississippi, and Pennsylvania) also require a federal taxpayer identification number (TIN) on some company formation documents. Kansas requests a TIN on formation documents, but it is not required by statute. Louisiana and Massachusetts state officials told us that even though a TIN is required, company formation documents are not rejected if it is not included. These states originally used the TIN as a tracking number for filings. For instance, the Kansas Department of Revenue uses the information to match companies in its database. 
A Massachusetts official said that the state was moving away from using TINs in all cases and now assigns a private unique identification number to each company for tracking purposes. While the requirement to include a TIN is still in place for LLCs in Massachusetts, it was recently deleted from the corporation statute because the Secretary of State’s office received many complaints about this number being publicly available on filing documents. Forty-two states reported on our survey that their information requirements for persons or entities from outside the United States forming a U.S. company were the same as for U.S. citizens. The states that reported a difference said it was simply that proof of the company’s existence had to be included and that documents had to be translated into English. For example, Minnesota and North Carolina commented that if an entity from another country was applying to conduct business in those states, the entity must provide proof of good standing or a document certifying that the company existed in the original country. Alaska is the only state that requires the name and address of each alien affiliate or a statement on the articles of incorporation that there are no alien affiliates. An “alien affiliate” is an individual from another country who has some ownership or control of a company, or an entity controlled by an individual or a corporation from another country. An Alaska state official said that this information was originally required to identify offshore fisheries and their owners. Nearly all of the states reported that they reviewed filings for the required information and fees and checked to see if the proposed name was available (see table 2).
In Arizona, for example, state officials said that the main reasons filings were rejected were that required information, such as the agent’s address or signature or the type of management structure of an LLC, was missing and that the company name was not distinguishable from an existing entity’s name. Other state officials said they also rejected filings because they were missing key information, the company name was not available, or the fee was not included. Many states also reported that they reviewed filings to ensure compliance with state laws. In Virginia, for instance, filings are reviewed for more than just the required information. An attorney in the state office reviews all formation filings for substantive issues. For example, Virginia law requires that shareholders elect directors, and state officials said that they would reject a filing if the articles stated that the company’s directors would be chosen by a different method. None of the states reported verifying the identities of incorporators or company officials or using federal criminal records or watch lists to screen names. State officials gave several reasons for not taking this step when reviewing formation documents. In interviews and on the survey, many state officials emphasized that their role was authorized by statute as only administrative, not investigative. In fact, 45 states reported that they did not have investigative authority to take action if they identified information that could indicate criminal activity, although some state officials said they can refer suspicious activity to law enforcement. Only two states—Colorado and North Carolina—reported that they did have investigative authority. Further, two states noted that their state statutes required them to file formation documents as long as the documents contained the required information. 
In addition, one state official said that states did not have the resources to verify the information submitted on formation documents and other officials commented on the survey that verification would significantly increase the costs and workloads of their offices. Another stated that the staff would not know how to determine the validity of information individuals provided to verify their identity. While states do not verify the identities of individuals listed on company formation documents, an individual may be charged with perjury in some states if law enforcement officials find in the course of an investigation that an individual submitted false information on a company filing. We found in our review of state forms that 10 states note the penalties for providing false information on their company formation documents. One state official provided an example of a case in which state law enforcement officials charged two individuals with, among other things, perjury for providing false information about an agent on articles of incorporation. A few states reported that they directed staff to look for suspicious activity or fraud in company filings. For example, an official in Alabama told us that staff who reviewed filings looked for anything out of the ordinary, such as a bank from another country that wanted to form a company in Alabama but would not provide the required information. An official in Missouri said that despite not having a formal procedure or policy for reviewing filings for suspicious activity, staff were trained to look for things that were out of the ordinary. Such things might include discrepancies like two signatures of the same name with different handwriting. However, most states reported that they did not direct staff to look for suspicious information. According to an official in Alaska, the state has no formal mechanism for identifying or reporting suspicious information. 
The official said that staff would notice unusual fictitious names on filings, but with a filing fee of $250 in Alaska, this type of activity was rare. Two state officials told us that when staff noticed something unusual, they typically contacted the applicant for an explanation but still usually filed the documents. If something appeared especially unusual, they referred the issue to state or local law enforcement or the Department of Homeland Security. One official said his office had never received a response from law enforcement about issues that had been forwarded. The roles of company formation agents and agents for service of process differ, as do the state statutes that govern them. Company formation agents submit documents on a company’s behalf, and agents for service of process receive legal and tax documents for clients. Most states do little to oversee these agents and do not verify information about them. Further, states generally do not require agents to collect information on company ownership or management or to verify the information they collect. The agents we interviewed generally collect only contact information and any information required by the states and do not verify the information. In some circumstances—primarily with international clients and clients requesting special services—some agents may verify a client’s identity. Company formation agents are firms that help individuals form companies by filing required formation documents and other paperwork with the appropriate state agencies. Although individuals may file their own formation documents directly, a company formation agent can facilitate the process. Agents for service of process can be either persons or entities that are designated to receive important tax and legal documents on behalf of businesses. For example, if a company is being sued, the agents for service of process will accept the legal paperwork and forward it to their company contacts. 
Historically, the role of agents was to ensure that companies had a presence in each state in which they operated and could be reached there. Our review of state statutes showed that almost all states require companies to designate an agent for service of process on company formation documents. These agents may provide other services, such as filing amendments and periodic reports, assisting with mergers and acquisitions, obtaining certificates of good standing, and conducting other public record searches. Agents may also provide assistance in setting up bank accounts or providing directors, although only a couple of the 12 agents we contacted said that they would provide these services, and then only in special situations. According to a few agents we interviewed, large companies are more likely to hire agents, especially large companies that need an agent for service of process in multiple states. Most states have basic requirements for agents for service of process. Forty-six states indicated on our survey that they required agents for service of process to have a physical address in the state (not a post office box) where documents could be received, while seven states required agents to keep specific office hours. Individuals serving as agents for service of process generally must be state residents or have a state address, but firms acting as agents generally must be authorized to do business in the state and must have filed company formation documents. A few states have additional requirements for agents. For example, in Maine, an agent must be a natural person, while in Louisiana, a professional law corporation or partnership may serve as the agent. In Virginia, agents for service of process must be individuals who are both a resident and an officer of the company being formed, members of the state bar, or companies authorized to do business in the state, and must specify their qualification on the company formation documents.
We found limited instances of state oversight of agents. A few state officials we spoke with reported checking company formation documents to ensure that agents had a local address, but in general they did not check to see whether the address was valid. One state official said the office verified addresses only in special cases. Delaware reviews its agents’ addresses if several hundred transactions occur from the same address, to ensure that it is an actual address and not a post office box. In addition, Delaware is unique in allowing approximately 40 agents to have direct access to the state’s database to enter or access company information. The state contracts with these agents, and in return they must meet certain guidelines and pay access fees. The state reserves the right to terminate these contracts at any time but thus far has not had cause to do so because of nefarious behavior. State officials in Florida and Wyoming told us that they checked their records to ensure that companies acting as agents for service of process were authorized to conduct business in the state. Thirty-nine states said they did not track the number of agents for service of process operating in the state, and 36 did not have an official listing of agents. However, a couple of states have registration requirements for operating within their boundaries. Wyoming requires agents serving more than five corporations to register with the state annually, under a law that was enacted after some agents gave false addresses for their offices, according to a state official. To register, agents must pay a $25 annual fee and complete a form each January giving contact information, including a physical and mailing address, and indicating whether the applicant or any company principal has ever been convicted of a felony. The state official said that the office kept the information on file in case an agent was investigated.
California law requires any corporation serving as an agent for service of process to file a certificate with the Secretary of State’s office and to list the California address where process can be served and the name of each employee authorized to accept process. Seventeen states indicated on the survey that they provide the names of all or some agents on a Web site, and 6 states reported having some requirements for agents wanting to be listed on the Web site. For example, Delaware requires a business to have been operating for at least 1 year, to be in good standing, and to serve more than 50 clients. Although the notion is controversial, some state officials and agents said that some level of uniform registration or certification in the industry might be desirable, for several reasons. One agent told us that the few agents who do not follow the current rules give the industry a bad name and that regulation would eliminate some of these agents. Another agent felt that registration would create some standards in the industry and provide some legitimacy for firms conducting business in international jurisdictions that require registration. However, some agents felt that regulation would be difficult to implement or detrimental to the industry. One agent felt that if the industry were regulated, individuals would avoid using agents and form their companies themselves. Another agent believed that the costs associated with meeting standards could be high enough to drive smaller firms out of business. Agents on both sides of the issue said that the industry should be involved in efforts to develop any type of registration or regulation that would affect their business. Agents we spoke with generally collected only contact information and the information required by a state for company formation documents or periodic reports. 
This information may include contact names for billing and for forwarding service of process, annual reports, or tax notifications. These agents said they may have only one contact name for a company. According to several agents, they rarely collect information on ownership since states do not require it. In general, agents said they collect the names and addresses of officers and managers, if required, and when serving as an incorporator, agents may collect information on the company directors or shareholders, even if it is not required. This information allows agents to resign as incorporators and pass on the authority to conduct business to the new company principals. Depending on the size of the company, the directors and the officers may also be the owners, but one agent told us that he did not try to determine if they were. Several agents also told us that they do not always work directly with the principals of the company because the agents interact directly with law firms or transact a large part of their business online, and therefore may not have access to additional information not required by the state. One agent also noted that collecting ownership information was not necessary to doing his job. Even if agents collect information such as the names of officers and directors, a few agents said that they might not keep records of the information. For example, two agents told us that their firms did not keep a database of company information, in part because company documents filed with the state are part of the public record. Because the information is public, one agent felt it was not necessary to bear the additional cost of storing it internally. According to our review of state statutes, some states have record retention requirements that oblige corporations to make shareholder lists or the stock ledger available at the registered office within the state (which may be the agent’s office), although the requirements vary by state. 
For example, in Nevada, the registered office is required to keep the stock ledger or a file listing the location of the ledger, and in New Mexico, a list of shareholders must be available at a company’s registered office 10 days prior to a shareholders’ meeting. States generally do not require agents to verify the information collected from clients, and few of the agents we interviewed did so. In general, agents told us they do not verify the validity of names or addresses provided, screen names against watch lists, or require picture identification of company officials. The extent of agents’ verification might include checking that the minimum statutory requirements have been met, researching an address if a client’s mail is returned, or comparing a credit card address to a company’s address. One agent said that his firm generally relied on the information that it received and did not feel a need to question it, although another agent said that his firm might request additional information to assess risk if something about a potential client seemed suspicious. Two agents with whom we spoke indicated that they collected additional information that could be used to verify the identity of clients, often when working with international clients, although the choice to verify information did not appear to be based on a formal risk assessment. These agents said they might check names against caller identification systems on their telephones or against the Office of Foreign Assets Control (OFAC) list of Specially Designated Nationals and Blocked Persons. One agent said that her firm created a document to collect additional information from clients from unfamiliar countries. This agent’s document was based in part on federal standards for financial institutions under the USA PATRIOT Act. 
On the document, the agent asks for a federal tax identification number (TIN); company ownership information; information from the company Web site; e-mail addresses; and, for individuals, identification, proof of occupation, and citizenship status. Another agent we interviewed in Delaware asked for identification and used a specific agreement with certain international clients. In some cases, international agents contact the Delaware agent for assistance in forming U.S. companies for their clients in other countries. According to this agreement, international agents must verify the identity of an individual wishing to form a company through the Delaware agent by requiring their client to provide the principals’ names, addresses, dates and places of birth, nationalities, and occupations, as well as certified copies of their passports, proof of address, and a reference letter from a bank. This agent also required a client requesting mail forwarding services to provide additional information, such as a Social Security number, in addition to the information required by the U.S. Postal Service on its mail forwarding form. The agent said the firm collected this information to screen potential clients and protect the firm and that it would stop representing a client if the client generated a significant amount of service of process, complaints, or visits from investigative agents. In general, the agent felt the additional requirements were not burdensome. Another agent noted that any extra time added to the process was a result of the time required for the client to provide the information. In addition, a few other agents said that they used the OFAC list to screen names on formation documents or on other documents required for other services provided by their company, although several agents told us they were not aware of the OFAC list. 
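The OFAC screening these agents describe is, at its core, an automated name-matching problem. As an illustrative sketch only (neither the code nor the sample data comes from this report; the sample list and the similarity threshold are hypothetical), a screening tool might normalize client names and compare them against list entries:

```python
# Illustrative sketch only: a simplified version of the kind of name
# screening commercial OFAC tools perform. The sample list and the
# 0.85 similarity threshold are hypothetical, not taken from the
# report or the real SDN list.
from difflib import SequenceMatcher


def normalize(name):
    """Lowercase, strip punctuation, and sort name tokens so that
    'DOE, John' and 'John Doe' normalize to the same string."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in name)
    return " ".join(sorted(cleaned.lower().split()))


def screen(client_name, watch_list, threshold=0.85):
    """Return watch-list entries whose normalized similarity to the
    client name meets the threshold (catches misspellings too)."""
    target = normalize(client_name)
    hits = []
    for entry in watch_list:
        score = SequenceMatcher(None, target, normalize(entry)).ratio()
        if score >= threshold:
            hits.append(entry)
    return hits


# Hypothetical sample data for illustration.
SAMPLE_LIST = ["DOE, John", "ACME Trading Co."]
print(screen("John Doe", SAMPLE_LIST))    # matches after normalization
print(screen("Jane Smith", SAMPLE_LIST))  # no match
```

Real screening products add alias records, transliteration rules, and analyst review of near matches; the point of the sketch is only that even basic automation makes the check routine, consistent with the agents who did not find screening burdensome.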
A few agents we interviewed in Delaware used commercially available software to screen client names against the OFAC list, a step strongly encouraged by the Secretary of State. However, one agent told us that his staff had never gotten a match on the list. One agent felt that running checks on the names listed on company documents could add time to the process but would likely not be a burden. Other agents found the list difficult to use and saw using it as a potentially costly endeavor. OFAC officials reported that they had also heard from agents that screening names against the OFAC list would result in increases in the time and cost of the process, which could lead to a loss in business. Law enforcement officials are concerned about the use of U.S. shell companies to facilitate or hide criminal activity. Law enforcement officials we interviewed noted that they often used the information available from states in investigating shell companies that were suspected of criminal activities and said that, in some cases, the names of officers and directors on company filings had generated additional leads. However, officials also said that the information states collected was limited, noting that it could provide a place to start but that some cases had been closed because of insufficient information on beneficial owners. Law enforcement officials and other reports indicate that shell companies have become popular tools for facilitating criminal activity, particularly laundering money. In December 2005, several agencies of the federal government, including the Departments of the Treasury, Justice and Homeland Security, issued the first governmentwide analysis of money laundering in the United States, which described, among other things, how shell companies can be used to launder money. 
Shell companies can aid criminals in conducting illegal activities by providing an appearance of legitimacy—for example, an artificial source of income or proof of the type of transactions legitimate companies conduct. Shell companies can also provide access to the U.S. financial system through U.S. bank accounts or offshore accounts in banks that have a correspondent relationship with a U.S. bank. For example, in a December 2005 enforcement action, the Financial Crimes Enforcement Network (FinCEN) determined, among other things, that the New York branch of ABN AMRO, a banking institution, did not have an adequate anti-money-laundering program and had failed to monitor approximately 20,000 funds transfers—with an aggregate value of approximately $3.2 billion—involving the accounts of U.S. shell companies and institutions in Russia or other former republics of the Soviet Union. Determining a precise number of criminal cases involving the use of shell companies to hide illicit activity is difficult because forming such companies is not a crime; rather, it is sometimes a method for moving money that may be associated with a crime. Therefore, the use of shell companies for illicit activities is not tracked by law enforcement or government agencies. However, law enforcement officials told us they are seeing a wide range of indicators that suggest the increased use of U.S. shell companies for illicit activities. FinCEN officials told us they see many suspicious activity reports (SARs) filed by financial institutions that potentially implicate shell companies in the United States. For example, FinCEN reported in the U.S. Money Laundering Threat Assessment that financial institutions filed 397 SARs between April 1996 and January 2004 involving shell companies, East European countries, and correspondent bank accounts. The aggregate amount of activity reported in these SARs totaled almost $4 billion. 
Justice officials said that law enforcement officials from other countries have asked the United States to help them track down the individuals who had formed U.S. shell companies to hide illicit activity, but the lack of ownership information is obstructing their investigations. For example, a review by Justice of requests for legal assistance in 2005 from Russia and Ukraine found 30 requests for assistance from Russian authorities and 75 requests from Ukraine authorities involving U.S. shell companies. These requests typically ask for assistance in identifying individuals associated with the U.S. companies. However, Justice’s attempts to gather information on the companies in response to these requests are obstructed by the lack of information maintained by states and agents. These requests often involve serious crimes occurring in other countries that implicate a U.S. company. For example, in early 2006, one request sought information on a U.S. corporation allegedly used to smuggle a toxic controlled substance between two Eurasian countries because the name of the U.S. corporation was on the foreign customs papers. OFAC expressed concerns that shell companies can be used to facilitate transactions with targets (individuals, entities, or countries) of U.S. economic sanctions. In one example, during the period when the United States maintained sanctions against Yugoslavia (Serbia and Montenegro), a U.S. company formation agent filed incorporation papers for a Serbian entity, which then opened bank accounts in the United States as a U.S. company to transfer money through the United States. The FBI told us they currently have over 100 ongoing cases investigating market manipulation and that the majority of these cases involve the use of shell companies. One closed case, for example, involved the sale of fraudulent private placement offerings to the investing public. The convicted individuals used U.S. 
shell companies to give investors the impression that they were investing in legitimate companies, but instead the individuals stole the investors’ proceeds. In some cases, individuals have used shell companies to pump up the price of a stock and then sell their entire position in the stock while legitimate investors are left with worthless stock. The FBI has also expressed concern about the use of third-party agents to form thousands of shell companies in the United States for criminals operating in other countries; the criminals then use the shell companies to open U.S. bank accounts. The FBI believes that U.S. shell companies are being used to launder as much as $36 billion from the former Soviet Union. An FBI analysis of the use of these third-party agents found that they often register the shell company using nominee officers to keep the foreign beneficial owner anonymous and use companies created at an earlier date—“aged shelf companies”—to give banks and regulatory authorities the impression the company has longevity. Law enforcement officials provided us with examples of cases involving the use of U.S. shell companies. According to a Department of Justice report on Russian money movements, many of the investigations involving shell companies use common schemes to launder money and conceal money movements. In a “fictitious services” scheme, the criminals enter into a contract with a company purportedly offering an intangible service, such as consulting. The consulting company is actually a shell company owned by the criminals, so that payments for consulting services are actually payments into a bank account under their control. In one case involving a fictitious services scheme, a former public official from the Russian Federation allegedly helped to unlawfully divert international nuclear assistance funds that were intended to upgrade the safety of nuclear power plants operating in Russia and several former republics of the Soviet Union. 
The indictment stated that the suspects formed shell companies in Pennsylvania and Delaware that received the nuclear assistance payments and then diverted over $15 million of this money to corporate bank accounts. Ultimately the money was allegedly transferred to other personal bank accounts in the United States and other countries and the transfers concealed behind fictitious business contracts. The subjects of the indictment allegedly used at least $9 million to fund business investments and loans for their personal enrichment. IRS investigations have also uncovered the use of U.S. shell companies in tax evasion schemes. In one tax evasion case, two co-conspirators used nominee names to open bank accounts and form U.S. corporations in Florida to hide their assets and income to avoid tax liabilities. One co-conspirator was sentenced to 10 years in prison and ordered to pay $1.6 million in restitution. The other co-conspirator was sentenced to 25 years imprisonment for his involvement in the tax evasion scheme, as well as a related investment fraud scheme. ICE officials also told us they have encountered the use of U.S. shell companies in their investigations. ICE officials interviewed a third-party agent who had registered approximately 2,000 companies for international clients. The registrations took place mostly in Oregon, but also in Arkansas, Colorado, Idaho, Iowa, Kentucky, Montana, South Dakota, Washington, and West Virginia. The investigation was prompted by a bank that had reported suspicious transactions in an account of one of the companies registered by this agent. This case was subsequently closed because the agent moved from the area and could not be found. Law enforcement officials obtain some company information from states and agents through a variety of methods. Our review of states’ Web sites found that 46 states provide some company information online for free, but that states post different amounts of company information on their Web sites. 
For instance, Virginia officials told us that while the name of the incorporator is on the articles of incorporation, it is not added to the online database. In addition, Delaware lists only the company name and the name and address of the agent online, while Florida makes copies of all documents available with all of the information they contain, including names of directors and managers. Given the variations in what is available online, law enforcement officials may request paper copies of filings that could provide more information. Law enforcement officials may also obtain company information from agents, although some law enforcement officials said they do not usually request information from agents because too little would be available, and one state law enforcement official said the agents might tell their clients about the investigation. Some agents told us they usually collect the same information as the state, but other agents and law enforcement officials indicated that agents might have additional information that could be useful in investigations, such as contact addresses and methods of payment. While ownership information is typically not available from states or agents, some law enforcement officials said the names of officers and directors and other information on forms could be helpful in some investigations. If ownership information is not available, law enforcement officials said that the names of officers and directors—even false names—could provide productive leads. In addition, law enforcement officials said that other information, such as addresses, could be investigated and also might provide productive leads. In other cases, though ownership information is not required, the actual owners may include personal information on the state’s documents. For example, the IRS investigated four people in Michigan who formed 15 shell corporations in Michigan and Indiana. 
Using these shell companies, the co-conspirators established 37 lines of credit at a bank and charged a number of large purchases, including real property, several luxury cars, jewelry, boats, and a motor home. The bank incurred losses of approximately $9.6 million. The IRS investigators found key pieces of evidence, including the identity of the co-conspirators, on the articles of incorporation and annual reports maintained by the states where the corporations were formed. Two of the co-conspirators were sentenced to 45 months and 51 months in prison and ordered to pay $327,500 and $2.8 million in restitution, respectively. In another IRS case, a man in Texas used numerous identities and corporations formed in Delaware, Nevada, and Texas to sell or license a new software program to investment groups. He received about $12.5 million from investors but never delivered the product to any of the groups. The man used the corporations to hide his identity and to provide a legitimate face to his fraudulent activities. He also used the companies to open bank accounts to launder the money obtained from investors. IRS investigators found from state documents that he had incorporated the companies himself and often included his co-conspirators as officers or directors. The man was sentenced to 40 years in prison. In some cases, law enforcement officials have evidence of a crime but cannot connect an individual to the criminal action without ownership information. For example, an Arizona law enforcement official charged with helping investigate an environmental spill that caused $800,000 in damage said that the investigators could not prove who was responsible for the damage because the suspect had created a complicated corporate structure involving multiple company formations. 
ICE officials described a subject who allegedly used an agent to establish a Nevada-based corporation that in almost 2 years received 3,774 wire transfers totaling $81 million from locations such as the Bahamas, British Virgin Islands, Latvia, and Russia. However, ICE could not identify the suspect as the beneficial owner of the corporation because other people had handled the transactions. These cases were not prosecuted because investigators could not identify critical ownership information. Most of the law enforcement officials we interviewed said they had also worked on cases that reached dead ends because of the lack of U.S. company ownership information. State officials, agents, and others we interviewed said that collecting company ownership information could be useful to law enforcement and other interested parties. As we have discussed, investigations can be closed because of a lack of information, such as the names of the beneficial owners of a company. But if states or agents collected additional information on companies, filing times could increase, and a few states worried that costs could increase and company start-ups could be deterred. Further, information collected when companies were being formed might not be complete or up to date, as officers and directors might not have been chosen and the ownership could change after the company was formed. In addition, including such information in public records could cause concerns about privacy and related issues. State officials, agents, and other experts in the field suggested internal company records, financial institutions, and the IRS as alternative sources that might already be collecting this information. However, obtaining information from these sources also has limitations because the information may not be up to date or available. 
Collecting ownership information when companies are formed could have some positive impacts for law enforcement as well as members of the public searching for this information. As shown in figure 6, 21 states in our survey said that if more ownership information were collected at company formation, that additional information would be available to law enforcement and the public. And as we have discussed, law enforcement investigations can benefit from knowing who owns and controls a company. A couple of state officials said that collecting such information would also allow them to be more responsive to consumer demands they have received for this information. For example, officials in Arizona and the District of Columbia told us that they often received phone calls from the public asking for ownership information they could not provide. In addition, one agent suggested that requiring agents to collect more ownership information could discourage dishonest individuals from using agents and could reduce the number of unscrupulous individuals in the industry. State officials and agents noted that collecting additional information could increase filing times, and a few were concerned about other negative effects. Our survey showed that 29 states reported that the time needed to review and approve formations would increase if information on ownership were collected, since more data would need to be recorded in their databases (see fig. 6). A few states calculated that they would incur additional costs in modifying their forms, databases, and online filing systems to accommodate the new requirements. One state official said the extra time that would be required to review filings would reduce the benefits of electronic filing. Agents we interviewed also said that collecting and storing ownership information would increase the time necessary to provide their services and raise costs for both themselves and their clients. 
Other agents said that collecting and verifying ownership information would be difficult because they may have contact only with law firms and not company officials when a company is formed. State officials and others also noted that individuals could easily provide false names if ownership information were required but not verified. Our survey results showed that in nearly half the states (23), officials thought the number of companies formed in their jurisdictions would stay about the same if all of the states collected this additional information (see fig. 6). But some state officials and others we interviewed said that if the requirements were not uniform, states with the most stringent requirements could lose business to other states or even countries, potentially losing state revenue. Some state officials noted the importance of the fees generated from company formations to state general revenue funds. For example, a Delaware official said that 22 percent of the state’s revenue comes from the company formation business. Also, Nevada and Oregon officials stated that their offices were revenue-generating offices for the state. State officials, agents, and industry experts commented that states would be unlikely to pass comparable laws because state officials have such different opinions about the amount of information that should be disclosed. As a result, individuals could form companies in states where the requirements were easiest to follow. Agents also expressed concern that they could lose business if they collected ownership information, because individuals might be more likely to form their own companies and serve as their own agents. Individuals forming businesses could also be affected by new requirements for collecting company information. Some officials noted that the additional time required to review filings could slow down business dealings and might even derail them. 
One state official commented that such requirements would create a burden for honest business people who would have provided accurate information in the first place but would not deter criminals, who would provide false information in any case. According to a report on the use of companies for illicit purposes, requiring companies to disclose up front and to update ownership information may impose significant costs, particularly on small businesses. A few state and some private sector officials noted that an increase in the time and costs involved in forming a company might reduce the number of companies formed, because entrepreneurs and investors might be less likely to take the risks involved in forming or investing in new companies. Some state officials also noted that to change the information requirements, state legislatures would have to pass new legislation and grant company formation offices new authority. A few states indicated that collecting additional information would require higher fees that would also need to be set by their state legislatures. State officials also noted that since they are administrative agencies, they generally do not have the authority to question or verify the information provided on the forms and would need additional authority from state legislatures to do so. State and private sector officials pointed out that ownership information collected at formation or on periodic reports might not be complete or up to date. Information collected at formation, for instance, might not be useful because ownership information can change frequently throughout the year. For example, an official from Delaware commented that many privately held LLCs and corporations in Delaware and other states may have thousands of shareholders and LLC members that buy and sell shares and memberships on a daily basis. 
Another state official commented that collecting this information at formation would not be useful without requiring that it be updated frequently. In addition, since LLCs can be owned by individuals or other businesses, even if states required LLCs to list a member name, the name provided may not be that of an individual but another company. Disclosing ownership information on periodic reports, however, could mean that a year or more would pass before it was collected—too long to be of use in many investigations. In addition, we found that some states do not require these reports. Further, once it is formed, a shell company being used for illicit purposes in the United States or other countries may not file required periodic reports. Law enforcement officials told us that many companies under investigation for suspected criminal activities had been dissolved by the states in which they were formed for failing to submit periodic reports. State officials, agents, and other industry experts said the need for access to information on companies must be weighed against privacy issues. Company owners may want to maintain their privacy in part because state statutes have traditionally provided this privacy and in part to avoid lawsuits against them in their personal capacity. Some business owners may also seek to protect personal assets through corporations and LLCs. One state law enforcement official also noted that if more information were easily available, criminals and con artists could take advantage of it. He noted that information available on official Web sites was sometimes used to target companies for scams. For example, the official described a case in which an individual sent letters that appeared to be from a secretary of state’s office to companies listed on the state Web site, telling the recipients that they were to file their annual meeting minutes with the state, although no such requirement existed. 
The individual offered to provide filing services for a fee, and collected the fees from companies, but did not forward any minutes to the state. Providing more easily accessible information to the public could result in more such activities. Business owners might be more willing to provide ownership information if it were not disclosed in the public record. Some state officials we interviewed said that since all information filed with their office is a matter of public record, keeping some information private would require new legislative authority. The officials added that storing new information would be a challenge because their data systems are not set up to maintain confidential information. However, one official from Maryland said that keeping some information private would not be a problem since the office that accepted company formation and periodic report filings also handled tax filings and already had procedures for keeping information such as taxpayer identification numbers confidential. An official in Oregon also told us that the Corporations Division office had recently enacted procedures to keep some information private in cases such as domestic abuse. Individuals can petition the state to have information removed from databases available online and redacted in the paper file, but it is still available to law enforcement. The Arizona Corporation Commission also tries to remove Social Security numbers from its Web site if applicants include them on their paper forms, but maintains the information on paper forms. Because states do not typically collect and verify ownership information and because state and private sector officials could not quantify the extent of the possible costs of taking these steps, we reviewed the experiences of Jersey and Isle of Man in implementing the regulation of firms that provide services such as company formation (company service providers). 
Fewer companies are formed in both jurisdictions, especially by local residents, than in the United States, and the number of company service providers is much smaller. However, some of the concerns states and agents expressed about increased regulation have also been borne out in Jersey and the Isle of Man, although officials also pointed to certain benefits of collecting ownership information and the new regulatory regime. Company service providers in both jurisdictions must be licensed and are subject to periodic monitoring and inspections by government agencies. In both of these jurisdictions, company service providers are required to conduct due diligence to verify the identity of their clients and obtain company ownership information to form a new company. The ownership information is not maintained in the public record, but is kept at the registry in Jersey and with company service providers in Isle of Man and is available only to law enforcement. Despite strong initial resistance, the company service provider industry in these two jurisdictions is now perceived as successful because licensed companies have remained profitable. In addition, one company service provider told us that the regulations have instilled a degree of professionalism in the company service provider industry. Further, law enforcement officials can obtain information about company ownership when they need it. Jersey, which lies about 100 miles south of mainland Britain and 14 miles from the coast of France, has an area of 45 square miles. Isle of Man is located in the Irish Sea and has an area of 227 square miles. The two islands are self-governing crown dependencies that do not belong to the United Kingdom and are not members of the European Union. Each has its own parliament and laws. In response to international concerns in the mid-1990s about the role of companies formed in offshore jurisdictions such as these two islands in tax evasion schemes and other illicit activities, Jersey and Isle of Man began regulating company service providers in 2001 and 2000, respectively. 
The Financial Services Commission in Jersey and the Financial Supervision Commission in Isle of Man oversee the regulation of the company service provider industry. Officials from both jurisdictions noted that the regulations were implemented to improve the legitimacy and reputation of companies formed there. However, government and private sector officials told us that implementing these regulations was a significant challenge. Both jurisdictions experienced consolidation in the company service provider industry. Some companies merged, and others moved to locations with fewer requirements or went out of business because they either did not want to comply with the new regulations or could not charge fees high enough to cover due diligence costs. One company service provider said the time required to form a company increased, as the due diligence requirements company service providers must follow can take weeks to complete depending on the client, though once documents are submitted to the Jersey or Isle of Man registry offices, formations are finished in 48 hours or less. The workload of company service providers has also increased. One company service provider told us that the company had increased its staff by 25 percent to 30 percent because of the requirement that the company verify customer information. Fewer companies are formed in Isle of Man, according to an Isle of Man official. Before the regulations, Isle of Man had 40,000 incorporated entities, but it now has 35,000. Finally, because ownership is fluid, it is a challenge to keep the information up to date. In Isle of Man, the responsibility for keeping information up to date lies with the company service providers. In Jersey, ownership information is updated on annual reports. State officials, agents, and others told us that law enforcement officials could access some other sources of company ownership information, including internal company documents, financial institutions, and the IRS. 
Our review of state statutes found that all states require corporations to prepare a list of shareholders, typically before the mandatory annual shareholder meeting, and that almost all states require that this list be maintained at the corporation’s principal or registered office. Industry experts told us that LLCs also usually prepare and maintain operating agreements that generally name the members and outline their financial interests. These documents are generally not public record, but law enforcement officials can subpoena them to obtain ownership information, and ICE officials in one field office said they always looked at LLC operating agreements during an investigation. However, accessing these lists may be problematic, and the documents themselves might not be accurate or even exist. For example, law enforcement officials said that shell companies may not prepare these documents and that U.S. officials may not have access to them if the company is located in another country. In addition, law enforcement officials may not want to request these documents in order to avoid tipping off a company about an investigation. Industry experts also cautioned that even these internal documents may not reveal the true beneficial owners of a company. For example, the list could include nominee shareholders, which would reduce the usefulness of the shareholder list because the shareholder on record may not be the beneficial owner. In addition, shareholders could sell their stock and not register the sale with the company; in such cases, the new owners would not be known. Shareholders could also sell their stock before the filing date and then buy it back after the filing date to avoid being listed. Further, in states that allow bearer shares, the owners’ names are anonymous because bearer share certificates do not contain the names of the shareholders. 
Therefore, while law enforcement authorities could obtain lists of shareholders from companies by subpoena, further investigation might still be needed to find the true beneficial owners. Financial institutions may also have ownership information on some companies. Customer Identification Program (CIP) requirements, mandated by the USA PATRIOT Act of 2001, establish minimum standards for financial institutions to follow when verifying the identity of their customers in connection with the opening of an account. Under these standards, financial institutions must collect the name of the company, its physical address (for instance, its principal place of business), and an ID number, such as the tax identification number. The regulations also require that financial institutions develop risk-based procedures for verifying the identity of each customer to the extent that doing so is reasonable. For example, representatives from financial institutions told us that they typically requested a company’s articles of incorporation when a new account was opened to verify that the entity existed. One representative said that his institution also checked names against the OFAC list and requested photo identification from all signers on the account. Industry representatives noted that institutions may also compare the customer information with information obtained from a consumer reporting agency, public database, or other sources. Finally, based on a risk assessment, the institution may obtain information about individuals with authority or control over the account in order to verify their identities. Representatives of financial institutions told us that although they are not required to obtain ownership information in all cases, they may investigate high-risk applicants to uncover the ultimate beneficial owners. 
These applicants may include casinos, companies that are not listed on world stock exchanges, companies with complex structures, or companies from certain high-risk countries. For such applicants, financial institutions may ask about information such as beneficial owners and officers of the company. Financial industry representatives said that conducting the necessary due diligence on a company absorbs time and resources, because institutions must sometimes peel back layers of corporations or hire private investigators to find the actual beneficial owner or owners of a company. One financial institution we interviewed collects the name, date of birth, and tax identification number of all individuals with ownership and control of a corporation or LLC. However, officials from some institutions told us that obtaining such information on all applicants would be an added burden to an industry that is already subject to numerous regulations. Some industry officials also said that financial institutions may not want to request ownership information in all cases for fear of losing a customer. In addition, industry representatives noted that collecting ownership information at financial institutions might not always be useful or available, because ownership might change after the account was opened and not all companies opened bank or brokerage accounts. Furthermore, Department of Justice officials noted that, in some instances, the financial activity of a shell company under investigation does not involve U.S. financial institutions. Finally, correspondent accounts create opportunities to hide the identities of the account holders from the banks themselves. A foreign bank can open a correspondent account with a U.S. bank to avoid bearing the costs of licensing, staffing, and operating its own offices in the United States. Many of the largest international banks serve as correspondents for thousands of other banks. 
The USA PATRIOT Act requires financial institutions that provide correspondent accounts to foreign banks to maintain records of the foreign bank’s owners and of the name and address of an agent in the United States designated to accept service of process for the foreign bank for records regarding the correspondent account. However, law enforcement and industry representatives told us that the foreign banks may commingle funds from many different customers into one correspondent account, making it difficult for U.S. banks to identify the individuals with access to the account. IRS was mentioned as another potential source of company ownership information for law enforcement, but IRS officials pointed to several limitations with these data. First, IRS may not have information on all companies formed. The agency collects company ownership information on certain forms, such as the application for an employer identification number (EIN) (Form SS-4). Form SS-4 requires the name and tax identification number (such as the Social Security number) of the principal officer if the business is a corporation, or general partner if it is a partnership, or owner if it is an entity that is disregarded as separate from its owner (disregarded entity), such as a single-member LLC. Disregarded entities owned by a corporation enter the corporation’s name and EIN. However, not all LLCs are required to have EINs. In addition, the name of an owner may be on the form LLCs file to select how they will be taxed. IRS also currently collects some general ownership information, including an identifying number, name, and address, on certain LLCs on separate schedules that the company files with the IRS. For LLCs that are taxed as partnerships, this form specifies whether members are member-managers or another type of member of an LLC and reports the member’s share of the company profits, losses, and capital. But if an LLC has only one member, the individual reports income on an individual tax return. 
In addition, IRS classifies certain LLCs as corporations for tax purposes, and others may choose to be classified as corporations. Ownership information is available for LLCs that are classified as corporations and file as S corporations, but generally not for those that are taxed as C corporations. Second, IRS officials reported that the ownership information the agency collected may not be complete or up to date. As we have discussed, the agency does not have information on every company, because some companies do not request or need EINs. In addition, some EINs become inactive after a certain period, dropping off the IRS database. For example, Department of Justice officials told us that U.S. shell companies being used in foreign criminal activity are sometimes inactive in the United States. In addition, ownership information on LLCs owned by foreign individuals or entities would only be available if the LLC obtained an EIN for income that was subject to tax in the United States. Further, data gathered on IRS forms may not always be accurate. In a recent report, we found that data transcription errors made by IRS staff entering data into a database and invalid taxpayer identification numbers submitted by companies lowered the accuracy of these data. IRS officials also noted that the information collected might not always be useful in finding the ultimate beneficial owner of a company, because another entity could be listed as the owner, requiring further investigation to identify the true owner. Finally, IRS officials said that the information in the agency’s records might not be up to date because IRS was not always notified when ownership changed. Third, law enforcement officials could have difficulty accessing IRS taxpayer information. As part of the administration of federal tax laws, IRS investigators can use IRS data in their investigations of tax and related statutes, but access by other federal and state law enforcement is restricted by 26 U.S.C. § 6103. 
IRS officials said that federal law enforcement officials can access IRS information provided by taxpayers (or their representatives) when a federal court issues an ex parte order. Under 26 U.S.C. § 6103(i)(1), the federal law enforcement agency requesting the information through an ex parte order must show that it is engaged in preparation for a judicial, administrative or grand jury proceeding to enforce a federal criminal statute or that the investigation may result in such a proceeding. Information IRS receives from a source other than taxpayers (or their representatives), such as taxpayers’ employers or banks, can be obtained without a court order. Moreover, in certain limited situations, there are additional provisions currently in the tax code providing for disclosure of such information relating to criminal or terrorist activities or emergency circumstances. State law enforcement officials can access IRS information for enforcement of state tax laws when IRS has sharing agreements with state taxing authorities. Law enforcement officials can also obtain IRS information with the taxpayer’s consent. Officials in one ICE field office told us that they have obtained IRS information; however, officials in another ICE field office said that obtaining this information was difficult. IRS officials commented that collecting additional ownership and control information on IRS documents would provide IRS investigators with more detail when conducting investigations but that the agency’s ability to collect and verify such information would depend on the availability of resources. States and agents collect a variety of information when individuals form companies, but most state statutes do not require that they collect or verify information on ownership. Therefore, minimal information is collected on the owners of these companies. 
During our review, we encountered a variety of legitimate concerns about the merits of collecting ownership information on companies formed in the United States. Many of these concerns reflected conflicting interests. On the one hand, federal law enforcement agencies were concerned about the lack of information, because criminals can easily use U.S. shell companies to mask the identities of those engaged in illegal activities. From a law enforcement perspective, having more information would make using U.S. shell companies for illicit activities harder and give investigators more information to use in pursuing the actual owners. In addition, since U.S. shell companies are used in criminal activity abroad because of their perceived legitimacy, collecting more information when a company is formed could improve the integrity of the company formation process in the United States. On the other hand, states and agents were concerned about increased costs, potential revenue losses, and privacy protection. Collecting more information would require more time and resources and could reduce the number of start-ups. Approving applications could take longer, potentially creating obstacles for those forming companies for legitimate business purposes. And importantly, because information on companies is currently part of the public record, requiring certain information on ownership could be considered a threat to the current system, which values the protection of privacy and individuals’ personal assets. Any requirement that states, agents, or both collect more ownership information on certain types of companies would need to balance these conflicting concerns. Further, such a requirement would need to be uniformly applied in all U.S. jurisdictions. If it were not, those wanting to set up shell companies for illicit activities would simply move to the jurisdiction that presented the fewest obstacles, undermining the intent of the requirement. 
We provided a draft of this report to the Departments of Justice, Homeland Security, and the Treasury. Justice and Treasury provided technical comments that were incorporated into the report, where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Departments of Justice, Homeland Security, and the Treasury; and interested congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. The survey and a more complete tabulation of state-by-state and aggregated results can be viewed at http://www.gao.gov/cgi-bin/getrpt?GAO-06-377SP. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or jonesy@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report describes states’ company formation and reporting requirements and the information that is routinely obtained and made available to the public and law enforcement officials regarding ownership of nonpublicly traded corporations and limited liability companies (LLC) formed in each state given concerns about the potential for using companies for illicit purposes. Specifically, this report discusses 1. the kinds of information—including ownership information—that the 50 states and the District of Columbia collect during company formation and the states’ efforts to review and verify it; 2. the roles of third-party agents, such as company formation agents, and the kinds of information they collect on company ownership; 3. 
the role of shell companies in facilitating criminal activity, the availability of company ownership information to law enforcement, and the usefulness of such information in investigating shell companies; and 4. the potential effects of requiring states, agents, or both to collect company ownership information. To respond to the first objective and describe the ways company formation and periodic reporting documents can be filed, we conducted a Web-based survey of the 50 states and the District of Columbia on formation and reporting practices. We worked to develop the questionnaire with social science survey specialists. Because these were not sample surveys, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database can introduce unwanted variability into the survey results. We took steps in the development of the questionnaires, the data collection, and data analysis to minimize these nonsampling errors. For example, prior to administering the survey, we pretested the content and format of the questionnaires with state officials in Florida, Maine, Maryland, Virginia, and Washington, D.C., to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) respondents were able to provide the information we were seeking, and (4) the questions were unbiased. An official from the International Association of Commercial Administrators (IACA) also reviewed a draft of the survey. We made changes to the content and format of the final questionnaires based on pretest results. We sent the finalized survey to contacts responsible for company filings in secretary of state offices (or their equivalents) in all 50 states and the District of Columbia. 
See Survey of State Officials Responsible for Company Formation, GAO-06-377SP, for the final version of the survey and state-by-state results. We received survey responses from each of the 50 states and the District of Columbia. Because these were Web-based surveys in which respondents entered their responses directly into our database, the possibility of data entry error was minimized. We also performed computer analyses to identify inconsistencies in responses and other indications of error. We contacted survey respondents as needed to correct errors and verify responses. In addition, a second independent analyst verified that the computer programs used to analyze the data were written correctly. To test the reliability of the survey data, we compared state responses on our survey with data states provided to IACA in its 2005 annual report of jurisdictions for four key variables—the number of LLCs and corporations filed in 2004 and the total number on file. The data were largely the same, with very high correlations and no significant differences in mean values. Based on this testing, we believe our reporting of the trends based on the number of corporations and LLCs to be reliable. We also corroborated the survey results with information we collected from a systematic review of state Web sites and state statutes. Where we found a discrepancy on key variables, we contacted the relevant state official for clarification of the state’s requirement. Our review of the state corporation statutes included analysis of provisions regarding company formation, registered agents, shareholder identification, requirements for record keeping, and periodic reporting. In addition, we reviewed provisions in state LLC statutes relating to company formation, periodic reporting, and registered agents. We also reviewed the content of company formation forms and other information available on state Web sites. 
The data collected from our review of state statutes and Web sites is as of October 2005. We also visited Arizona, Delaware, Florida, Nevada, and Oregon to conduct in-depth interviews with state officials about practices in these states. We selected these states because of the number of companies formed there or unique practices we identified from the statutes, forms, or survey responses. To respond to the second objective and describe the roles of third-party agents, we interviewed academics with expertise in corporate and LLC law, selected professional agents, and state officials. In selecting agents to interview, we interviewed only companies that act as agents for service of process for more than one client. We chose a range of large national companies (three) as well as midsize or small companies (nine). We interviewed selected agents about the information they collect on companies and analyzed survey results on states’ requirements regarding oversight of these agents. We also interviewed officials from the National Public Records Research Association, an association that represents companies providing corporate services and public records research, and the Nevada Resident Agent Association, which represents a number of resident agents in Nevada. In addition, we reviewed state statutes for requirements regarding becoming an agent for service of process. To respond to the third objective and determine what information states and agents make available to law enforcement and the public, we reviewed company formation and periodic reporting forms on state Web sites and reviewed state Web sites for the type of information made available online and other methods individuals may use to obtain information. In addition, we interviewed selected state officials and agents about the methods they use to provide information. 
We also interviewed selected state and federal law enforcement officials about their experiences in obtaining company information from states to aid their investigations, including officials from the following state and federal agencies: the Arizona Attorney General, the Drug Enforcement Administration, the Federal Bureau of Investigation, the Florida Attorney General, Immigration and Customs Enforcement, Internal Revenue Service/Criminal Investigations, the Financial Crimes Enforcement Network, the U.S. Attorney’s Office, and the Office of Foreign Assets Control. To respond to the fourth objective and determine the implications of requiring states or agents to collect company ownership information, we analyzed survey results and interviewed selected state officials and a range of professional agents. To determine how other jurisdictions have implemented regimes requiring collection of ownership information, we interviewed officials from Jersey and Isle of Man, which require the collection of this information, about the implications of implementing these requirements. Jersey and Isle of Man are two of a small number of jurisdictions that require disclosure of beneficial ownership information when a company is formed. We also reviewed an Organization for Economic Co-operation and Development report describing requirements in one of the jurisdictions. To determine other potential sources of company information, we asked academics, agents, state officials, law enforcement officials, and representatives of professional associations their perspectives on where this information could be obtained. We also reviewed state statutes on requirements for company record keeping. In addition, we interviewed representatives of selected financial institutions and the IRS about the company information they typically collect. We conducted our work from May 2005 through March 2006 in Arizona, Delaware, Florida, Maryland, Nevada, New York, Oregon, Virginia, and Washington, D.C. 
We performed our work in accordance with generally accepted government auditing standards. Company formation and reporting documents can be submitted in person or by mail, and many states also accept filings by fax. Review and approval times can depend on how documents are submitted. For example, a District of Columbia official told us that a formation document submitted in person could be approved in 15 minutes, but a document that was mailed might not be approved for 10 to 15 days. Most states reported that documents submitted in person or by mail were approved within 1 to 5 business days, although a few reported that the process took more than 10 days. Officials in Arizona, for example, told us that it typically took the office 60 days to approve formation documents because of the volume of filings the office received. In 36 states, company formation documents, reporting documents, or both can be submitted through electronic filing (fig. 7 shows the states that provide a Web site for filing formation documents or periodic reports). In addition, some officials indicated that they would like to offer or were planning to offer electronic filing in the future. Of the 36 states that allow electronic filing, at least 23 reported a moderate or greater benefit in each of the following areas as a result of electronic filing: reduced staff time for recording and processing filings; less need to store paper records; electronic transfer of filing fees; and built-in edit and data reliability checks. State officials also commented that their error or rejection rates had fallen and that they had been able to improve their customer service with electronic filing. States said that there were some or moderate costs associated with electronic filing, such as increased expenses for technology (hardware and software) and staff training. Overall, according to our survey, 28 of the 36 states that offer electronic filing reported that the benefits exceeded the costs. 
As shown in table 3, in many cases states charge the same or nearly the same fee for forming a corporation or an LLC. In others, such as Illinois, the fee is substantially different for the two business forms. We found that in two states, Nebraska and New Mexico, the fee for forming a corporation may fall into a range. In these cases, the actual fee charged depends on the number of shares the new corporation will have. As stated earlier, the median company formation fee is $95, and fees for filing periodic reports range from $5 to $500. Thirty states reported offering expedited service for an additional fee. Of those, most responded that with expedited service, filings were approved either the same day or the day after an application was filed. Two states reported having several expedited service options. Nevada offers 24-hour expedited service for an additional $125 above the normal filing fees, 2-hour service for an extra $500, and 1-hour, or “while you wait,” service for an extra $1,000. Delaware offers same-day service for $100, next-day service for $50, 2-hour service for $500, and 1-hour service for $1,000. This appendix includes a table of the information states require in their company formation documents for corporations and LLCs. As shown in figure 8, states collect different information on their company formation documents. Most states require the company name, agent name and address, and the name and signature of the incorporator or organizer, and for corporations, information about the number and types of shares the corporation will issue. The requirements for the company’s purpose, principal address, and names and addresses of owners and management are not as consistent across the states. Figures 9 and 10 are examples of company formation documents from two states that have different information requirements. In addition to the contact named above, Kay Kuhlman (Assistant Director), LaKeshia Allen, Todd M. 
Anderson, Carolyn Boyce, Emily Chalmers, William R. Chatlos, Jennifer DuBord, Marc Molino, Jill M. Naamane, and Linda Rego made key contributions to this report. A person or entity authorized to accept service of process or other important tax and legal documents on behalf of a business. Agents for service of process may be known as registered agents, resident agents, statutory agents, or clerks in different states. A corporate formation document setting forth basic terms governing the corporation’s existence. The articles are filed in most states with the secretary of state during the formation process. This document is called a “certificate of incorporation” for corporations formed in Connecticut, Delaware, New Jersey, New York and Oklahoma; “articles of organization” for corporations formed in Massachusetts; and a “charter” for corporations formed in Tennessee. A governing document legally creating a nonstock organization, similar to “articles of incorporation” described above for incorporated entities. This document is called a “certificate of formation” for limited liability companies formed in Mississippi, New Hampshire, New Jersey, and Washington, and a “certificate of organization” for limited liability companies formed in Pennsylvania. An unregistered security payable to the holder. For instance, a bearer stock certificate is owned by the person legally holding (in possession of) the certificate even when no one else knows who holds the certificate. Bearer shares may be bought, sold, or exchanged in complete privacy. Shareholders with the power to buy or sell their shares in the company, but who are not registered or reflected in the company’s records as the owners. A beneficial owner is the natural person who ultimately owns or exercises effective control over a legal entity, transaction, or arrangement. A certificate issued by a state official as conclusive evidence that a corporation is in existence or authorized to transact business in that state. 
The certificate generally sets forth the corporation’s name, and that it is duly incorporated under the law of that state or authorized to transact business in that state; that all fees, taxes and penalties owed to that state have been paid; and that the corporation’s most recent annual report has been filed, and articles of dissolution have not been filed. Also may be known as a certificate of good standing or certificate of authorization. A person or business that acts as an agent for others by filing documents with officials of the selected jurisdiction for the formation of legal business entities. Such agents may also act, or arrange for another person to act, as a director or secretary of a company, a partner of a partnership, or a nominee shareholder for another person. Other business services may also be provided, such as providing a registered office, or a business, correspondence, or administrative address for a company. The legal doctrine of separating the acts of a corporation from the acts of its shareholders, which prevents the shareholders from being held personally liable for the acts of the corporation. An equitable doctrine where the separate existence of a corporation is disregarded by the law and the shareholders are held responsible for the acts and obligations of the corporation. This doctrine has also been used in certain circumstances to impose liability on corporate officers and directors. Piercing the corporate veil is justified only in extraordinary circumstances where a court finds that a unity of interest and ownership between an individual and a corporation exists to such an extent that recognizing a separate existence between the two would result in an injustice. In such cases, a court may disregard the corporate entity and impose personal liability on the individual. 
An artificial being (usually a business entity) created by law that provides authority for the entity to act as a separate and distinct legal person apart from its owners and provides other legal rights, such as the right to exist indefinitely and to issue stock. A small business corporation that elects to be taxed as an S corporation under the federal tax code. The taxable income of an S corporation is passed through to the shareholders and taxed at the shareholder level. A corporation that is not an S corporation. A person elected or appointed to serve as a member of the board of directors for a corporation, which generally manages the corporation and its officers. A member of a corporation’s board of directors who is a mere figurehead and who has no true control over the corporation. Typically, a nominee director may have no knowledge of the business affairs or accounts, may not exercise independent control of or influence over the business, and may not act unless instructed to act by the beneficial owner. Liability restricted by law or contract, such as the liability of the owners of a business entity for only the capital invested in the business. A company whose owners (members) have limited liability (see “limited liability”) and that is managed either by managers or its members. An LLC consists of one or more members (see “member”). A limited liability company that designates in its articles of organization that it is a manager-managed company. In this type of LLC, each member is not generally an agent of the LLC solely because of being a member of the LLC. Rather, each manager is such an agent. An owner of an LLC interest; similar to a shareholder in a corporation. A limited liability company that does not designate in its articles of organization that it is a manager-managed company. In this type of LLC, each member is an agent of the LLC and may generally act on behalf of the LLC for the purpose of the LLC’s business. 
An individual or entity designated to act on behalf of another, such as a nominee director acting on behalf of a beneficial owner (see “beneficial owner”). Most often in offshore tax avoidance schemes, the nominee may pretend to be the owner of an entity, asset, or transaction to provide a veil of secrecy as to the beneficial owner’s involvement. A person elected or appointed by a corporation’s board of directors to manage and oversee the day-to-day operations of the organization, such as a chief executive officer, chief financial officer, chief administrative officer, and secretary. An association of two or more persons jointly owning and conducting a business together where the individuals agree to share the profits and losses of the business. A partnership consisting of one or more limited partners who contribute capital to and share in the profits of the partnership, but whose liability for partnership debts is limited to the amount of their contribution and one or more general partners who control the business and are personally liable for the debts of the partnership. A partnership where a partner is not liable for the negligent acts committed by other partners or by employees not under the partner’s supervision. Certain businesses (typically law firms or accounting firms) are allowed to register under state statutes as this type of partnership. A partnership where general and limited partners are not liable for the partnership’s debts and obligations because of their status as a partner. The delivery of legal process or other legal notice, such as a writ, citation, summons, or a complaint or other pleading filed in a civil court matter. A business where one person owns all of the business assets, operates the business, and is responsible for all of the liabilities of the business in a personal capacity.
Companies form the basis of most commercial and entrepreneurial activities in market-based economies; however, "shell" companies, which have no operations, can be used for illicit purposes such as laundering money. Some states have been criticized for requiring minimal ownership information to form a U.S. company, raising concerns about the ease with which companies may be used for illicit purposes. In this report, GAO describes (1) the kinds of information each of the 50 states and the District of Columbia and third-party agents collect on companies, (2) law enforcement concerns about the use of companies to hide illicit activity and how company information from states and agents helps or hinders investigations, and (3) implications of requiring states or agents to collect company ownership information. Most states do not require ownership information at the time a company is formed, and while most states require corporations and limited liability companies (LLCs) to file annual or biennial reports, few states require ownership information on these reports. With respect to the formation of LLCs, four states require some information on members, who are owners of the LLC. Some states require companies to list the names and addresses of directors, officers, or managers on filings, but these persons may not own the company. Nearly all states screen company filings for statutorily required information, but none verify the identities of company officials. Third-party agents may submit formation documents to the state on a company's behalf, usually collecting only billing and statutorily required information for formations. These agents generally do not collect any information on owners of the companies they represent, and instances where agents told us they verified some information were rare. Federal law enforcement officials are concerned that criminals are increasingly using U.S. shell companies to conceal their identity and illicit activities. 
Though the magnitude of the problem is difficult to measure, officials said U.S. shell companies are appearing in more investigations in the United States and other countries. Officials told us that the information states collect has been helpful in some cases because names on the documents, such as names of directors, generated additional leads. However, some officials said that the information was limited and that cases had been closed because the owners could not be identified. State officials and agents said that collecting company ownership information could be problematic. Some state officials and agents noted that collecting such information could increase the cost of company filings and the time needed to approve them. Some officials said that if they had additional requirements, companies would go to other states or jurisdictions. Finally, officials and agents expressed concerns about compromising individuals' privacy because owner information disclosed on company filings would be part of the public record, which has not historically been the case for private companies.
PPACA directed each state to establish a state-based health insurance marketplace for individuals to enroll in private health insurance plans, apply for income-based financial assistance, and, as applicable, obtain a determination of their eligibility for other health coverage programs, such as Medicaid or the State Children’s Health Insurance Program (CHIP). For states that did not establish a marketplace, PPACA required the federal government to establish and operate a marketplace for that state, referred to as the federally facilitated marketplace. For plan year 2014, 17 states elected to establish their own marketplace, and CMS operated a federally facilitated marketplace or partnership marketplace for 34 states. The act required the marketplaces to be operational on or before January 1, 2014, and Healthcare.gov began facilitating enrollments on October 1, 2013, at the beginning of the first annual open enrollment period established by CMS. The initial open enrollment period ended on April 15, 2014. Requirements for ensuring the security and privacy of individuals’ personally identifiable information (PII), such as that collected and processed by Healthcare.gov and related systems, have been established by a number of federal laws and guidance. These include the following: The Federal Information Security Management Act of 2002 (FISMA), which requires each federal agency to develop, document, and implement an agency-wide information security program. National Institute of Standards and Technology (NIST) guidance and standards, which are to be used by agencies to, among other things, categorize their information systems and establish minimum security requirements. The Privacy Act of 1974, which places limitations on agencies’ collection, access, use, and disclosure of personal information maintained in systems of records. 
The Computer Matching Act, which is a set of amendments to the Privacy Act requiring agencies to follow specific procedures before engaging in computerized comparisons of records for establishing or verifying eligibility or recouping payments for federal benefit programs. The E-Government Act of 2002, which requires agencies to analyze how personal information is collected, stored, shared, and managed before developing or procuring information technology that collects, maintains, or disseminates information in an identifiable form. The Health Insurance Portability and Accountability Act of 1996, which requires the adoption of standards for the electronic exchange, privacy, and security of health information. The Internal Revenue Code, which provides for the confidentiality of tax returns and return information. IRS Publication 1075, which establishes security guidelines for safeguarding federal tax return information used by federal, state, and local agencies. Under FISMA, the Secretary of HHS has overall responsibility for the department’s agency-wide information security program; this responsibility has been delegated to the department’s Chief Information Officer (CIO). The HHS CIO is also responsible for the department’s response to information security incidents and the development of privacy impact assessments for the department’s systems. The CMS Center for Consumer Information and Insurance Oversight has overall responsibilities for federal systems supporting the federally facilitated marketplace and for overseeing state marketplaces. Further, security and privacy responsibilities for Healthcare.gov and supporting systems are shared among several offices and individuals within CMS, including the CIO, the Chief Information Security Officer, component-level information systems security officers, the CMS Senior Official for Privacy, and the CMS Office of e-Health Standards Privacy Policy and Compliance. 
In particular, the CMS CIO is responsible for implementing and administering the CMS information security program, which covers the systems developed by CMS to satisfy PPACA requirements. The Chief Information Security Officer is responsible for, among other things, ensuring the assessment and authorization of all systems and the completion of periodic risk assessments, including annual security testing and security self-assessments. The process of enrolling for insurance through Healthcare.gov is facilitated by a number of major systems managed by CMS. Figure 1 shows the major entities that exchange data in support of marketplace enrollment in qualified health plans and how they are connected. The major systems that facilitate enrollment include the following: The Healthcare.gov website: This serves as the user interface for individuals to obtain coverage through a federally facilitated marketplace. It has two major functions: (1) providing information about PPACA health insurance reforms and health insurance options and (2) facilitating enrollment in coverage. Enterprise Identity Management System: This system allows CMS to verify the identity of an individual applying for coverage and establish a login account for that user. Once an account is created using a name and e-mail address, the person’s identity is confirmed using additional information, which can include a Social Security number, address, phone number, and date of birth. Federally Facilitated Marketplace System (FFM): This system consists of three major modules to facilitate (1) eligibility and enrollment, (2) plan management, and (3) financial management. For eligibility, an applicant’s information is collected to determine whether they are eligible for insurance coverage and financial assistance. Once eligibility is determined, the system allows the applicant to view, compare, select, and enroll in a qualified health plan. 
The plan management module is to provide state agencies and issuers of qualified health plans with the ability to submit, certify, monitor, and renew qualifying health plans. The financial management module is to facilitate payments to health insurers, among other things. From a technical perspective, the FFM system relies on “cloud-based” data processing and storage services from private-sector vendors. Federal Data Services Hub: This system acts as a single portal for exchanging information between the FFM system and other systems or external partners, which include other federal agencies, state-based marketplaces, other state agencies, other CMS systems, and issuers of qualified health plans. The data hub supports, among other things, real-time eligibility queries, transfer of applicant and taxpayer information, exchange of enrollment information with plan issuers, monitoring of enrollment information, and submission of health plan applications. Healthcare.gov-related activities are also supported by other CMS systems, including a data warehouse system to provide reporting and performance metrics; the Health Insurance Oversight System, which provides an interface for issuers of qualified health plans to submit information about qualifying health plans; and a general accounting system that handles payments associated with advance premium tax credits and cost-sharing reductions. In addition, CMS relies on a variety of federal, state, and private-sector entities to support Healthcare.gov-related activities, and these entities exchange information with CMS’s systems: Federal agencies such as the Social Security Administration (SSA), Department of Homeland Security (DHS), and Internal Revenue Service (IRS), along with Equifax, Inc. (a private-sector credit agency under contract with CMS) provide or verify information used in making determinations of a person’s eligibility for coverage and financial assistance. 
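The data hub's "single portal" role described above can be sketched, in purely illustrative terms, as a dispatcher that routes each eligibility query to the one external partner able to answer it. Every name below (the query types, the partner mapping, and the stub verifiers) is an assumption for illustration; the hub's actual interfaces are not public.

```python
# Hypothetical sketch of the single-portal routing pattern described in
# the text: a hub receives an eligibility query, dispatches it to the
# appropriate external partner, and returns the partner's answer.
# All names here are illustrative assumptions, not real hub interfaces.

PARTNER_FOR_QUERY = {
    "citizenship": "SSA",
    "immigration_status": "DHS",
    "income": "IRS",
}

def route_eligibility_query(query_type, applicant_record, partners):
    """Dispatch an eligibility query to the partner registered for it."""
    partner = PARTNER_FOR_QUERY.get(query_type)
    if partner is None:
        raise ValueError(f"no partner registered for {query_type!r}")
    # 'partners' maps an agency name to a callable verification stub.
    return partners[partner](applicant_record)

# Usage with stub verifiers standing in for real agency systems:
partners = {
    "SSA": lambda rec: rec.get("ssn") is not None,
    "DHS": lambda rec: True,
    "IRS": lambda rec: rec.get("income", 0) >= 0,
}
print(route_eligibility_query("citizenship", {"ssn": "000-00-0000"}, partners))
```

The design point is that marketplace systems never talk to partner agencies directly; adding or changing a partner means updating only the hub's routing table.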
The Department of Defense (DOD), Office of Personnel Management (OPM), Peace Corps, and Department of Veterans Affairs (VA) assist in determining whether a potential applicant has alternate means for obtaining minimum essential coverage. State-based marketplaces may rely on the FFM system for certain functions, and state Medicaid and CHIP agencies may connect to the FFM to exchange enrollment data, which are typically routed through CMS’s data hub. In addition to accessing the plan management and financial management modules of the FFM, issuers of qualified health plans receive information from the system when an individual completes the application process. Agents and brokers may access the Healthcare.gov website on behalf of applicants. To facilitate offline, paper-based applications, CMS contracted with a private-sector company for intake, routing, review, and troubleshooting of paper applications for enrollment into health plans and insurance affordability programs. While CMS has security and privacy-related protections in place for Healthcare.gov and related systems, weaknesses exist that put the personal information these systems collect, process, and maintain at risk of inappropriate modification, loss, or disclosure. The agency needs to take a number of actions to address these deficiencies in order to better protect individuals’ personally identifiable information. CMS established security-related policies and procedures for Healthcare.gov. 
Specifically, it assigned overall responsibility for securing the agency’s information and systems to appropriate officials, including the agency CIO and Chief Information Security Officer, and designated information system security officers to assist in certifying particular CMS systems; documented information security policies and procedures to safeguard the agency’s information and systems; developed a process for planning, implementing, evaluating, and documenting remedial actions to address identified information security deficiencies; and established interconnection security agreements with the federal agencies with which it exchanges information, including DOD, DHS, IRS, SSA, and VA; these agreements identify the requirements for the connection, the roles and responsibilities of each party, the security controls protecting the connection, the sensitivity of the data to be exchanged, and the required training and background checks for personnel with access to the connection. In addition, CMS took steps to protect the privacy of applicants’ information. For example, it published and updated a system-of-records notice for Healthcare.gov that addressed required information such as the types of information that will be maintained in the system and the external entities that may receive such information without affected individuals’ explicit consent; developed basic privacy training for all staff and role-based training for staff who have access to PII while executing their routine duties; and established an incident-handling and breach response plan and an incident response team to manage responses to privacy incidents, identify trends, and make recommendations to HHS to reduce risks to PII. However, when Healthcare.gov was deployed in October 2013, CMS accepted increased security risks because of the following: CMS allowed four states to connect to the data hub even though they had not completed all CMS security requirements. 
These states were given a 60-day interim authorization to connect, because CMS officials regarded this as a mission-critical need. Subsequently, all four states addressed the weaknesses in their security assessments and were granted 3-year authorizations. CMS authorized the FFM system to operate even though not all of the security controls had been tested for a fully integrated version of the system. This authority to operate was granted for 6 months, on the condition that a full security assessment was conducted within 60 to 90 days of October 1, 2013. In December 2013, an assessment of the eligibility and enrollment module was conducted. However, the plan management and financial management modules, which had not yet been fully developed, were not tested. Although CMS developed and documented security policies and procedures, it did not fully implement required actions before Healthcare.gov began collecting and maintaining PII from individual applicants: System security plans were not complete. While system security plans for the FFM and data hub incorporated most of the elements specified by NIST, each was missing one or more relevant elements or left them incomplete. For example, the FFM security plan did not define the system’s accreditation boundary or explain why five of the security controls called for by NIST guidance were determined not to be applicable. Without complete system security plans, agency officials will be hindered in making fully informed judgments about the risks involved in operating those systems. Interconnection agreements were not all complete. CMS had not completed security documentation governing its interconnection with Equifax, Inc., but instead was relying on a draft data use agreement that had not been fully approved within CMS. This makes it more difficult for agency officials to ensure that adequate security controls are in place to protect the connection. Privacy risks were not assessed. 
In completing privacy impact assessments for the FFM and data hub, CMS did not assess risks associated with the handling of PII or identify mitigating controls to address such risks. Without such an analysis, CMS cannot demonstrate that it thoroughly considered and addressed options for mitigating privacy risks associated with these systems. Interagency agreements governing data exchanges were not complete. CMS established computer matching agreements with DHS, DOD, IRS, SSA, and VA for its data exchanges to verify eligibility for healthcare coverage and premium tax credits; however, it had not established such agreements with OPM or the Peace Corps. This increases the risk that appropriate protections will not be applied to the PII being exchanged with these agencies. Security testing was not complete. While CMS has undertaken, through its contractors and at the agency and state levels, a series of security-related testing activities for various Healthcare.gov-related systems, these assessments did not effectively identify and test all relevant security controls prior to deploying the systems. For example, the assessments of the FFM did not include all the security controls specified by NIST and CMS, such as incident response controls and controls specified for physical and environmental protection. In addition, CMS could not demonstrate that it had tested all the security controls specified in the FFM’s October 2013 security plan, and it did not test all the system’s components before deployment or test them on the integrated system. Testing of all deployed eligibility and enrollment modules and plan management modules did not occur until March 2014, and as of June 2014 FFM testing remained incomplete. Without comprehensive testing, CMS lacks assurance that security controls for the FFM system are working as intended. Alternate processing site was not fully established. 
CMS developed and documented contingency plans for the FFM and data hub that identified activities, resources, responsibilities, and procedures needed to carry out operations during prolonged disruptions of the systems. It also established system recovery priorities, a line of succession based on the type of disaster, and specific procedures on how to restore both systems and their associated applications in the event of a disaster. However, although the contingency plans designated a site at which to recover the systems, this site had not been established. Specifically, according to CMS, data supporting the FFM were being backed up at the recovery site, but backup systems are not otherwise supported there, limiting the facility’s ability to support disaster recovery efforts. CMS did not effectively implement or securely configure key security controls on the systems supporting Healthcare.gov. For example: Strong passwords (i.e., passwords of sufficient length or complexity) were not always required or enforced on systems supporting the FFM. This increases the likelihood that an attacker could gain access to the system. Certain systems supporting the FFM were not restricted from accessing the Internet, increasing the risk that unauthorized users could access data from the FFM network. CMS did not consistently apply security patches to FFM systems in a timely manner, and several critical systems had not been patched or were no longer supported by their vendors. This increased the risk that servers supporting the FFM could be compromised through exploitation of known vulnerabilities. One of CMS’s contractors had not properly secured its administrative network, which could allow for unauthorized access to the FFM network. In addition to these weaknesses, we also identified weaknesses in security controls related to boundary protection, identification and authentication, authorization, and configuration management. 
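To make the first finding above concrete, a rule requiring passwords of "sufficient length or complexity" might look like the sketch below. The threshold and character classes are assumptions loosely in the spirit of common federal guidance, not CMS's actual policy.

```python
# Illustrative sketch of a password-strength rule of the kind the report
# says was not consistently enforced. The minimum length and required
# character classes are assumptions, not CMS's actual requirements.
import re

def is_strong_password(password, min_length=12):
    """Require a minimum length plus a mix of four character classes."""
    if len(password) < min_length:
        return False
    required_classes = [
        r"[a-z]",          # lowercase letter
        r"[A-Z]",          # uppercase letter
        r"[0-9]",          # digit
        r"[^a-zA-Z0-9]",   # symbol
    ]
    return all(re.search(pattern, password) for pattern in required_classes)

print(is_strong_password("Tr0ub4dor&3x!"))  # True: 13 chars, all classes
print(is_strong_password("password"))       # False: too short, one class
```

A check like this would typically be enforced centrally (at account creation and password change) rather than left to individual subsystems, which is the gap the finding describes.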
Collectively, these weaknesses put Healthcare.gov systems and the information they contain at increased and unnecessary risk of unauthorized access, use, disclosure, modification, and loss. The security weaknesses we identified occurred in part because CMS did not ensure that the multiple parties contributing to the development of the FFM system had a shared understanding of how security controls were to be implemented. Specifically, CMS and contractor staff did not always agree on how security controls for the FFM were to be implemented or who was responsible for ensuring they were functioning properly. For example, although CMS identified one subcontractor as responsible for managing firewall rules, this responsibility was not included in the subcontractor’s statement of work, and staff for the subcontractor said that this was the responsibility of a different contractor. Without ensuring agreement on security roles and responsibilities, CMS has less assurance that controls will function as intended, increasing the risk that attackers could compromise the system and the data it contains. In our September 2014 report, we made the following six recommendations aimed at improving the management of the security of Healthcare.gov: 1. Ensure that system security plans for the FFM and data hub contain all information recommended by NIST. 2. Ensure that all privacy risks associated with Healthcare.gov are analyzed and documented in privacy impact assessments. 3. Develop computer matching agreements with OPM and the Peace Corps to govern data that are being compared with CMS data to verify eligibility for advance premium tax credits and cost-sharing reductions. 4. Perform a comprehensive security assessment of the FFM, including the infrastructure, platform, and all deployed software elements. 5. Ensure that the planned alternate processing site for the systems supporting Healthcare.gov is established and made operational in a timely fashion. 6. 
Establish detailed security roles and responsibilities for contractors, including participation in security control reviews, to better ensure effective communication among individuals and entities with responsibility for the security of the FFM and its supporting infrastructure. In an associated report with limited distribution, we also made 22 recommendations to resolve technical security weaknesses related to access controls, configuration management, and contingency planning. Implementing these recommendations will enable HHS and CMS to better ensure that Healthcare.gov systems and the information they collect and process are effectively protected from threats to their confidentiality, integrity, and availability. In its comments on our draft reports, HHS concurred with 3 of the 6 recommendations to fully implement its information security program, partially concurred with the remaining 3 recommendations, and concurred with all 22 of the recommendations to resolve technical weaknesses in security controls, describing actions it had under way or planned related to each of them. In conclusion, Healthcare.gov and its related systems represent a complex system of systems that interconnects a broad range of federal agency systems, state agencies and systems, and other entities, such as contractors and issuers of health plans. Ensuring the security of such a system poses a significant challenge. While CMS has taken important steps to apply security and privacy safeguards to Healthcare.gov and its supporting systems, significant weaknesses remain that put these systems and the sensitive, personal information they contain at risk of compromise. 
Given the complexity of the systems and the many interconnections among external partners, it is particularly important to analyze privacy risks, effectively implement technical security controls, comprehensively test the security controls over the system, and ensure that an alternate processing site for the systems is fully established. Chairman Issa, Ranking Member Cummings, and Members of the Committee, this concludes my statement. I would be pleased to answer any questions you have. If you have any questions about this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at wilshuseng@gao.gov and barkakatin@gao.gov. Other key contributors to this testimony include John de Ferrari, Lon Chin, West Coile, and Duc Ngo (assistant directors); Mark Canter; Marisol Cruz; Sandra George; Nancy Glover; Torrey Hardee; Tammi Kalugdan; Lee McCracken; Monica Perez-Nelson; Justin Palk; and Michael Stevens. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
PPACA requires the establishment of health insurance marketplaces in each state to assist individuals in comparing, selecting, and enrolling in health plans offered by participating issuers. CMS is responsible for overseeing these marketplaces, including establishing a federally facilitated marketplace in states that do not establish their own. These marketplaces are supported by an array of IT systems, including Healthcare.gov, the website that serves as the consumer portal to the marketplace. This statement is based on two September 2014 reports examining the security and privacy of the Healthcare.gov website and related systems. The specific objectives of this work were to (1) describe the planned exchanges of information between the Healthcare.gov website and other organizations and (2) assess the effectiveness of programs and controls implemented by CMS to protect the security and privacy of the information and IT systems supporting Healthcare.gov. Enrollment through Healthcare.gov is supported by the exchange of information among many systems and entities. The Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS) has overall responsibility for key information technology (IT) systems supporting Healthcare.gov. These include, among others, the Federally Facilitated Marketplace (FFM) system, which facilitates eligibility and enrollment, plan management, and financial management, and the Federal Data Services Hub, which acts as the single portal for exchanging information between the FFM and other systems or external partners. CMS relies on a variety of federal, state, and private-sector entities to support Healthcare.gov activities. 
For example, it exchanges information with the Department of Defense, Department of Homeland Security, Department of Veterans Affairs, Internal Revenue Service, Office of Personnel Management, Peace Corps, and the Social Security Administration to help determine applicants' eligibility for healthcare coverage and/or financial assistance. Healthcare.gov-related systems are also accessed and used by CMS contractors, issuers of qualified health plans, state agencies, and others. While CMS has security and privacy-related protections in place for Healthcare.gov and related systems, weaknesses exist that put these systems and the sensitive personal information they contain at risk. Specifically, CMS established security-related policies and procedures for Healthcare.gov, including interconnection security agreements with the federal agencies with which it exchanges information. It also instituted certain required privacy protections, such as notifying the public of the types of information that will be maintained in the system. However, weaknesses remained in the security and privacy protections applied to Healthcare.gov and its supporting systems. For example, CMS did not ensure system security plans contained all required information, which makes it harder for officials to assess the risks involved in operating those systems; analyze privacy risks associated with Healthcare.gov systems or identify mitigating controls; or fully establish an alternate processing site for Healthcare.gov systems to ensure that they could be recovered in the event of a disruption or disaster. In addition, a number of weaknesses in specific technical security controls jeopardized Healthcare.gov-related systems. These included certain systems supporting the FFM not being restricted from accessing the Internet and inconsistent implementation of security patches, among others.
An underlying reason for many of these weaknesses is that CMS did not establish a shared understanding of security roles and responsibilities with all parties involved in securing Healthcare.gov systems. Until these weaknesses are addressed, the systems and the information they contain remain at increased risk of unauthorized use, disclosure, modification, or loss. In its September 2014 reports, GAO made 6 recommendations to HHS to implement security and privacy controls to enhance the protection of systems and information related to Healthcare.gov. In addition, GAO made 22 recommendations to resolve technical weaknesses in security controls. HHS agreed with 3 of the 6 recommendations, partially agreed with 3, agreed with all 22 technical recommendations, and described plans to implement them.
BSE and vCJD are among a group of diseases known as transmissible spongiform encephalopathies (TSE). Currently, there are no therapies or vaccines to treat TSEs, and a definitive diagnosis can only be made from a post mortem examination of the brain. The infective agent that gives rise to TSEs is generally thought to be a malformed protein, called a prion, which causes normal molecules of the same protein in the brain to become malformed. Prions cannot be inactivated by conventional heat, irradiation, or chemical disinfection and sterilization procedures. The precise amount of material needed to cause disease is unknown but is generally thought to be very small. TSE prions accumulate in central nervous system tissue—specifically the brain, spinal cord, and eye—but are also present in other body tissues of infected humans and animals. Other TSEs include Creutzfeldt-Jakob disease (in humans), scrapie (in sheep), transmissible mink encephalopathy, and chronic wasting disease (in elk and deer). The original source of BSE is not known with certainty. However, evidence suggests that the practice of recycling the remains of diseased animals, specifically scrapie-infected sheep, into feed for livestock, including cattle, was responsible for the emergence and spread of BSE in the United Kingdom. In 1988, the United Kingdom banned the practice of feeding ruminant-derived protein to ruminants. Following this ban, the number of new cases of BSE-infected cattle declined from a high in 1992 of 32,280 new cases to a total of 1,312 cases in 2000, and to 526 cases as of September 30, 2001. About 2,500 cases of BSE have appeared elsewhere in 18 other European countries, as well as Oman, Canada, the Falkland Islands, and Japan, as a result of the exportation of contaminated feed and cattle (see fig. 1). The one BSE-infected cow found in Canada had been imported and was destroyed without entering the animal or human food chains.
The BSE-infected cattle found in Oman (two animals) and the Falkland Islands (one animal) had also been imported. In 1996, experts in the United Kingdom reported the first cases of vCJD. They believed the victims contracted it by eating beef contaminated by central nervous system tissue from BSE-infected cattle. Although contamination of meat with central nervous system tissue could occur in many ways during the slaughtering and processing of cattle, the major suspect in these cases was meat removed by a system that mechanically recovered (by squeezing under pressure) the remaining meat left on carcasses after all accessible meat had been removed by knife. Prior to December 1995, when the United Kingdom banned the practice, mechanically recovered meat, which was included in many cooked meat products such as sausages, could legally have contained spinal cords. While scientists believe that at least several hundred thousand people may have eaten BSE-infective tissue, many believe vCJD is difficult to contract. As of November 2001, 112 people had contracted vCJD, of whom just over 100 had died, nearly all in the United Kingdom. Most vCJD victims have been young—the average age at death was 28—and half died within 13 months from the time they first showed symptoms. As figure 2 shows, cattle provide meat and a wide array of consumer products. Many of these products may pose at least a theoretical risk for BSE infection. For example, dietary supplements, vaccines, cosmetics, and surgical replacement tissue, as well as gelatin, are produced from bovine carcasses, central nervous system tissue, and blood. The rendering industry in the United States and elsewhere recycles animals and animal tissues considered unfit for human consumption into, among other things, animal feed; diseased animals are routinely part of such recycling. The United States trades extensively in animals and the full range of animal products.
No test for BSE or TSE infectivity has been proven adequate for diagnosis in humans or animals before symptoms appear or for screening blood and other products. Tests to detect proteins from cattle in animal feed do not distinguish between milk and blood proteins that are allowed and meat and bone proteins that are not. Furthermore, methods to test animal feeds are based on the analysis of genetic material, bone, and protein, all of which are degraded or destroyed in the rendering process. The lack of unique genetic material associated with BSE prions has led scientists to look for other biological markers for the disease, such as accumulations of abnormal forms of the prion protein in various tissues. Development of valid, sensitive, rapid, and reliable tests for live animals is difficult because the specific agent has not been fully identified and elicits no detectable immune response. Furthermore, efforts are hampered by the limited scientific understanding of BSE and other TSEs, including when during the incubation period infectivity appears, what mechanism causes infection, and whether infectivity is ever present in blood. Four federal agencies are primarily responsible for overseeing the many imported and domestic products that could pose a risk of BSE and for surveillance programs designed to detect and monitor animal and human diseases: The U.S. Customs Service screens all goods entering the country to enforce Customs laws and laws for 40 other agencies. USDA’s Animal and Plant Health Inspection Service monitors the health of domestic animals and screens imported animals and other products to protect animal health. USDA’s Food Safety Inspection Service monitors the safety of imported and domestically produced meat, poultry, and some egg products. 
FDA, within the Department of Health and Human Services (HHS), monitors the safety of all other foreign and domestic food products (including dietary supplements and animal feed), as well as vaccines for humans, drugs, cosmetics, medical devices, and the human blood supply. In addition, two other HHS agencies—the Centers for Disease Control and Prevention and the National Institutes of Health—monitor human health to detect vCJD should it appear and conduct research to better understand TSEs and the prions thought to cause them. In August 1997, FDA banned potentially BSE-infective animal proteins in feed for cattle and other ruminants. Proteins are added to feed to promote animal growth and can be derived from a number of sources, including animal meat and bone meal, fishmeal, and plant products. The feed ban prohibits the use of most animal-derived proteins in cattle feed. It also requires that, among other things, feed and feed ingredients that contain the prohibited proteins be labeled “Do not feed to cattle or other ruminants;” firms that handle both prohibited and nonprohibited feed and feed ingredients have procedures to ensure that the two are not commingled; and firms maintain records sufficient to track feed materials through their receipt and disposition for certain periods. The ban excludes animal blood and blood products, gelatin, plate waste, milk and milk protein, and protein derived from pigs and horses (and other equines). Renderers, feed manufacturers and blenders, and feed distributors are subject to the ban. Recent research on the ability of animals to be “silent” carriers of TSEs from another species raises questions about the advisability of including in feed for cattle, or other ruminants, proteins from animals such as pigs and horses that are currently not thought to be susceptible to BSE and other TSEs, according to researchers at the National Institutes of Health. 
Specifically, in November 2001, these researchers reported that even though mice experimentally infected with hamster scrapie did not develop clinical disease, infectivity persisted in the brains and spleens of the mice throughout their life spans. Although available laboratory methods were not sufficiently sensitive to detect the infectivity in these mice, the researchers could infect other mice and hamsters with tissue from the original asymptomatic mice. The European Commission—the executive and legislative body of the European Union—has had its Scientific Steering Committee conduct assessments of the geographical risk of BSE for countries that requested an assessment. Between July 2000 and November 2001, these scientific experts issued assessments for 49 countries, including the United States, which the experts stated was unlikely to have BSE, but they also stated that the possibility could not be excluded. BSE differs greatly from foot and mouth disease (FMD). FMD is a highly contagious viral disease that primarily affects cloven-hoofed animals, including cattle, sheep, swine, and goats, and last appeared in the United States in 1929. In contrast to BSE, FMD does not threaten humans, rarely causes death in afflicted animals, and has an incubation period of 24 hours to 21 days. In addition, the virus that causes FMD can be killed using standard sterilization procedures. This report deals only with BSE. We also have a study underway, to be issued later in 2002, of federal measures to control the threat FMD may pose to U.S. livestock. Current federal prevention efforts cannot sufficiently ensure the continuing absence of BSE in the United States. The introduction and spread of BSE in the United States could stem from cattle and cattle-derived products imported from countries that subsequently developed BSE and from gaps in import controls, animal testing, and feed ban enforcement.
As a result of these problems, consumers may unknowingly eat foods that contain central nervous system tissue from a diseased animal. Since 1989 and as recently as 2001, USDA and FDA have identified countries with BSE or at risk for BSE and issued import restrictions on ruminant-derived material from those countries. Figure 3 presents a timeline of the actions taken by USDA and FDA during that period. Figure 4 shows the countries on which the United States currently imposes trade restrictions for BSE-risk items. Although federal agencies have acted to reduce the possible ways that BSE-infected animals or products could enter the country, the United States has imported about 1,000 cattle; about 23 million pounds of inedible meat by-products, including meat and bone meal; about 101 million pounds of beef; and about 24 million pounds of prepared beef products during the past 20 years from countries where BSE was later found. These numbers represent a fraction of total imports in each category—0.003 percent of cattle, 0.665 percent of meat by-products, 0.314 percent of beef, and 0.728 percent of prepared beef products. In light of the long incubation period for BSE (up to 8 years), the possibility that some contaminated animals or products have entered the United States cannot be ruled out. The United States imported 334 breeding and dairy cattle from the United Kingdom between 1980 and 1989. According to USDA, 173 of these animals could have been used in animal feed or entered the human food supply. In addition, the United States imported 443 breeding and dairy cattle from continental Europe between 1983 and 1997, some of which may also have been used in animal feed or in the human food supply. Since 1996, USDA has placed under quarantine any of these imported cattle it has found still alive. These animals are monitored and, when they die, USDA obtains brain samples to test for BSE. Thus far, all tests on these animals have been negative. 
As of November 16, 2001, three head of cattle from the United Kingdom and five from continental Europe were still alive and being monitored. The United States also imported 242 cattle from Japan between 1993 and 1999. Japan reported its first case of BSE in September 2001. As of November 28, 2001, USDA had located 214 of these cattle. According to USDA, 24 of these cattle had gone to slaughter or to rendering, 40 had been exported, and 150 were still alive. USDA has begun monitoring those animals and is attempting to locate the remaining 28 cattle. In its evaluation of the potential for BSE in the United States, the Harvard study considered the ban on imports of cattle from the United Kingdom as one of the United States’ key prevention measures. The study assumed that remains from some of the cattle imported from the United Kingdom could have been used in animal feed, food for human consumption, or both. Although more than 95 percent of the study’s simulations, based on exposure to a low infective dose, resulted in no BSE cases in cattle, a few resulted in substantial numbers of cases. The study also assumed that cattle imported from continental Europe after 1996 had been traced and their movements controlled; it stated that these cattle present virtually no risk for introducing BSE to the United States. However, the Harvard study did not take into account the 242 cattle imported from Japan between 1993 and 1999. The discovery of BSE in Japan occurred just before Harvard issued the results of its study. The United States also imported about 23 million pounds of inedible meat by-products—which would include meat and bone meal and other animal-derived meals, flours, and residues—between 1980 and 2000 from countries later found to have BSE (see fig. 5). However, the amount of meat by-products derived from cattle is uncertain because the code Customs uses to classify such shipments includes by-products from cattle or other animals.
Likewise, any meat and bone meal imported under that code could be from cattle or other animals. While experts, including the Harvard researchers, see the risk of exposure posed by these shipments as extremely low, if any cattle feed contained BSE-infected meat and bone meal, it could create an opportunity to contaminate U.S. cattle. The beef and prepared beef products that the United States imported from countries that later found BSE, were for human consumption. According to scientific experts, meat products could represent a risk to people who ate them if the meat came from a BSE-infected animal (see figs. 6 and 7). Until February 2001, USDA regulations allowed the import of beef and beef products from countries with BSE or at risk of BSE if the facility that processed the meat did not receive, store, or process ruminant material from a country with BSE or at risk for BSE. In addition to the BSE risk posed by past imports, a small but steady stream of BSE-risk material may still be entering the United States through international bulk mail. USDA inspectors at a New Jersey international bulk mail facility have begun using new x-ray technology that clearly distinguishes organic from inorganic matter to screen packages for products that pose a risk of animal and plant diseases. At this facility, we saw USDA inspectors seize one package that contained beef soup mix from Germany, one of the countries from which the United States restricts trade in beef products. Inspectors also showed us a package from Ireland that was labeled “cutlery,” but contained corned beef. From May through October 2001, USDA inspectors, using the new x-ray technology, screened about 7 percent (about 116,000) of the over 1.5 million packages that passed through the New Jersey facility. Of the screened packages, 570 contained one or more at-risk beef or beef-derived products. 
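The screening figures above can be put in rough perspective with a back-of-the-envelope extrapolation. This is our own illustration, not a GAO estimate: it assumes the roughly 7 percent of packages that were screened are representative of the full mail stream, which the report does not establish.

```python
# Illustrative extrapolation from the New Jersey bulk mail facility
# figures (May through October 2001). ASSUMPTION: screened packages
# are representative of all packages, which the report does not claim.

packages_total = 1_500_000   # "over 1.5 million packages"
packages_screened = 116_000  # about 7 percent of the total
at_risk_found = 570          # screened packages with at-risk beef products

screen_rate = packages_screened / packages_total  # ~0.077
hit_rate = at_risk_found / packages_screened      # ~0.0049

# Under the representativeness assumption, roughly this many at-risk
# packages would have moved through the facility over the period:
estimated_at_risk_total = round(hit_rate * packages_total)

print(f"{screen_rate:.1%} of packages screened")
print(f"{hit_rate:.2%} of screened packages contained at-risk items")
print(f"estimated at-risk packages overall: ~{estimated_at_risk_total}")
```

Even under this crude assumption, the arithmetic suggests that the 570 seized packages are a small fraction of the at-risk items likely entering through the unscreened majority of the mail.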
However, USDA does not screen packages at the New Jersey facility during the 24 hours each week when inspectors are not on duty. According to the inspectors, the screening rate was low because only one or two inspectors are on duty at any time, and each has only seconds to visually inspect packages as they pass by on a conveyor belt. While all 14 international bulk mail facilities in the United States have some sort of x-ray technology that can distinguish organic from inorganic material, the new technology—used only at the New Jersey facility—provides greater accuracy and clearer imagery. The new technology is also compatible with the conveyor system and can be placed over the conveyor belt. USDA officials told us that the new x-ray technology would facilitate the inspection of international bulk mail arriving in the United States. At-risk items may also slip through federal inspections at ports of entry. Customs often finds inaccuracies in importer-provided information during its annual reviews of trade compliance and, as a result, BSE-risk products may not be flagged for further inspection. For example, Customs found a shipment of animal feed ingredients incorrectly classified as pet food by the importer. It also found a shipment of animal feed identified by the importer as originating in Canada that inspectors discovered originated in Switzerland. For fiscal year 1999, Customs reported that importer-provided information on shipments of live bovine animals (e.g., cattle, bison, and buffalo) was inaccurate in over 24 percent of samples taken. Information on shipments of fresh or frozen beef was inaccurate in over 21 percent of samples and on shipments of animal feed in over 24 percent of samples. Additionally, the ever-increasing volume of imported shipments strains inspection resources for both FDA and USDA.
In October 2001, we reported that during fiscal year 2000, FDA inspected about 1 percent of the over 4 million imported food entries under its jurisdiction. Additionally, FDA inspected less than 1 percent of the more than 146,000 entries of imported animal drugs and feeds. FDA has acknowledged that the increased volume of imports has severely hampered its ability to inspect a sufficient portion of imports. Specifically, while imported shipments under FDA’s jurisdiction have risen dramatically in recent years, the agency’s inspection staff has remained almost static since 1992. Prompted by bioterrorism concerns, the Secretary of Health and Human Services requested $61 million in October 2001 to hire 410 additional inspectors and other personnel to allow increased inspections of imported food products. In 1997 we reported that USDA’s inspection workload had increased dramatically since 1990; we concluded that USDA had little assurance that it was deploying its limited inspection resources at the ports of entry that are most vulnerable to the introduction of pests and diseases. USDA has acknowledged the lack of inspection coverage and, in the wake of foot and mouth disease outbreaks in Europe and other countries, authorized $32 million in fiscal year 2001 to hire 350 new inspection personnel and additional canine inspection teams at U.S. borders and ports of entry. USDA began testing animal brains to detect BSE in domestic cattle in 1990. This surveillance program consists primarily of collecting and analyzing brain samples from adult cattle with neurological symptoms and adult animals that were nonambulatory at slaughter. Testing animal brains is a key measure to detect BSE, and USDA’s surveillance program should build on current efforts to increase the number of brain samples tested each year, according to officials from organizations representing the beef and grain industries, state officials, and consumers, as well as federal officials.
As table 1 shows, the number of samples collected and tested by USDA in its surveillance program has generally increased each year. The table also shows that a substantial portion of those samples have been taken from nonambulatory cattle since 1994, when USDA first began to collect this information. USDA has increased the proportion of samples from nonambulatory cattle because research has shown that this population includes animals that might have subtle neurological symptoms or injuries resulting from neurological impairment. In fiscal year 2001, these animals accounted for more than 90 percent of the 4,870 brains collected and tested by USDA. The remainder includes brain samples from animals rejected at slaughter for signs of neurological disease. In addition to increasing the sample size and the number of nonambulatory cattle tested, USDA has broadened its testing efforts. USDA tests samples using two complementary laboratory methods and conducts surveillance for two TSEs—scrapie in sheep and chronic wasting disease in deer and elk—that already exist in the United States. USDA officials and many scientific experts believe surveillance and eradication of scrapie and chronic wasting disease are important, in part, because of the suspected link between scrapie in the United Kingdom and the appearance of BSE, and because both have been experimentally transferred to other species. Although USDA has strengthened its surveillance efforts, the program does not include many samples from cattle that die on farms. Scientific experts consider these animals a high-risk population because they are generally older and the reasons for their death are often unknown. USDA told us that efforts to obtain samples more systematically from such animals are limited largely by the dispersed nature of the domestic livestock industry, the lack of adequate laboratory capacity to conduct the tests, and the lack of sufficient staff and time to collect the samples.
When animals die on farms, they may be buried on the farm, taken to landfills, or collected by renderers who recycle animals and other animal tissues into, among other things, animal feed. In 1998 USDA implemented a cooperative program with the rendering industry to ensure that carcasses of animals condemned at slaughter for signs of neurological disease are held until test results are completed. Under this program, USDA may share the expenses to store or dispose of carcasses during the testing period. USDA was not able to provide us with information on how frequently the program has been used, but it has been used only sporadically, according to USDA officials and the USDA veterinarians and renderers we spoke with in nine states and Puerto Rico. In its evaluation of the potential for BSE in the United States, the Harvard Center for Risk Analysis included animals that die on farms as a potential source of BSE exposure. According to its simulation model, excluding from the rendering process those animals that die on farms significantly reduces the potential for cattle to be exposed to BSE through animal feed. Harvard’s report also notes that farmers may not be willing to send animals displaying neurological symptoms to slaughter, thereby reducing the likelihood that infected animals would be inspected by USDA at slaughterhouses. Once dead, these animals might be rendered, as assumed in the simulation model, or disposed of on farms. According to USDA officials, when the Harvard study was issued to the public, the Secretary of Agriculture announced plans to more than double the number of BSE tests conducted in fiscal year 2002 to more than 12,000. Federal and state officials and the scientific community agree that if BSE were to be found in a U.S. herd, a well-enforced feed ban would prevent its spread to other herds. State inspectors (who conduct about 80 percent of inspections) and FDA inspectors document their feed ban inspections on inspection forms.
FDA headquarters compiles and maintains this information in a database, and it provided us the information in that database through October 26, 2001. According to FDA’s data, more than 12,000 inspections have been conducted since 1997 at more than 10,000 firms, including renderers, feed manufacturers, feed haulers, and distributors, as well as at on-farm feed operations. According to FDA’s October 2001 quarterly update that summarized results of feed ban inspections, 364 firms were out of compliance. In addition, FDA believes that not all firms that should be subject to the ban have been identified and inspected—at least 1,200, based on industry estimates (see table 2). However, we could not verify these data because we found significant flaws in FDA’s database, which we discuss later in this report. FDA did not take prompt enforcement action to compel firms to comply with the feed ban. When we began this study, in April 2001, the only enforcement action FDA had taken was to issue two warning letters in 1999. The first letter was issued in May 1999—21 months after inspections began. However, since inspections began in 1997, FDA has reported hundreds of firms out of compliance—most often for failure to meet requirements to label feed that contained prohibited proteins or for including prohibited proteins in cattle feed. In our analysis of individual inspection forms, we found several instances in which firms were out of compliance in repeated inspections, yet FDA had not issued a warning letter. We also found instances in which firms were out of compliance but had not been reinspected for a year or more—and in some cases for more than 2 years. Between February and November 2001, FDA issued warning letters to another 48 firms. In addition, 17 firms voluntarily recalled feed, including 9 that had been issued a warning letter.
As of November 30, 2001, FDA or states had reinspected 33 of the total of 50 firms that had been issued warning letters (2 in 1999 and 48 in 2001). Six of the firms were still out of compliance on reinspection. FDA has no enforcement strategy for feed ban compliance that includes a hierarchy of enforcement actions, criteria for actions to be taken, time frames for firms to correct violations, and time frames for follow-up inspections to confirm that violations have been corrected. According to FDA, rather than taking enforcement actions, it has emphasized educating firms subject to the feed ban about the ban’s requirements and working with those firms to establish cooperative relationships. FDA reported that some states might have taken enforcement actions, including requiring firms to recall noncompliant feed. However, FDA does not track enforcement actions taken by states; therefore, it does not know the extent of such actions. Even if FDA were to actively enforce the feed ban, its inspection database is so severely flawed that—until corrected—it should not be used to assess compliance. Nonetheless, FDA uses the database to manage and oversee compliance, respond to congressional inquiries about compliance, and keep industry and the public informed. From our review of FDA’s database of 12,046 feed ban inspection records (as of October 26, 2001), we found records lacked unique identifiers, were incomplete, contained inconsistent or inaccurate information, and were not entered into the database in a timely manner. Examples of the severe flaws we found include: Entries for 5,446 inspections—or about 45 percent of all inspections—lack information to uniquely identify individual firms. As a result, the data cannot be used to reliably determine the number of firms inspected, compliance trends over time, or the inspection history of an individual firm. 
In at least one case, the same unique identifier had been applied to six different firms and, in another case, a firm had two unique identifiers. In addition, we found 232 cases in which one or more inspections of the same firm lacked the unique identifier. Entries for 301 inspections of firms that handle prohibited proteins contain no response to whether feed was properly labeled; entries for 438 inspections of firms that handled both prohibited and non-prohibited proteins had no response to whether prohibited proteins were included in feed intended for cattle. Entries where responses to questions about feed labeling or whether prohibited proteins were included in feed intended for cattle indicated that the firms were in compliance; however, inspectors’ notes contained in other sections of the database contradicted the responses and indicated the firms were not in compliance. Inspections were not entered into the database. In assessing the warning letters, we discovered references to inspections that do not appear in the database. In fact, the inspection record for the firm that received the first warning letter—in May 1999—does not appear in the database. Inspections were not entered into the database in a timely fashion. We found several instances where inspections dating back to 1998 and 1999 were not entered into the database until mid to late 2001—too late for FDA to reinspect in a timely fashion if violations existed. Also, too much time had passed for FDA to reliably clarify inconsistent or conflicting information or obtain answers to questions left blank on the inspection forms. Moreover, any compliance information FDA reported to congressional overseers and others would not have been reliable. Several states did not use FDA’s inspection form, but instead used their own state-developed forms. Because the questions were different, certain assumptions had to be made when these data were entered into FDA’s database. 
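Flaws like missing or reused identifiers and blank compliance fields are record-level problems that simple integrity checks can surface. A minimal sketch of such checks follows; the records, field names, and values below are invented for illustration, since the report does not describe FDA's actual database schema.

```python
# Hypothetical inspection records illustrating the integrity checks
# implied by the database flaws described above. All names and data
# are invented; FDA's real schema is not described in the report.
records = [
    {"firm_id": "R-001", "firm": "Acme Rendering", "label_ok": "Y"},
    {"firm_id": None,    "firm": "Basin Feed",     "label_ok": "N"},  # no unique identifier
    {"firm_id": "R-001", "firm": "Cedar Mills",    "label_ok": "Y"},  # identifier reused by a different firm
    {"firm_id": "R-004", "firm": "Delta Feeds",    "label_ok": None}, # compliance question left blank
]

# 1. Records lacking a unique identifier cannot be tied to a firm's
#    inspection history or to compliance trends over time.
missing_id = [r["firm"] for r in records if r["firm_id"] is None]

# 2. One identifier applied to several firms corrupts per-firm analysis.
firms_by_id = {}
for r in records:
    if r["firm_id"] is not None:
        firms_by_id.setdefault(r["firm_id"], set()).add(r["firm"])
reused_ids = {i: sorted(f) for i, f in firms_by_id.items() if len(f) > 1}

# 3. A blank compliance field is an unknown, not an answer.
blank_fields = [r["firm"] for r in records if r["label_ok"] is None]
```

Checks of this kind run in a single pass over the records; that they were evidently not applied before the data were used for compliance reporting is part of what the report criticizes.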
The HHS Office of Inspector General noted, in a June 2000 report, that many FDA agreements with states, whose inspectors conducted about 80 percent of feed ban inspections, do not ensure that states routinely provide FDA with standardized information on the inspections they conduct. In September 2001, FDA revised the inspection form and asked states to use the revised form. States are free to ask other questions during the inspections, but FDA has also asked them to include FDA’s questions in FDA’s format. The database is incomplete. It does not include all firms subject to the feed ban. FDA officials relied on the personal knowledge of state and FDA field staff and on membership lists from industry groups to identify and locate firms. However, our review of membership records for the National Renderers Association—for the years 1998 to 2001—disclosed 21 rendering firms that were not in FDA’s database. According to association records, those firms process meat and bone meal and other products that could contain proteins subject to the feed ban. FDA did not count data entries with blanks—no responses—in the selected data fields it uses when it reports on compliance. Therefore, when FDA provides compliance information to the Congress—and when it publishes that information electronically—the data are misleading and the number of firms identified as out of compliance is undercounted. For example, for the 364 firms identified as out of compliance in FDA’s October 2001 update—the source for information in table 2 above—FDA assumed that all entries with blanks in the compliance fields were in compliance. However, we found entries where firms had blanks in the data fields FDA used, yet contained inspector comments in other fields showing that the firms were not in compliance. FDA also did not include these firms on published lists of noncompliant firms. About half of the inspection records contain inspector comments.
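The undercounting mechanism described above can be bounded with a few lines of arithmetic. The entries below are invented for illustration; the point is only that treating a blank field as "in compliance" collapses an interval of possible counts down to its minimum.

```python
# Invented compliance entries showing why counting blank fields as
# "in compliance" understates violations. "Y" = in compliance,
# "N" = out of compliance, None = field left blank on the form.
entries = ["Y", "Y", "N", None, "Y", None, "N", None]

reported_out = sum(1 for e in entries if e == "N")  # blanks silently ignored
blanks = sum(1 for e in entries if e is None)

# Treating blanks as unknown, the true number of noncompliant entries
# lies somewhere between the reported figure and reported + blanks.
lower_bound, upper_bound = reported_out, reported_out + blanks

print(f"reported out of compliance: {reported_out}")
print(f"true count is between {lower_bound} and {upper_bound}")
```

Reporting the interval, or at least flagging the blanks, would have made the uncertainty visible to the Congress and the public instead of presenting the minimum as the count.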
On those entries where blanks also existed, the inspector comments showed that firms were in compliance in some instances and out of compliance in others. An FDA official told us that the database was not originally intended to track compliance of individual firms, but rather to guide the agency’s efforts to educate firms subject to the ban by illustrating particular states or practices that needed more intensive focus. However, FDA has no information system other than the inspection database to track compliance with the feed ban. FDA has not placed a priority on oversight of the feed ban. From the implementation of the feed ban in August 1997 until early 2001, one person in FDA’s Center for Veterinary Medicine was responsible for feed ban management. Although state and FDA District Office inspectors conducted the inspections, this individual designed the inspection form, compiled inspection data, and made enforcement decisions—in addition to that individual’s other duties. Furthermore, the inspection form had not been pretested—a standard practice to ensure that questions are interpreted and answered consistently. In the course of our review, FDA attempted to clean up the database so that it could serve as an accurate management tool. However, in October 2001, FDA turned that effort over to a contractor to (1) review the completeness of the feed ban inspection database to ensure that findings have been captured, including written comments by the inspectors on inspection forms; (2) analyze the data and present the findings in a report; and (3) review the current enforcement strategy to determine program strengths and weaknesses and to make recommendations for improvements that will better support FDA’s compliance goals. FDA expects this work to be completed by February 2002. Also in October 2001, FDA entered into a separate contract to reconfigure the data so that they can be incorporated into FDA’s primary database for all other inspection activities. 
Work on the two contracts is to be carried out concurrently. This work is to be completed in the spring of 2002. In evaluating the potential for BSE in the United States, the Harvard study noted that the feed ban is key to preventing the spread of BSE. It added, however, that the effectiveness of the feed ban is somewhat uncertain because compliance rates are not “precisely” known. Harvard’s simulation model assumed the feed ban was compromised to some extent by on-farm feeding of prohibited proteins to cattle and by some noncompliance with the requirement that feed containing prohibited protein carry a warning label. The study’s observations underscore the importance of the problems we found in FDA’s oversight and enforcement of the feed ban. Some consumers in the United States regularly eat cattle brains and central nervous system tissue. Brains are a routine part of the diet in several cultures. Eating such foods would not pose a safety concern unless they were from a BSE-infected animal. However, most consumers would not realize that central nervous system tissue could be found on many beef cuts and in several beef products. For example, bone-in meat cuts, such as T-bone steaks, are stripped directly from the animal’s vertebrae and may contain portions of the spinal cord. Many other edible products, such as beef stock, beef extract, and beef flavoring, are frequently made by further processing (e.g., boiling) the skeletal remains (including the vertebral column) of the carcass after most of the meat has been removed. USDA officials told us that they would expect to find central nervous system tissue in these foods. However, based on food quality—not food safety—concerns, USDA does prohibit central nervous system tissue in beef products that are labeled as meat and that are made using technology that mechanically removes meat from the bones of slaughtered animals in a way that approximates deboning by hand. 
Products made from meat using this technology include sausages and hot dogs. USDA has found central nervous system tissue in meat that was mechanically removed using a technology known as advanced meat recovery systems. USDA estimates that 28 beef processing plants use this technology and, in 2000, recovered 257 million pounds of beef. According to a beef industry official, this technology recovers up to 10 additional pounds of meat per carcass. Because it is not a food safety issue, USDA has not rigorously enforced its prohibition against the presence of central nervous system tissue in meat extracted by using the advanced meat recovery system technology. Since 1997, USDA has tested a total of 63 beef samples from 18 of the plants that use this technology. Of those samples, 12 tested positive for central nervous system tissue. USDA has not tested beef samples from the other plants that use the technology in at least 4 years. When its tests found central nervous system tissue in samples, USDA did not follow up to ensure that the processing plants relabeled the contaminated meat products as something other than meat. USDA plans to use the Harvard study to help it determine whether the presence of central nervous system tissue should be a food safety matter—whether all or some central nervous system tissue should be considered unsafe for human consumption. The Harvard study notes that a ban on the use of spinal cords, brains, and vertebral columns in human food or animal feed significantly reduced the risk of exposure in its simulation model. As part of its evaluation of the implications of the study, USDA will issue a Federal Register Notice after January 2002 to solicit comments on, among other things, the safety of the advanced meat recovery technology and any meat that comes from the vertebral column. 
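The testing figures above imply thin inspection coverage, which a quick back-of-envelope check makes concrete. This is our own illustrative arithmetic using the figures reported above, not a USDA calculation:

```python
# USDA advanced meat recovery (AMR) testing figures as reported above (since 1997).
plants_using_amr = 28   # plants USDA estimates use the technology
plants_tested = 18      # plants from which samples were taken
samples_tested = 63
samples_positive = 12   # samples containing central nervous system tissue

positive_rate = samples_positive / samples_tested
plants_never_tested = plants_using_amr - plants_tested

# Roughly 19 percent of samples were positive, and 10 plants were never tested.
print(f"{positive_rate:.0%} of samples positive; {plants_never_tested} plants untested")
```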
In addition, FDA’s TSE Advisory Committee—composed of USDA, National Institutes of Health, Centers for Disease Control and Prevention, and other federal experts, as well as academic scientists and medical experts, and consumers—recommended, in October 2001, that FDA consider taking regulatory action to ban brains and other central nervous system tissue from human food because of the potential risk of exposure to BSE-infected tissue. According to FDA, it is considering banning central nervous system tissue from the foods it regulates as well as from cosmetics and over-the-counter drugs. FDA told us it is taking this action to ensure that consumers are protected from consuming BSE-contaminated products. Representatives of two consumer groups we interviewed expressed concern that central nervous system tissue remains a part of food generally and that the use of advanced meat recovery technology could expose consumers unknowingly to such tissues. If BSE were discovered in U.S. cattle, beef exports and domestic beef consumption would drop, damaging many sectors of the economy, according to federal economists. If the infected cattle were to enter the food supply, some people might develop vCJD. The economic impacts of a BSE outbreak in the United States would include the direct impacts on certain sectors, such as the beef and livestock industries, and indirect impacts on related industries, such as the animal feed and restaurant industries. In addition, an assessment of economic impacts would include costs relating to the public sector, such as farmer compensation payments, increased spending on research and development, and increased costs to government agencies. While the extent to which economic impacts would pass from one sector to another is unclear, these effects would eventually channel through to several sectors of the economy. 
Figure 8 lists the sectors and some of the likely qualitative impacts within each sector in the event of a BSE outbreak in the United States. To date, however, there are no comprehensive economic studies of the total direct and indirect economic impacts of a potential BSE crisis in the United States. A complete assessment of these impacts is difficult to forecast given the uncertainties surrounding key assumptions, such as the source of the BSE, the number and timing of cases, and the public’s reaction. For instance, if BSE were to enter the country through the importation of meat and bone meal rather than live cattle imports, the economic consequences could be more pervasive, because the meat and bone meal could potentially contaminate many cattle. Another difficulty in estimating impacts is the problem of determining how the increased costs of BSE would be passed on from the farmer to the final consumer in the beef-marketing channel. Moreover, studies that estimate losses due to BSE from other countries may not be totally applicable to the United States. Food safety experts have noted that perceptions about food safety risks vary from country to country, and the consumer impacts of BSE in one country may not be applicable to another country. If BSE were found here, the economic impact on the $56 billion beef industry and related industries could be devastating, according to USDA economists. For instance, consumers in the United States, in response to reports of BSE-infected cattle, may for a period of time restrict their purchases of beef and products containing beef. That response would be felt not only by the cattle and beef industries, but also by peripheral industries. For example, hamburger chains and soup and frozen dinner manufacturers could see dramatic declines in business. Similarly, in international trade, a loss in beef exports may be more devastating for the United States than for other beef-producing countries. 
In particular, since the United States exports nearly 10 percent (by volume) of its total beef production (about 25 percent of total world beef exports), the trade sector is also critical in estimating total economic impacts. As a first approximation, however, FDA officials estimated the direct effects on the beef and livestock industries based on a 1998 study of the economic impacts of the first year of the BSE outbreak in the United Kingdom. They estimated that if the United States were to experience an outbreak as severe as the one in the United Kingdom, the beef industry could lose as much as $15 billion in sales revenue. Specifically, these costs were based on the assumption that in the event of a BSE crisis, U.S. domestic and export demand would decrease by the same amounts as in the United Kingdom—a 24 percent decline in domestic beef sales and an 80 percent decline in beef and live cattle exports. In addition, FDA estimated the livestock sector would incur a minimum of $12 billion in costs to slaughter and dispose of at-risk cattle. This estimate was based on an assumption that the United States would need to destroy about four times as many cattle as the United Kingdom. However, the FDA analysis did not include the offsetting effects of government payments, as occurred in the United Kingdom, shifts in consumer demand for other types of meat, or the effects on other related sectors of the economy. Overall, however, FDA noted that those firms primarily engaged in the production of beef products would incur severe economic disruption. In terms of the health risks, if infected cattle were to enter the food supply, some people might develop vCJD; however, scientific experts disagree about how many people could develop the disease. Many experts believe that vCJD is difficult to contract and, therefore, that relatively few people would develop the disease. 
However, other scientific experts believe that, because of the long incubation period, no one can predict whether few or many might contract vCJD. According to some scientific experts in the United Kingdom, as many as 100,000 people in Europe may develop vCJD as a result of the BSE outbreak there. This could include Americans who lived in countries where BSE occurred. In addition to these direct health implications, an outbreak of BSE in the United States would take an emotional toll on consumers who believe federal regulators will protect them from this devastating disease. Moreover, according to a National Institutes of Health scientist, the appearance of vCJD could cast doubt on the safety of organ donations and the U.S. blood supply. Any health implications would translate into medical treatment and related financial and economic costs, such as lost productivity. The United States prohibited the import of cattle and other ruminants 3 to 5 years earlier than many other countries. Its surveillance program to test cattle brains for BSE also met international targets for the number of animals tested earlier than many other countries. However, the United States has a more permissive feed ban than other countries—one that allows cattle feed to contain proteins from horses and pigs. FDA is reviewing whether these ingredients should continue to be allowed in cattle feed. Finally, as in most countries that are BSE-free, including the United States, cattle brains and other central nervous system tissue can be sold as human food. The European Commission’s Scientific Steering Committee has had scientific experts assess countries, including the United States, for the risk that BSE could enter the country through imported animals and feed and be spread through recycled animal proteins in feed. As of November 30, 2001, risk assessments had been completed for 49 countries. 
According to the scientific experts, most European countries are likely to have BSE, even if it has not yet been confirmed by surveillance testing, or to have BSE at a higher level of incidence than thought. The scientific experts assessed the United States as unlikely to get BSE, but indicated that the possibility could not be excluded. Table 3 presents the results of the 49 BSE risk assessments completed through November 30, 2001. Using information on each country’s past and present potential exposure and ability to stop the spread of BSE, the scientific experts qualitatively assessed the probability that an animal in a country is infected with BSE. The assessments relied on data voluntarily supplied by the countries and on discussions with the officials familiar with BSE prevention efforts from each country on (1) the potential import of BSE via live cattle or contaminated feed, (2) the adequacy of surveillance testing to detect the presence of BSE, (3) cattle feeding and rendering practices, and (4) the use of potentially infective tissue from cattle. The scientific experts also focused on the import of infected animals and animal feed as the only initial sources of infection and on animal feed as the only source of spread. The experts did not evaluate the risks from consumer products that could contain BSE-infected tissue. The scientific experts reported using a conservative, reasonable worst-case approach, whenever data or information from countries were insufficient. Based on our analyses of the 49 risk assessments, the United States compared with the other countries as follows in terms of the potential to import BSE, surveillance testing, cattle feeding practices, and use of potentially infective tissue. Potential to import BSE. The United States acted earlier than many countries to ban the import of cattle and meat and bone meal for use in cattle feed from the United Kingdom and other countries where BSE had appeared. 
The United States was one of three countries that banned trade in cattle from the United Kingdom by 1989; six other countries did so by 1994. Nine other countries had formal bans in place by 1996, the year that the United Kingdom stopped all trade in cattle. Actions to halt trade in cattle with other countries where BSE had appeared have been variable, and the United States and some other countries phased in restrictions as cases appeared. Also, many of the assessed countries, particularly those in South America and in Africa, had little or no trade in cattle with the United Kingdom or other countries where BSE had appeared. With regard to the import of meat and bone meal for use in cattle feed, the United States banned imports from the United Kingdom in 1991 and phased in restrictions from other countries as cases of BSE appeared. While one country banned such imports from the United Kingdom as early as 1978, due to concerns about foot and mouth disease, a few countries imported significant amounts of meat and bone meal from the United Kingdom and other BSE countries as recently as 1999. Surveillance testing to detect BSE. The United States is one of three countries that reported meeting Office International Des Epizooties (OIE)-recommended cattle testing levels by 1994. Most countries either had not met OIE levels at the time of their assessments or met the levels after 1994. However, nine countries, including six with BSE, had started or planned to start targeting cattle that die on farms in their surveillance testing. In their assessments of the United States and the other countries, the scientific experts most often recommended that countries improve surveillance largely by including tests of high-risk populations, such as animals that die on farms. Cattle feeding practices (feed bans). Of the 49 countries assessed, 41 had some sort of feed ban in place; however, those bans varied in the extent to which they allowed protein from mammals in feed for cattle. 
Compared to other countries with a ban, the United States and 16 others allow at least some mammalian protein in feed for cattle. For example, the United States and Canada allowed cattle feed to contain protein from horses and pigs. The remaining 24 countries with a feed ban (including 13 that have BSE) prohibit all mammalian protein in cattle feed, although 9 allow such protein in feed for pigs and poultry. Four of the 24 countries have more stringent bans that prohibit mammalian protein in feed for all farm animals—a practice the European Commission asked its member countries to adopt on a temporary basis in 2000. In the assessments, scientific experts found that the potential for commingling prohibited protein with cattle feed existed in most countries. Enforcing existing feed bans was the second most common recommendation made by the scientific experts. In October 2001, FDA officials held a public hearing to elicit comments on, among other things, whether the existing feed ban exemptions should be modified. As of December 17, 2001, FDA had not announced whether it would propose any changes to the ban. Use of potentially infective tissue. Most of the countries assessed that had not found BSE-infected cattle, including the United States, generally allowed the sale of brains and other central nervous system tissue in human food. Nearly half of the countries with BSE prohibited this high-risk tissue in human food, and at least three countries—the United Kingdom, Ireland, and Switzerland—banned mechanically recovered beef, such as that used in meat pies, that may contain central nervous system tissue and had been linked to vCJD. However, the Court of Auditors—the investigative agency for the European Commission—found that efforts by European Union countries to remove potentially high risk tissue from the human food and animal feed chains have not been fully implemented and that the countries could not reach agreement on what constituted high-risk tissue. 
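The feed ban counts in the assessments above can be tallied as a simple consistency check. The category names below are ours, summarizing the text; the figures come directly from the assessments as described:

```python
# Feed ban status among the 49 countries assessed, as summarized above.
assessed = 49
allow_some_mammalian = 17          # the United States and 16 others
prohibit_all_in_cattle_feed = 24   # including 13 countries that have BSE
no_feed_ban = assessed - (allow_some_mammalian + prohibit_all_in_cattle_feed)

countries_with_ban = allow_some_mammalian + prohibit_all_in_cattle_feed
# Matches the report: "Of the 49 countries assessed, 41 had some sort of feed ban."
print(countries_with_ban, no_feed_ban)
```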
BSE and vCJD are devastating, incurable, inevitably fatal diseases. If they enter the country, they can bring dire economic consequences to the cattle and beef industries. Therefore, forceful federal prevention efforts are warranted to keep BSE away from U.S. shores. Nevertheless, Customs has reported significant error rates in importer-provided information for BSE-risk shipments, import controls over bulk mail are weak, and inspection capacity has not kept pace with the growth in imports. Because of these import weaknesses—and because BSE may have entered in imports from countries that have since developed the disease—BSE may be silently incubating somewhere in the United States. If that is the case, then FDA’s failure to enforce the feed ban may already have placed U.S. herds and, in turn, the human food supply at risk. FDA has no clear enforcement strategy for dealing with firms that do not obey the feed ban, and it does not know what, if any, enforcement actions the states may be taking. Moreover, FDA has been using inaccurate, incomplete, and unreliable data to track and oversee feed ban compliance. Furthermore, if there is even a slight chance that BSE is incubating in U.S. cattle, consumer groups believe that the American public has the right to know when food and other consumer products may contain central nervous system tissue that may pose a risk to the food supply. The importance of informing consumers is heightened by concerns raised in the Harvard study and by FDA’s TSE Advisory Committee regarding the potential public health risk posed by consuming such tissue. In addition, although USDA has been proactive in increasing the number of cattle brains tested, it does not test many animals that die on farms, even though it recognizes that older animals and animals that die from unknown causes are at higher risk for BSE. 
To better ensure that the United States is protected from the emergence and spread of BSE, we make the following recommendations: In order to strengthen inspections of imported products that could pose a risk of BSE, we recommend that the Secretaries of Health and Human Services and of Agriculture, in consultation with the Commissioner of Customs, develop a coordinated strategy, including identifying resource needs. In order to strengthen oversight and enforcement of the animal feed ban, we also recommend that the Secretary of Health and Human Services direct the Commissioner of FDA to take the following actions: Develop a strategy, working with states, to ensure that the information FDA needs to oversee compliance is collected and that all firms subject to the feed ban are identified and inspected in a timely fashion. Develop an enforcement strategy with criteria for actions to address firms that violate the ban and time frames for reinspections to confirm that firms have taken appropriate corrective actions. Track enforcement actions taken by states. Ensure that, as contractors modify the inspection database, they incorporate commonly accepted data management and verification procedures so that the inspection data can be useful as a management and reporting tool. In order to help consumers identify foods that may contain central nervous system tissue, we recommend that, as USDA evaluates whether such tissue from cattle poses a health risk, the Secretary of Agriculture consider whether some interim action, such as public service announcements or caution labels or signs, might be appropriate to advise consumers that certain beef cuts and beef products may contain central nervous system tissue; and better enforce the existing labeling requirement for products that contain beef extracted using advanced meat recovery technology and contain central nervous system tissue. 
Additionally, to further help consumers identify foods and other products that may contain central nervous system tissue, we recommend that the Secretary of Health and Human Services consider whether the products HHS regulates, including food, cosmetics, and over-the-counter drugs, should be labeled to advise consumers that the products may contain central nervous system tissue. In order to strengthen the BSE surveillance program, we further recommend that the Secretary of Agriculture increase the number of tests of cattle that die on farms in the BSE surveillance program. We provided HHS, USDA, and Customs with a draft of this report for review and comment. HHS conveyed comments from FDA. FDA concurred with our recommendations and said the report highlighted some key areas where U.S. efforts to prevent BSE could be bolstered. FDA agreed that further improvements in compliance with the feed ban would reduce the risk of introducing and spreading BSE in the United States. However, FDA did not agree that it had misled Congress and the public in reporting on compliance. It is true, as FDA pointed out, that its June 22, 2001, transmittal of compliance information to the Chairman of the House Committee on Energy and Commerce “made an effort to identify the fact that there were reporting problems, including incomplete data, i.e., blanks.” However, we do not believe that caveat conveyed the extent to which the information could be inaccurate. In fact, noncompliance could be much higher than FDA reported, because FDA treated all firms with blanks on compliance questions as if they were in compliance, even though some of those records contained inspector comments stating that the firms were not in compliance. FDA’s transmittal to the Chairman did not disclose this. Therefore, we believe our report is correct in characterizing FDA’s data as misleading. FDA also disagreed with our conclusion that it had not placed a high priority on oversight of the feed ban. 
However, throughout our review, FDA repeatedly pointed out that one individual, in addition to that individual’s other responsibilities, designed the feed ban program, the inspection form, and the database to monitor inspections and, until January 2001, made all decisions regarding enforcement actions. FDA’s comments and our detailed responses are presented in appendix II. USDA largely agreed with our recommendations and said that it will address them as it seeks public comment on any proposed regulatory changes. USDA stated that a portion of the funding it received in the January 10, 2002, Defense Appropriations legislation to bolster its homeland security efforts will be used to increase BSE surveillance. It plans to more than double the number of animals sampled and to obtain more samples from animals that die on farms. USDA also acknowledged its support for providing consumers with information on product contents and for an open process that allows consumers to make choices. However, USDA stated that labeling and warning statements should be reserved for known hazards, which BSE is not in the United States. In light of the experiences in Japan and other countries that were thought to be BSE-free, we believe that it would be prudent for USDA to consider taking some action to inform consumers when products may contain central nervous system or other tissue that could pose a risk if taken from a BSE-infected animal. This effort would allow American consumers to make more informed choices about the products they consume. USDA’s comments and our detailed responses are presented in appendix III. Customs concurred with the report and the recommendations as they related to Customs. Its letter is presented in appendix IV. USDA and FDA also made technical clarifications, which we incorporated as appropriate. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretaries of Agriculture and HHS, the Commissioner of Customs, and other interested parties. We will make copies available to others upon request. If you have any questions about this report, please contact me or Erin Lansburgh at (202) 512-3841. Key contributors to this report are listed in appendix V. To address the effectiveness of federal efforts to prevent BSE or its spread, we focused on oversight activities in four key areas: import controls, compliance with feed rules, meat production, and disease surveillance. This included analysis of import data for calendar years 1980 through 2000 maintained by the U.S. Department of Commerce, the Treasury Department, and the International Trade Commission; analysis of FDA data on inspections for compliance with the feed ban for fiscal years 1997 through 2001; and review of USDA slaughter and meat processing procedures and BSE surveillance documents. To assess the effectiveness of compliance with the animal feed ban, we obtained and analyzed FDA’s feed inspection database to determine the accuracy, completeness, and reasonableness of key data elements, and timeliness of data entry. We interviewed FDA and feed industry officials and reviewed various FDA documents, including BSE inspection forms, assignment memorandums for conducting BSE inspections, and listings of firms that were out of compliance and firms that received FDA warning letters. In addition, we reviewed FDA contract information for evaluating the existing data in the BSE inspection database and for cleaning up the data and incorporating it into the agency’s main database. We did not independently verify the accuracy of trade data maintained by the International Trade Commission or inspection data maintained by FDA. 
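The kinds of accuracy and completeness checks described above (blank compliance fields, and free-text inspector comments that contradict coded responses) can be sketched in a few lines of Python. This is an illustration under assumed field names (`firm_id`, `in_compliance`, `comments`); it is not the actual structure of FDA's database or the code used in our analysis:

```python
# Hypothetical inspection records; the field names are illustrative
# stand-ins for fields in FDA's feed ban inspection database.
records = [
    {"firm_id": "A1", "in_compliance": "Y", "comments": ""},
    {"firm_id": "A1", "in_compliance": None, "comments": "feed not properly labeled"},
    {"firm_id": "B2", "in_compliance": None, "comments": ""},
    {"firm_id": "C3", "in_compliance": "Y", "comments": "prohibited protein in cattle feed"},
]

# Check 1: blank compliance fields. Treating blanks as "in compliance",
# as FDA's reports did, understates noncompliance.
blanks = [r for r in records if r["in_compliance"] is None]

# Check 2: contradictions, where inspector comments indicate a violation
# but the coded response does not say "not in compliance".
contradictions = [r for r in records
                  if r["comments"] and r["in_compliance"] != "N"]

print(len(blanks), len(contradictions))
```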
We also visited two large ports of entry to observe procedures to screen shipments for BSE-risk products, one state to observe feed ban inspections, and another state to observe slaughter and advanced meat recovery operations. To assess the potential health risks and economic impacts of a BSE outbreak in the United States, we met or spoke with federal and state officials, as well as academic experts, industry representatives, and consumer groups, and we reviewed scientific literature. Specifically, we interviewed USDA officials responsible for oversight of imported animals and products, meat, animal disease surveillance, and agricultural statistics; FDA officials responsible for oversight of the feed ban, vaccines and blood, food regulated by FDA, dietary supplements, and imported products; officials at the U.S. Customs Service, International Trade Commission, United States Trade Representative, Department of State, Centers for Disease Control and Prevention, and National Institutes of Health. In addition, we attended public meetings on BSE-related topics sponsored by FDA, HHS, and the American Meat Institute. We also discussed risks and impacts with representatives from the National Association of State Departments of Agriculture, Association of American Feed Control Officials, Center for Science in the Public Interest, Public Citizen, American Feed Industry Association, American Meat Institute, National Cattlemen’s Beef Association, National Grain and Feed Association, National Milk Producers Federation, National Renderers Association, Inc., and the Pet Food Institute. We interviewed officials at the Harvard Center for Risk Analysis and reviewed their report, Evaluation of the Potential for Bovine Spongiform Encephalopathy in the United States, issued in November 2001. To compare federal efforts to those taken by other countries, we reviewed BSE risk assessments of 49 countries, including most major U.S. 
trading partners, prepared by the European Commission’s Scientific Steering Committee. We compared the U.S. prevention efforts with those of countries that have not reported a case of BSE and with countries in which existing prevention measures did not prevent the emergence of BSE. We also reviewed evaluations of BSE prevention programs in member states of the European Union conducted by the European Commission’s Food and Veterinary Office and the European Communities’ Court of Auditors. We conducted our study from April through December 2001 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Health and Human Services’ letter dated January 9, 2002. 1. Our report acknowledges FDA’s ongoing review but also notes that FDA has not set a date to announce a decision on the exemptions. The report also recognizes that recent research suggests the possibility of “silent” incubation in species not previously thought susceptible to TSEs. This research argues against waiting until BSE is found to strengthen measures shown to prevent the spread of the disease. As FDA notes, other countries strengthened their feed bans due to concerns about commingling prohibited and non-prohibited proteins. Such commingling is a common area of noncompliance in the United States. 2. As FDA points out, its June 22, 2001, transmittal of compliance information to the Chairman of the House Committee on Energy and Commerce “made an effort to identify the fact that there were reporting problems, including incomplete data, i.e., blanks.” However, we do not believe that this caveat conveyed the extent to which the information could be inaccurate. In fact, noncompliance could be much higher than FDA reported, because FDA treated all firms with blanks on compliance questions as if they were in compliance. We found that over 700 inspection records for firms that handled prohibited proteins had blanks on compliance questions. 
In its response to the Chairman, FDA did not disclose that some of those records contained inspector comments stating that the firms were not in compliance. Nor did FDA disclose that, at the time it responded to the Chairman, it was aware of the need for “significant improvements in its data collection system for enforcing the feed ban.” As a result, we believe the data were misleading. 3. We believe that the nature and severity of the problems we found in FDA’s management, oversight, and enforcement of the feed ban point to insufficient attention by FDA management. Moreover, the fact that FDA gave all headquarters responsibility to one individual—as an add-on to that individual’s other duties—is further evidence of the relatively low priority FDA gave to its regulatory responsibility. 4. Although FDA’s field inspectors and state inspectors carried out the inspections, FDA headquarters tracked overall compliance with the feed ban and brought together data on FDA field and state compliance inspections. In meetings with FDA officials, we were repeatedly told that a single person had designed the program and the database to monitor inspections and, until January 2001, made all enforcement decisions. While administrative and other support may have been available for this person, the overall design and direction of feed ban implementation rested with this individual. Moreover, because FDA had no other information system, the database that individual developed was FDA’s only mechanism to monitor the program and track feed ban compliance. 5. Although FDA cites a number of high-level interagency policy and technical initiatives aimed at ensuring that BSE-risk products do not enter the United States, our recommendation is grounded in problems we found at the operational level. First, the high error rates in importer-provided information found by Customs are unacceptable. 
Second, the ever-increasing volume of imported shipments strains inspection resources at both USDA and FDA. Third, we observed problems affecting USDA and FDA staff responsible for reviewing import documentation and conducting inspections of shipments, or were told about such problems by federal field personnel. FDA staff told us that they need integrated information technologies, dedicated inspection facilities, and additional staff to effectively address their workload. 6. We do not agree that FDA has made extensive progress implementing our recommendation simply because it periodically meets with states on BSE-related issues and has increased the number of states under contract to conduct inspections. With regard to its progress in identifying the universe of firms subject to the ban, our work shows that FDA’s efforts have not been successful. In reports, FDA states that the number of on-farm feed mills, feed blenders, and feed haulers is still unknown. FDA also asserts that the feed industry has undergone extensive consolidation, but it has not reconciled the number of firms inspected with industry or state estimates. Although FDA asserts it has incorporated into state contracts a requirement to identify firms subject to the ban, the contracts we reviewed did not include such provisions. Moreover, as recently as May 2001, we found that FDA was adding to its database information on inspections conducted in 1998 by states under contract. 7. FDA agrees on the need for a comprehensive strategy for BSE but points out that it began an enforcement strategy in 1998. However, our review shows that the strategy did not contain criteria and timeframes for specific enforcement actions against firms that fail to comply with the feed ban, as our recommendation envisions. FDA’s contention that its initial approach was to educate firms does not explain its failure to take action against firms found out of compliance on repeated inspections. 
Now that the feed ban has been in effect for more than 4 years, FDA points out that inspections have resulted in more than 200 recalls. However, those recalls consist of actions taken by 22 firms, one of which accounted for about 150 recalls. By FDA’s own estimates, more than 300 firms are out of compliance. 8. Regardless of variations in state laws, FDA has instructed states to provide specific information on the feed ban inspections they conduct. We believe FDA should request that these states also include information on enforcement actions taken as a result of those feed ban inspections. 9. While we agree that FDA has initiated efforts to increase the integrity and usefulness of the BSE inspection data, it has not taken the steps necessary to ensure that the inspection data are accurate and complete and recorded in a timely manner. For example, neither the steps listed in FDA’s letter nor the terms of the contracts we reviewed include periodic assessment of error rates or controls to help ensure data entered are complete and accurate. Moreover, FDA’s response does not address how the data on past BSE inspections will be merged with the Field Accomplishment Compliance Tracking System. Many of the firms have never before been subject to FDA oversight and would not have such control numbers to effectively merge the old and new data. Also, FDA has not included steps to capture timeliness of inspections, enforcement actions, and follow-up, especially for past inspections. The following are GAO’s comments on the Department of Agriculture’s comments received January 11, 2002. 1. While USDA states that it agrees with our recommendation, its discussion of policy-level coordination and strategic planning among various agencies does not fully address the substance of our recommendation. Our recommendation focuses on actions to strengthen the inspection of imported products at an operational level, including identifying resources needed to do so. 2. 
With regard to our recommendation to consider interim action to advise consumers when products may contain central nervous system tissue, USDA acknowledged its support for providing consumers with information on product content and for an open process that allows consumers to make choices. However, USDA stated that labeling and warning statements should be reserved for known hazards. We believe that it would be prudent for USDA to consider taking some action to inform consumers when products may contain central nervous system or other tissue that could pose a risk if taken from an infected animal, especially in light of the experiences in Japan and other countries that were believed to be BSE-free. This would allow consumers to make informed choices about the products they consume. Caution labels or signs, if used, could facilitate more timely removal of products that could pose a health risk if BSE were to appear. 3. USDA states that it is more than doubling the number of animals sampled in its BSE surveillance program for 2002 and that it intends to obtain more samples from animals that die on farms. USDA notes that properly equipped laboratory facilities will be needed to support the increased surveillance. Because it is uncertain whether these facilities will be available, we are keeping the recommendation. In addition to those named above, Cheryl Williams, James Dishmon, Stuart Ryba, Janice Turner, Jason Holliday, Barbara Johnson, Barbara El-Osta, and Carol Herrnstadt Shulman made key contributions to this report.
Bovine spongiform encephalopathy (BSE), also known as mad cow disease, has been found in cattle in 23 countries. Countries with BSE have suffered large economic losses because of declines in both beef exports and domestic beef sales. The U.S. Department of Agriculture (USDA) and the Food and Drug Administration (FDA) have primary responsibility for preventing the introduction of BSE-contaminated cattle, beef, and cattle-derived products into the United States. GAO found that FDA has not acted promptly to force firms to keep prohibited proteins out of cattle feed and to label animal feed that cannot be fed to cattle. FDA's data on inspections are severely flawed, and FDA is unaware of the full extent of industry compliance. If BSE were discovered in U.S. cattle, many consumers might refuse to buy domestic beef; beef exports could decline dramatically, as could sales in related industries, such as hamburger chains and frozen dinner manufacturers. Furthermore, some people might develop mad cow disease if infected cattle were to enter the food supply. The United States acted as many as five years earlier than did other countries to impose controls over imports of animals and animal feed ingredients from countries that had experienced mad cow disease. Similarly, U.S. surveillance efforts to test cattle brains for mad cow disease met internationally recommended testing targets earlier than did other countries. However, the United States' feed ban is more permissive than that of other countries, allowing cattle feed to contain proteins from horses and pigs. FDA is reviewing whether these ingredients should continue to be allowed in cattle feed. Finally, as in most countries that are BSE-free, cattle brains and other central nervous system tissue can be sold as human food.
Each year, FHA helps hundreds of thousands of Americans finance home purchases. Established under the National Housing Act, FHA insures private lenders against losses on mortgages for single-family homes. FHA plays a particularly large role in certain market segments, including low-income borrowers and first-time homebuyers. The loan amount that FHA can insure is based, in part, on the appraised value of the home. If a borrower defaults and the lender subsequently forecloses on the loan, the lender can file an insurance claim with HUD for nearly all of its losses, including the unpaid balance of the loan. After the claim is paid, the lender transfers the title to the home to HUD, which is responsible for managing and selling the property. Most of the mortgages are insured by FHA under its Mutual Mortgage Insurance Fund (Fund). To cover lenders’ losses, FHA deposits insurance premiums paid by borrowers in the Fund, and historically the Fund has been self-sufficient. The purpose of an FHA appraisal is to (1) determine the property’s eligibility for mortgage insurance on the basis of its condition and location and (2) estimate the value of the property for mortgage insurance purposes. In performing these tasks, the appraiser is required to identify any visible deficiencies impairing the safety, sanitation, structural soundness, and continued marketability of the property and to assess the property’s compliance with FHA’s other minimum property standards. According to HUD’s guidance, if an appraiser finds noncompliance with these standards, the appraiser should include in the appraisal report an appropriate and specific action to correct the deficiency. Private mortgage lenders making FHA-insured loans for single-family housing are required to select appraisers from FHA’s roster of about 31,500 state-licensed or -certified appraisers. In fiscal year 1998, 825,539 appraisals were performed for the purposes of FHA mortgage insurance. 
Ninety-six percent of these appraisals were for existing homes, while the remaining 4 percent were for newly constructed homes. On-site assessments of completed appraisals, known as field reviews, are HUD’s principal tool for monitoring the performance of the appraisers on FHA’s roster. In conducting a field review, a HUD staff person or contractor visits the appraised property to evaluate all aspects of the appraisal, including whether the value determination was reasonable and whether all needed repairs were identified. The field reviewer is required to document his or her findings on a standard HUD form and recommend a score using a scale from 1 to 5 (with 1 being unacceptable and 5 being excellent). As part of its 2020 Management Reform Plan announced in 1997, HUD consolidated the single-family housing activities of its 81 field offices into four HOCs, each of which is responsible for a multistate area. These activities include processing mortgage insurance and functions related to HUD’s oversight of appraisers and lenders participating in FHA’s programs. The HOCs are located in Denver, Colorado; Atlanta, Georgia; Philadelphia, Pennsylvania; and Santa Ana, California, and report directly to HUD’s Office of Insured Single-Family Housing, which is responsible for implementing FHA’s home mortgage insurance program. The consolidation of activities into the four HOCs was carried out in phases and was completed in December 1998. HUD has also established two new offices, the Real Estate Assessment Center and the Enforcement Center, which are expected to play important roles in HUD’s oversight of the FHA appraisal process. According to HUD officials, the Assessment Center’s responsibilities will include analyzing and tracking appraisal quality and appraiser performance, and the Enforcement Center’s responsibilities will include sanctioning appraisers, mortgage brokers, and lenders who do not comply with HUD’s requirements. 
On June 1, 1998, HUD announced a Homebuyer Protection Plan that outlined reforms that HUD intends to make to the FHA appraisal process. Specifically, the plan (1) requires that appraisals include a more thorough basic survey of the physical condition of homes; (2) requires lenders to inform potential homebuyers of defects found during appraisals; (3) requires appraisers to recommend complete, detailed inspections of homes if the appraisers find significant problems with the properties; (4) allows up to $300 of home inspection costs to be financed through FHA mortgages; and (5) imposes stricter accountability on appraisers and tougher sanctions on those who act improperly, including fines and potential prison sentences. HUD’s announcement did not identify a specific timetable for implementing the plan. HUD is not doing a good job of monitoring the performance of appraisers, thereby limiting the agency’s ability to assess the quality of appraisals used for FHA-insured loans. In fiscal year 1998, HUD performed about 81,000 field reviews of appraisals nationwide. However, three of the four HOCs did not meet HUD’s policy requirement to field review at least 10 percent of the FHA appraisals performed within their jurisdictions. In addition, HUD’s records for the first three quarters of fiscal year 1998 showed that over one-third of the appraisers who conducted 10 or more appraisals during that period did not have any of their work field reviewed. When field reviews were performed, many were not timely. At the HOCs we visited, we found that HUD staff did not routinely visit appraised properties to verify the work of field review contractors and that they lacked adequate systems for tracking consumers’ complaints about appraisals. In September 1997, HUD established a policy requiring its HOCs and field offices to field review no less than 10 percent of the appraisals conducted within their jurisdictions. 
HUD instituted this requirement in response to our July 1997 report, which showed that some HUD field offices were conducting few or no field reviews of appraisals. An official from HUD’s Office of Insured Single-Family Housing told us that once HUD consolidated its single-family housing activities into the four HOCs, the 10-percent standard no longer applied to HUD’s 81 field offices. Our analysis of HUD’s data showed that three of the four HOCs did not meet the 10-percent requirement in fiscal year 1998. Specifically, the Philadelphia, Denver, and Santa Ana HOCs reviewed 9.7, 8.3, and 8.1 percent of the total appraisals performed in their jurisdictions, respectively. The Atlanta HOC field reviewed 12.7 percent of the appraisals in its jurisdiction. HUD did not field review the work of thousands of appraisers who conducted 10 or more FHA appraisals during the period from October 1, 1997, through June 30, 1998. Our analysis showed that 25,560 appraisers performed FHA appraisals during that period. Of the 12,076 appraisers who performed 10 or more appraisals during that period, 4,465, or 37 percent, had not been field reviewed. Among these 4,465 appraisers, 49 percent (2,174) had conducted between 10 and 19 appraisals, 20 percent (889) had conducted between 20 and 29 appraisals, and 31 percent had conducted 30 or more appraisals. (See fig. 1.) Although HUD’s procedures do not require field reviews of higher-volume appraisers, without performance information on these individuals HUD had little assurance that they were conducting accurate and thorough appraisals. Philadelphia and Denver HOC officials told us that several factors contributed to problems with field review coverage. 
These factors included (1) HUD’s reliance on contractors to conduct field reviews and the unavailability of contract funds during the first several months of the fiscal year; (2) the reassignment of personnel during HUD’s reorganization, which, in some instances, left no one responsible for ordering field reviews; and (3) the lack of emphasis that some field offices placed on field reviews once they knew their functions would be transferred to the HOCs. HOC officials told us that they had placed a high priority on completing field reviews when they assumed this responsibility from the field offices but that they were constrained by the amount of time remaining in the fiscal year and the limitations on the number of field reviews that they could ask contractors to perform each month. As of February 1999, HUD was piloting a new process for selecting appraisals for field review. HUD plans to use statistical analysis of appraisal quality indicators (e.g., the completeness and mathematical accuracy of the appraisal report) to identify appraisals that may be problematic and, therefore, may be candidates for field review. According to HUD, this new process will allow the Department to target appraisers who may be performing poorly for field review instead of relying on the more random process now being used. Although HUD’s guidance states that timeliness is essential to ensure the quality of field reviews, a procedural change that HUD implemented in November 1997 has significantly reduced the timeliness of these reviews. Prior to November 1997, appraisers were required to send copies of their appraisal reports directly to HUD. This arrangement allowed HUD to field review some appraisals before the lenders closed on the loans and sent the remaining loan documents to HUD for approval of FHA mortgage insurance. 
However, effective November 1997, appraisers are no longer required to send their appraisal reports to HUD, and HUD does not get a copy of the appraisal report until the lender closes the loan and sends it to HUD as part of the loan case file. According to HUD officials, the intent of this change was to reduce the amount of paperwork coming into the HOCs. HUD officials also told us that planned computer system enhancements would have allowed the HOCs to receive a sample of lenders’ appraisals prior to loan closing. However, the officials said these enhancements had been delayed because of work priorities relating to year 2000 compliance issues. HUD’s records showed that half of the field reviews conducted in fiscal year 1998 were not done until at least 77 days after the appraisal had been performed. In six of HUD’s field office jurisdictions, the corresponding figure was 140 days or more. In contrast, HUD reported in fiscal year 1997 that all field reviews were being completed within 45 days of the appraisals. Philadelphia and Denver HOC officials told us that the reduced timeliness of field reviews made it difficult to prevent the approval of FHA mortgage insurance for loans based on faulty appraisals and reduced the usefulness of field review reports as a monitoring and enforcement tool. For example, of 126 field reviews conducted from October 1, 1997, through June 30, 1998, that rated the appraisals as poor, the HOCs’ records showed that HUD approved mortgage insurance for 96 of the homes that were the subjects of these reviews. In 37 of the 96 cases, the field reviews were performed after HUD had already approved mortgage insurance for the properties. We also noted one case in which the field reviewer gave the appraisal a score of 2, in part because the appraiser had overlooked several repair conditions, including areas of the foundation in need of repair and defective paint surfaces. 
A Philadelphia HOC official raised the score to a 3 because the field review was performed 5 months after the appraisal and some of the conditions needing repair could have developed during that intervening period. HUD relies primarily on licensed appraisers under contract with HUD to conduct field reviews of completed appraisals. About three-fourths of the 80,958 field reviews conducted in fiscal year 1998 were performed by contractors, while the remainder were performed by HUD staff. At HUD’s Philadelphia and Denver HOCs, we found that the staff did not routinely verify the observations of field review contractors or systematically evaluate the contractors’ performance as required. HUD’s policy guidance stresses the importance of evaluating the work of field review contractors and states that 5 percent of every contractor’s work should be reviewed and rated on a scale from 1 to 5 (with 1 being unacceptable and 5 being excellent). The purpose of this rating system is to document performance problems and justify disciplinary actions against field review contractors, if necessary. Although HUD’s guidance is unclear on this point, an official from HUD’s Office of Insured Single-Family Housing told us that the review process was supposed to include a visit by HUD staff to properties the contractors had field reviewed. Officials at both the Philadelphia and Denver HOCs told us that they rarely conducted such evaluations because they lacked sufficient staff and travel resources. They said that, as a result, they neither tracked the percentage of each contractor’s work that received an on-site review nor evaluated the contractors’ performance using the numerical rating system. According to HUD, the HOCs are currently administering over 250 small field review contracts, most of which they inherited as the field offices’ work was consolidated under the HOCs. 
Because HUD has found it difficult to monitor such a large number of contracts, the agency is planning to contract out the field review function to a small number of large appraisal firms. It also plans to have HUD staff perform quality assurance reviews of the contractors. Consumers’ complaints are another means by which HUD obtains information about the quality of the appraisals used to support FHA-insured mortgages. In a December 1997 policy memorandum, HUD’s Deputy Assistant Secretary for Single-Family Housing required the HOCs to establish written consumer complaint procedures and to maintain certain types of information about the complaints they received, including those relating to appraisals. During our visits to the Philadelphia and Denver HOCs, we found that the centers had yet to develop written complaint procedures. In October 1998, HUD officials told us that the Philadelphia HOC was developing a set of written procedures for all four HOCs to follow. We also found that the Philadelphia and Denver HOCs did not have complaint tracking systems that contained all of the information required by the December 1997 policy memorandum. Both HOCs maintained logs showing, among other things, the HOC official assigned to follow up on a complaint and the date the follow-up action was completed. However, these logs did not include other required information, such as the nature of the complaint, the actions taken to address the complaint, or the final disposition of the complaint. This information would enable the HOCs’ management to readily determine the frequency of different types of complaints and ensure that all complaints are being resolved in an appropriate manner. Contrary to HUD’s policy, most appraisers within the Philadelphia and Denver HOCs’ jurisdictions who received two or more poor ratings in field reviews during the first three quarters of fiscal year 1998 were allowed to continue performing appraisals for FHA. 
Of the 5,768 appraisers within the two HOCs’ jurisdictions who were field reviewed during this period, 246 received two or more poor field review scores. HUD prohibited only 11 of these appraisers from conducting further FHA appraisals. Poor record-keeping by HUD’s field offices and other factors hampered the HOCs’ ability to take enforcement actions against other poorly performing appraisers. HUD’s policy states that appraisers who receive two or more poor scores in field reviews during any 12-month period should be temporarily prohibited from conducting further FHA appraisals. A poor field review rating (i.e., a score of 1 or 2 on a scale of 1 to 5) indicates that the appraiser did not adequately support the value assigned to the home, overlooked serious repair conditions, or made other errors and omissions that could result in an unacceptable insurance risk to FHA. HUD’s HOCs may impose an administrative sanction, called a limited denial of participation, that excludes an appraiser from participating in FHA programs for up to a year. Our analysis of field review results recorded in HUD’s Computerized Homes Underwriting Management System (CHUMS) showed that 205 appraisers within the Philadelphia HOC’s jurisdiction and 41 appraisers within the Denver HOC’s jurisdiction received two or more poor scores in field reviews during the period from October 1, 1997, through June 30, 1998. These 246 appraisers accounted for about 19,100, or 6 percent, of the approximately 303,000 FHA appraisals performed during this period in the two HOCs’ jurisdictions. These appraisers combined had 749 field reviews in which they received scores of 1 or 2. A separate analysis by HUD’s Office of Insured Single-Family Housing indicated that this problem was not limited to the Philadelphia and Denver HOCs. 
HUD’s analysis showed that between May 1997 and May 1998, a total of 723 appraisers nationwide had received two or more poor scores in field reviews but were still active members of HUD’s appraiser roster. As of October 1, 1998, HUD had taken enforcement actions against 11 of the 246 appraisers we reviewed and prohibited them from performing FHA appraisals, in most cases for up to a year. Of the 11 enforcement actions, 5 were taken by the Philadelphia HOC, 3 by the Denver HOC, and 1 each by HUD’s Delaware, Montana, and Utah field offices. Of the appraisers we reviewed who were not subject to enforcement actions, several had received a substantial number of poor field review scores. For example, one Buffalo-area appraiser received poor scores in 9 field reviews, and a Detroit-area appraiser received poor scores in 22 field reviews. As of October 1, 1998, the two HOCs had taken enforcement actions against 12 other appraisers who were not among the 246 appraisers we reviewed. HUD’s policy is to sanction appraisers only when there is substantial evidence and documentation of performance that is less than acceptable. Philadelphia and Denver HOC officials told us that their efforts to sanction appraisers had been hampered primarily by a lack of supporting documentation. They said that other factors that impeded their enforcement efforts were the age of some of the field reviews and the lack of evidence that the appraisers had been given the chance to appeal the poor field review ratings. At the Philadelphia HOC, we reviewed the files for 72 of the 205 appraisers who received two or more poor field review scores, including at least one score of 1, during the period from October 1, 1997, through June 30, 1998, to determine the basis for these scores. At the Denver HOC, we reviewed the files for all 41 appraisers who received two or more poor field review scores during the same period. HUD’s field offices began transferring these files to the HOCs in February 1998. 
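HUD's sanction threshold, as described above, is a simple rule: two or more poor field review scores (a 1 or 2 on the 1-to-5 scale) within any 12-month period should trigger a temporary prohibition on further FHA work. The sketch below expresses that screening rule in code; it is purely illustrative, with an invented function name and invented data, and does not reflect HUD's actual CHUMS logic.

```python
from datetime import date

# Hypothetical illustration of the policy threshold: flag any appraiser
# with two or more poor scores (1 or 2) within a rolling 12-month window.
POOR_SCORES = {1, 2}

def flag_for_sanction(reviews, window_days=365):
    """reviews: iterable of (appraiser_id, review_date, score) tuples.
    Returns the set of appraiser ids meeting the two-poor-scores test."""
    poor_dates = {}
    for appraiser, day, score in reviews:
        if score in POOR_SCORES:
            poor_dates.setdefault(appraiser, []).append(day)
    flagged = set()
    for appraiser, days in poor_dates.items():
        days.sort()
        # If any two poor scores fall within 365 days of each other,
        # some consecutive pair in sorted order is within the window.
        for earlier, later in zip(days, days[1:]):
            if (later - earlier).days <= window_days:
                flagged.add(appraiser)
                break
    return flagged

# Invented sample data for the sketch.
reviews = [
    ("A-100", date(1997, 11, 3), 2),
    ("A-100", date(1998, 4, 20), 1),   # second poor score within a year
    ("A-200", date(1997, 10, 15), 2),  # only one poor score on record
    ("A-200", date(1998, 12, 1), 4),
    ("A-300", date(1998, 2, 2), 5),
]

print(flag_for_sanction(reviews))  # prints {'A-100'}
```

Checking only consecutive pairs of sorted dates is sufficient, since any two poor scores within the window imply an adjacent pair at least as close together.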
We found at both the Philadelphia and Denver HOCs that most of the field review reports that supported the poor field review scores recorded in HUD’s CHUMS were not in the appraisers’ files. At the Philadelphia HOC, we found that 196, or 65 percent, of the 301 poor ratings were not documented by field review reports in the files. As a result, the HOC’s files contained documentation of two or more poor scores for just 31 of the 72 appraisers we reviewed. For 8 of those 31 appraisers, the documentation showed that HUD officials had raised one or more of the field review scores, with the result that these appraisers no longer had two or more poor scores for the period we reviewed. At the Denver HOC, we found that 66 of the 101 poor ratings were not documented by field review reports in the files. Consequently, the HOC had documentation of two or more poor scores for only 16 of the 41 appraisers we reviewed. HOC officials told us that the appraiser files they had received from certain HUD field offices were incomplete, reflecting the poor record-keeping and lax enforcement efforts of these offices before and during the consolidation of HUD’s single-family housing activities. Philadelphia and Denver HOC officials told us that they would continue to monitor the performance of appraisers who had received poor scores in the past. Both HOCs have established appraiser files to document and maintain the results of field reviews and are developing computerized information systems to track appraisers’ field review scores, in accordance with HUD’s policy guidance. Of the 126 field review reports we found in the two HOCs’ files that assigned poor scores to the appraisers we reviewed, 76, or 60 percent, cited problems with the appraisers’ valuation of the properties. Figure 2 shows the percentage of the field review reports that cited certain types of deficiencies in the appraisals. In most cases, the field reviews found more than one type of deficiency in each appraisal. 
As part of its Homebuyer Protection Plan, HUD is revising its guidance for sanctioning appraisers. The guidance includes a matrix that shows the appropriate enforcement actions, including civil and criminal penalties, associated with various infractions of HUD’s appraisal policies and standards. The HOCs and HUD’s Enforcement Center will share the responsibility for taking enforcement actions against appraisers. According to HUD, the process of issuing limited denials of participation to remove appraisers from FHA’s roster can be difficult and time-consuming. As a result, HUD is drafting regulations that, if approved, would enable its HOCs to remove poorly performing appraisers from FHA’s roster more easily. HUD’s policy is that lenders are responsible, equally with the appraisers they select, for the accuracy and thoroughness of appraisals. HUD has not aggressively enforced this policy because of disagreement within HUD over its authority to do so. In May 1998, the Philadelphia HOC requested that HUD’s Mortgagee Review Board sanction a lender who refused to correct property deficiencies that the appraiser had overlooked. This was the first case of this type that had been referred to the Board. However, the Board never reviewed or acted on this request because the Board’s staff did not believe that HUD had the authority to hold a lender accountable for the quality of an appraisal simply because the lender selected the appraiser. As a result, the HOCs have been reluctant to refer similar cases to the Board. In October 1994, HUD issued regulations implementing a legislative provision that allowed lenders to choose the appraisers of properties to be insured by FHA. While the legislation did not address this issue, HUD’s regulations stated that lenders who selected their own appraisers were equally responsible, along with the appraisers, for the accuracy, integrity, and thoroughness of the appraisals. 
In May 1996, HUD repealed these regulations as part of a larger federal effort to reduce regulations. According to HUD, the regulations were not necessary because many of the standards in the regulations were already in HUD’s handbook guidance and mortgagee letters issued to lenders. HUD issued mortgagee letters to lenders in November 1994 and again in May and November of 1997 that reiterated its policy that lenders were equally responsible for the quality of appraisals. Also, in a December 1997 policy memorandum, HUD’s Deputy Assistant Secretary for Single-Family Housing instructed HUD staff that in cases in which appraisers missed serious repair conditions or significantly overvalued properties, HUD should request that the lenders who selected the appraisers pay for the needed repairs or pay down the mortgages by the amounts the properties were overvalued. The Deputy Assistant Secretary also indicated that the failure of a lender to voluntarily resolve the appraisal deficiencies raised by HUD would result in enforcement action against the lender, including probation and suspension. In September 1997, a Pennsylvania homeowner complained to the Philadelphia HOC that an independent inspection of her FHA-insured home had found numerous violations of FHA’s minimum property standards that she believed should have been identified by the appraiser. A subsequent HUD field review confirmed that the appraiser had missed repairs that were necessary to correct health and safety problems with the home. In January 1998, the Philadelphia HOC temporarily suspended the appraiser and prohibited him from taking further FHA appraisal assignments for 90 days. In addition, the HOC sent a letter to the Pennsylvania mortgage company that had selected the appraiser, requesting that the lender either make approximately $7,500 in repairs to the home or prepay the mortgage by that amount. 
In April 1998, attorneys for the lender informed HUD by letter that the lender had declined to pay for the repairs or prepay the mortgage. Among other things, the letter stated that (1) the lender did not know the appraiser had performed the appraisal in an unsatisfactory manner; (2) there was no basis to believe that the lender should have known about the unsatisfactory nature of the appraisal; (3) there was no financial tie, business affiliation, or conflict of interest between the lender and the appraiser; and (4) HUD did not have the authority to hold lenders responsible for the acts, errors, or omissions of independent appraisers. Because of the lender’s refusal to make the repairs or to prepay the mortgage as requested, the Philadelphia HOC in May 1998 referred the case to the Mortgagee Review Board for appropriate action against the lender. The Board is the entity within HUD that can impose administrative sanctions against a lender or withdraw a lender’s authority to make FHA-insured loans. However, the Board never reviewed this case. In discussing this case with the Board’s Secretary and the Deputy Chief Counsel for HUD’s Enforcement Center, we were told that the Board’s staff did not forward the HOC’s referral to the Board because the staff did not believe that HUD had the authority to hold lenders liable for the actions of independent appraisers simply because the lenders had selected the appraisers. According to the Deputy Chief Counsel, the Philadelphia HOC had no authority to assess the lender $7,500 because this constituted a civil penalty against the lender and only the Board had the authority to assess such penalties. While the Deputy Chief Counsel noted that there were circumstances in which HUD could hold lenders accountable for the work of appraisers, both he and the Board’s Secretary indicated that HUD’s policy, as written, was improperly attempting to hold lenders absolutely liable for the work of appraisers selected by the lenders. 
The Board’s Secretary told us that he would soon draft a response to the Philadelphia HOC about the referral, but as of January 1999, no response had been prepared. The Director of the Philadelphia HOC told us that he would like the Board to either sustain HUD’s policy of holding lenders responsible for appraisals or rule that HUD does not have such authority. He said that the HOC had two other cases that it would like to refer to the Board, including one in which another lender had also refused the HOC’s request to pay for repair conditions missed by the appraiser. The Director said he saw no benefit in forwarding other cases to the Board until it has made a decision on the HOC’s first referral. In October 1998, officials at the Denver HOC told us that they had not referred any lenders to the Board for using poor-quality appraisals, in part, because it was difficult to know where to lay the blame in such cases and that the issue had not been tested in court. They said a lender would vigorously fight any sanctions imposed on it for relying on a faulty appraisal because of the precedent such an action would set. HUD has limited assurance that the appraisers on FHA’s roster are knowledgeable of FHA’s appraisal requirements. Appraisers must be state-licensed or -certified to qualify for FHA’s appraiser roster, but the states’ minimum licensing standards do not require expertise in conducting FHA appraisals. HUD is revising its appraisal guidance and forms and is adopting a testing requirement for appraisers. To be eligible for FHA’s roster, appraisers must be state-licensed or -certified in accordance with the minimum criteria established by the Appraiser Qualifications Board of the Appraisal Foundation. 
The Qualifications Board’s minimum licensing criteria require that appraisers have 90 hours of classroom education in subjects related to real-estate appraisals, have 2,000 hours of appraisal experience, and pass the Qualifications Board’s endorsed examination or an equivalent examination. To be placed on FHA’s roster, an appraiser must submit an application and a copy of his or her license or certification to the HOC within whose jurisdiction the appraiser intends to work. The appraiser must certify on the application form that he or she has read or will read HUD’s handbook on valuation analysis before accepting an FHA appraisal assignment. HUD’s CHUMS contains licensing information for the appraisers on FHA’s roster. Our analysis of the appraisal license expiration dates in CHUMS indicated that the approximately 31,500 appraisers on FHA’s roster as of August 1998 held current licenses or certifications. In addition, using a national database maintained by the Appraisal Subcommittee of the Federal Financial Institutions Examination Council, we confirmed that the 246 Philadelphia and Denver HOC appraisers who received poor field review scores held current licenses. At the time HUD adopted its procedures for allowing lenders to select their own appraisers in 1994, it recognized that appraisers would need training in FHA’s appraisal requirements and procedures. Unlike appraisals for conventional mortgages, appraisals for FHA-insured mortgages must include an assessment of the properties’ compliance with FHA’s property standards as well as appropriate and specific actions to correct conditions not in compliance with these standards. In addition, the value an appraiser assigns to a property must reflect its value with all the required repairs completed. While HUD encouraged its field offices and local appraiser and lender associations to sponsor training in FHA appraisals, it decided not to make training a condition for placement on FHA’s roster. 
HUD decided to rely instead on lenders’ selecting only knowledgeable appraisers and on appraisers’ not accepting appraisal assignments that they were not competent to perform. In conjunction with its Homebuyer Protection Plan, HUD is developing a new appraisal report to record the results of appraisals. HUD believes that this report will provide more information about the physical condition of the appraised property than HUD’s current appraisal forms and will allow the appraiser to better identify health and safety hazards and structural problems that may require repairs. The new report lists specific physical conditions that the appraiser should check for and requires the appraiser to recommend whether a complete home inspection or some other type of inspection (e.g., electrical, roofing, or structural) should be conducted. HUD will require lenders to provide a summary of the report to homebuyers so that homebuyers will have information about needed repairs and recommended inspections. HUD has also drafted revised handbook guidance for appraising single-family homes. The handbook updates and consolidates information currently fragmented among numerous HUD handbooks and mortgagee letters. The draft handbook clarifies the roles and responsibilities of the appraiser, outlines protocols for appraisers to follow when conducting FHA appraisals, and specifies sanctions HUD will take against poorly performing appraisers. HUD expects to finalize and issue the handbook in April 1999. HUD is also in the process of adopting a requirement that appraisers pass a test on FHA appraisal requirements and procedures to be eligible to perform FHA appraisals. HUD plans to begin testing in June 1999. The importance of appraisals to FHA and prospective homeowners underscores the need for effective oversight of the appraisal process. 
FHA relies on appraisals to ensure that the billions of dollars in mortgage loans it insures annually accurately reflect the value of the homes being purchased. FHA homebuyers rely on appraisals, in part, to avoid buying homes with major defects that are costly to fix. However, weaknesses in HUD’s oversight of the FHA appraisal process have increased FHA’s risk of insuring properties that are overvalued or whose owners may default on their FHA-insured loans because of unexpected repair costs. The consequence of this increased risk is higher potential losses to FHA’s insurance fund. HUD could significantly improve its monitoring of appraisers. HUD has not ensured that the HOCs are meeting the agency’s requirements to field review 10 percent of all FHA appraisals. Also, HUD’s procedures do not target for field review appraisers who perform significant numbers of FHA appraisals. In addition, a procedural change by HUD has made field reviews less timely, with the result that HUD did not learn of problems with certain appraisals until after HUD had already approved mortgage insurance on the properties. Moreover, the two HOCs we visited did not regularly verify the work of field review contractors through on-site evaluations and lacked tracking systems necessary to readily determine the nature, frequency, and resolution of complaints from FHA homebuyers. These problems weaken HUD’s ability to accurately assess the quality of the appraisals used to support the loans FHA insures. HUD’s ability to sanction poorly performing appraisers was seriously impaired by the loss or misplacement of records prior to and during HUD’s field consolidation. Consequently, hundreds of appraisers whose work may be creating an unreasonable underwriting risk for FHA continue to conduct appraisals for FHA-insured mortgages. However, the two HOCs we visited have taken steps toward enforcing FHA’s performance standards for appraisers. 
HUD has not resolved internal disagreements about its authority and policy to hold lenders accountable for poor-quality appraisals. As a result, it has not aggressively enforced this policy. By not resolving this issue, HUD is sending a confusing message to both lenders and FHA borrowers about who is responsible for the quality of appraisals and what remedies exist when an appraisal is unsatisfactory. HUD’s reliance on the states’ licensing process and self-certification provides limited assurance that the appraisers on FHA’s roster are knowledgeable about FHA’s appraisal requirements. The states’ minimum licensing standards do not require proficiency in FHA’s guidelines, and HUD is considering, but has not implemented, its own testing requirement. HUD’s revision of its appraisal guidance and forms and its plans to test appraisers on their knowledge of FHA appraisal requirements are likely to help appraisers perform their work in accordance with FHA standards. To reduce the financial risks assumed by FHA and to improve HUD’s oversight of appraisers on FHA’s roster, we recommend that the Secretary of HUD direct the Assistant Secretary for Housing-Federal Housing Commissioner to achieve better field review coverage of FHA’s appraiser roster by (1) ensuring that each HOC field reviews the required percentage (currently 10 percent) of the FHA appraisals conducted annually within its geographic jurisdiction and (2) requiring that when selecting appraisals for field review, HUD staff give higher priority to the work of appraisers who have done a substantial number of FHA appraisals but have not been field reviewed within the past year; make field reviews of appraisals more timely by establishing a process to ensure that HUD staff obtain copies of appraisal reports and perform field reviews prior to FHA’s approval of mortgage insurance; and better assess the quality of appraisal field reviews by ensuring that a portion of each field review contractor’s work is verified
through on-site evaluation of properties field reviewed by the contractor. To improve HUD’s oversight of lenders participating in FHA’s programs, we recommend that the Secretary of HUD (1) determine the Department’s authority to hold FHA-approved lenders accountable for poor-quality FHA appraisals performed by the appraisers they select from FHA’s roster and (2) issue policy guidance that sets forth the specific circumstances under which and actions by which HUD may exercise this authority. We provided a draft copy of this report to HUD for its review and comment. In its letter commenting on the report, HUD said that the report did not describe the changes the Department had made to FHA’s single-family mortgage insurance programs. HUD indicated that, prior to our report, FHA management had already identified appraisal quality as an area needing improvement and had announced a Homebuyer Protection Plan to address this problem. Because the report contains ample discussion of the Department’s Homebuyer Protection Plan and other steps HUD has taken to improve the FHA appraisal process, we did not make any changes to the report. In commenting on our recommendation that HUD achieve better field review coverage of FHA’s appraiser roster, HUD indicated that it will implement a revised field review process by July 1, 1999, that will improve the Department’s sampling and targeting of appraisers for field review. In response to our recommendation that HUD conduct on-site evaluations of a portion of each field review contractor’s work, HUD indicated that it would begin performing supervisory reviews of field review contractors in conjunction with a national field review contract scheduled to begin in July 1999. Regarding our recommendation that HUD determine its authority to hold FHA-approved lenders accountable for poor-quality appraisals, HUD responded that it would target for monitoring those lenders that used poorly performing appraisers. 
Because HUD’s response did not address the Department’s authority to hold FHA-approved lenders accountable for poor-quality appraisals, we believe that HUD still needs to clarify this matter and issue policy guidance that reflects this clarification. HUD disagreed with our recommendation to improve the timeliness of appraisal field reviews by obtaining copies of the appraisal reports and performing field reviews prior to loan closings and the approval of FHA mortgage insurance. HUD indicated that the collection of all appraisals and the performance of field reviews before the approval of mortgage insurance would be impractical and inconsistent with HUD’s Direct Endorsement Program, which allows qualified mortgagees to process and close FHA loans without prior review by HUD. We modified this recommendation to reflect the fact that it may be difficult for HUD to field review appraisals before the lenders close on the loans. However, we continue to believe that it would be feasible for HUD to field review, in advance of approving mortgage insurance, those appraisals that the Department has selected for field review. For example, HUD could require lenders to submit copies of selected appraisal reports immediately after the Department makes the selections rather than waiting for the lenders to include the appraisal reports as part of the loan files sent to HUD prior to the endorsement of mortgage insurance. We believe that such a procedure would not infringe on the underwriting responsibilities of Direct Endorsement lenders and would improve the quality and usefulness of field reviews by (1) significantly reducing the time elapsed between appraisals and the field reviews of those appraisals and (2) reducing HUD’s risk of insuring mortgages based on faulty appraisals. The full text of HUD’s letter is presented in appendix I. We conducted our work at HUD’s headquarters and its Philadelphia and Denver HOCs. 
Together, the two HOCs account for about half of FHA’s loan activity for single-family housing. We interviewed officials from HUD’s Office of Insured Single-Family Housing, Real Estate Assessment Center, Enforcement Center, Mortgagee Review Board, and Philadelphia and Denver HOCs. We reviewed laws, regulations, mortgagee letters, and other documents related to the FHA appraisal process and developed information on HUD’s procedures for monitoring appraisers, overseeing field review contractors, and handling consumers’ complaints. We analyzed data from HUD’s CHUMS for information on the currency of appraisal licenses for appraisers on FHA’s roster and the number of appraisers who received two or more poor scores in field reviews during the first three quarters of fiscal year 1998. We reviewed HOCs’ files for documentation of field review scores, information on enforcement actions against appraisers and lenders, and information on the nature of consumers’ complaints. Appendix II provides additional details on our scope and methodology. We performed this review from May 1998 through April 1999 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to Representative Barney Frank, Ranking Minority Member, House Subcommittee on Housing and Community Opportunity; Representative James A. Leach, Chairman, and Representative John J. LaFalce, Ranking Minority Member, House Committee on Banking and Financial Services; and Senator Phil Gramm, Chairman, and Senator Paul S. Sarbanes, Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs. We will also send copies of this report to The Honorable Andrew M. Cuomo, Secretary of HUD; The Honorable William C. Apgar, HUD Assistant Secretary for Housing-Federal Housing Commissioner; and The Honorable Jacob J.
Lew, Director, Office of Management and Budget. We will make copies available to others upon request. Please call me at (202) 512-7631 if you or your staff have any questions. Major contributors to this report were Paul Schmidt, Steve Westley, Jackie Garza, Stan Ritchick, Mitch Karpman, and John McGrail. Our objectives were to answer the following questions: (1) How well is the Department of Housing and Urban Development (HUD) monitoring the performance of the appraisers on its roster and implementing procedures for addressing consumers’ complaints about Federal Housing Administration (FHA) appraisals? (2) To what extent is HUD holding appraisers accountable for poor-quality FHA appraisals? (3) To what extent is HUD holding lenders responsible for the quality of the FHA appraisals they use? (4) How does HUD ensure that the appraisers on its roster are qualified to perform FHA appraisals? To assess how well HUD was monitoring the performance of appraisers, we reviewed pertinent HUD handbook and policy guidance and discussed this information with officials from HUD’s Office of Insured Single-Family Housing. We reviewed HUD appraisal and field review data for fiscal year 1998 and determined the extent to which HUD’s four homeownership centers (HOC) field reviewed at least 10 percent of their appraisals, as required by HUD. We analyzed field review data in HUD’s Computerized Homes Underwriting Management System (CHUMS) to determine, for the first 9 months of fiscal year 1998, how many appraisers nationwide who conducted 10 or more appraisals were subject to at least one field review and how many did not have any of their work field reviewed during the period. In addition, we reviewed CHUMS data for fiscal year 1998 on the median amount of time elapsed between appraisals and the field reviews of those appraisals. We interviewed Denver and Philadelphia HOC officials about factors affecting their ability to monitor appraisers and oversee field review contractors.
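The 10-percent coverage test described above can be sketched in a few lines. This is a hypothetical illustration only: the function name, record layout, and HOC totals are assumptions for demonstration, not HUD's actual data or systems.

```python
# Hedged sketch of the coverage test: for each HOC, compare field reviews
# performed against HUD's requirement to field review at least 10 percent
# of the FHA appraisals conducted within its jurisdiction.
# All names and figures below are illustrative, not actual HUD data.

FIELD_REVIEW_FLOOR = 0.10  # HUD's minimum field review rate


def meets_coverage(appraisals_done: int, field_reviews_done: int) -> bool:
    """Return True if the HOC field reviewed at least 10% of its appraisals."""
    return field_reviews_done >= FIELD_REVIEW_FLOOR * appraisals_done


# Illustrative (not actual) HOC totals: (appraisals, field reviews)
hocs = {
    "HOC A": (300_000, 25_000),   # below the 10-percent floor
    "HOC B": (200_000, 22_000),   # meets the floor
}

for name, (appraisals, reviews) in hocs.items():
    status = "meets requirement" if meets_coverage(appraisals, reviews) else "below 10 percent"
    print(f"{name}: {status}")
```

A check like this, run per HOC and fiscal year, is one simple way to express the compliance test GAO applied; the actual analysis was performed against CHUMS data.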
We also discussed with Real Estate Assessment Center officials the planned changes to HUD’s procedures for tracking and evaluating the performance of appraisers. In addition, we interviewed Denver and Philadelphia HOC officials responsible for handling FHA consumer complaints and reviewed consumer complaint logs and files maintained by the centers. To determine the extent to which HUD was holding appraisers accountable for poor-quality appraisals, we reviewed HUD’s guidance regarding enforcement actions against poorly performing appraisers. For the period from October 1, 1997, through June 30, 1998, we examined the field review data in HUD’s CHUMS for the appraisers working in the Philadelphia and Denver HOCs’ jurisdictions and identified those appraisers who received two or more poor scores (i.e., scores of 1 or 2 on a scale of 1 to 5) in field reviews during that period. At the Philadelphia and Denver HOCs, we reviewed the files for the 72 and 41 appraisers, respectively, who fell into that category. In reviewing these files, we determined (1) whether the poor field review scores recorded in CHUMS were documented in field review reports and (2) whether the HOCs had prohibited the appraisers from conducting further FHA appraisals. We interviewed officials at the Denver and Philadelphia HOCs about factors that affected their ability to sanction poorly performing appraisers. To determine the extent to which HUD was holding lenders responsible for the quality of the FHA appraisals they used, we reviewed pertinent legislation, HUD regulations, mortgagee letters, and policy guidance. We also reviewed correspondence between HUD and mortgage lenders regarding specific cases of faulty appraisals. In addition, we interviewed officials from HUD’s Denver and Philadelphia HOCs and from its Mortgagee Review Board and Enforcement Center about HUD’s authority to hold lenders accountable for poor-quality appraisals. 
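The selection rule described above, flagging appraisers with two or more poor field review scores (1 or 2 on a scale of 1 to 5) within the review period, can be expressed as a short filter. The field names and record format below are assumptions for illustration; they are not CHUMS's actual layout.

```python
# Hedged sketch of the selection criterion: from field review records,
# flag appraisers who received two or more "poor" scores (1 or 2 on a
# 1-to-5 scale) between October 1, 1997, and June 30, 1998.
# Record format (appraiser_id, review_date, score) is assumed, not CHUMS's.
from collections import Counter
from datetime import date

POOR_SCORES = {1, 2}
PERIOD_START, PERIOD_END = date(1997, 10, 1), date(1998, 6, 30)


def flag_poor_performers(reviews):
    """reviews: iterable of (appraiser_id, review_date, score) tuples."""
    poor_counts = Counter(
        appraiser_id
        for appraiser_id, review_date, score in reviews
        if PERIOD_START <= review_date <= PERIOD_END and score in POOR_SCORES
    )
    # HUD policy: two or more poor scores should bar further FHA assignments.
    return {appraiser for appraiser, n in poor_counts.items() if n >= 2}


# Illustrative records (hypothetical appraiser IDs):
sample = [
    ("A-1", date(1998, 1, 5), 2),   # poor, in period
    ("A-1", date(1998, 3, 9), 1),   # poor, in period -> A-1 is flagged
    ("A-2", date(1998, 2, 2), 4),   # acceptable score
    ("A-3", date(1997, 5, 1), 1),   # poor, but outside the review period
]
print(flag_poor_performers(sample))
```

Appraisers the filter returns would then be checked against the HOC files for supporting field review reports, since, as the report notes, many poor scores recorded in CHUMS lacked documentation.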
To determine how HUD ensures that appraisers on FHA’s roster are qualified, we reviewed pertinent HUD regulations and policy guidance and the minimum licensing criteria established by the Appraiser Qualifications Board of the Appraisal Foundation. We interviewed officials from HUD’s Office of Single-Family Housing and its Real Estate Assessment Center and reviewed revised appraisal guidance being developed by HUD for information on the changes planned to HUD’s appraiser eligibility requirements. We analyzed appraiser license expiration dates in HUD’s CHUMS to determine whether the approximately 31,500 appraisers on FHA’s appraiser roster as of August 1998 held current appraiser licenses. We also verified the licensing information in CHUMS for the 246 appraisers in the Philadelphia and Denver HOCs’ jurisdictions who had received two or more poor scores in field reviews during the period from October 1, 1997, through June 30, 1998, with licensing data maintained by the Appraisal Subcommittee of the Federal Financial Institutions Examination Council. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Federal Housing Administration's (FHA) appraisal process, focusing on: (1) how well the Department of Housing and Urban Development (HUD) is monitoring the performance of the appraisers on its roster and implementing procedures for addressing consumers' complaints about FHA appraisals; (2) the extent to which HUD is holding appraisers accountable for poor-quality FHA appraisals; (3) the extent to which HUD is holding lenders responsible for the quality of the FHA appraisals they use; and (4) how HUD ensures that appraisers on its roster are qualified to perform FHA appraisals. GAO noted that: (1) HUD is not doing a good job of monitoring the performance of appraisers; (2) on-site evaluations of completed appraisals, known as field reviews, are HUD's principal tool for assessing the quality of appraisers' work; (3) in fiscal year (FY) 1998, HUD performed about 81,000 of these reviews, but three of the four HUD homeownership centers (HOC) did not meet HUD's requirement to field review no less than 10 percent of the FHA appraisals performed within their jurisdictions; (4) although HUD's guidance states that timeliness is essential to ensure quality field reviews, half of the field reviews conducted in FY 1998 did not occur until more than 2 months after the appraisals had been performed; (5) moreover, HUD did not learn about problems with some appraisals until after it had already approved mortgage insurance for the properties; (6) the Philadelphia and Denver HOCs' records for 126 field reviews that rated the appraisals as poor showed that HUD approved mortgage insurance for 96 of the homes covered by these reviews; (7) HUD staff did not routinely visit appraised properties to determine the accuracy of the field review contractors' observations; (8) the Philadelphia and Denver HOCs did not fully implement guidance on the handling and tracking of consumers' complaints, including those relating to appraisals; (9) HUD 
is not holding appraisers accountable for the quality of their appraisals; (10) contrary to HUD's policy, appraisers who received two or more poor ratings in field reviews were frequently not prohibited from conducting further FHA appraisals; (11) a poor field review score indicates that the appraiser made errors and omissions that could result in an unacceptable insurance risk to FHA; (12) HUD has not aggressively enforced its policy to hold lenders equally accountable with the appraisers they select for the accuracy and thoroughness of appraisals because of a disagreement within HUD over its authority to do so; (13) HUD has limited assurance that the appraisers on its roster are knowledgeable about FHA's appraisal requirements; (14) HUD relies largely on the states' licensing process to ensure that appraisers are qualified, but the states' minimum licensing standards do not include proficiency in FHA's appraisal requirements; and (15) HUD is revising its appraisal guidance and forms to better clarify the roles and responsibilities of appraisers and is adopting a testing requirement for appraisers to ensure their competency in FHA's appraisal standards.